Rackspace again has a new name. The company says the change isn’t reflective of an identity crisis, but rather to illustrate that it can help organizations satisfy wide-ranging cloud needs.
Now known as Rackspace Technology, the company intends the rebranding — the fifth in its 22-year history — to demonstrate that it is more than a managed hosting provider, according to chief solutions officer Matt Stoyka.
“It helps differentiate us while holding onto the Rackspace legacy,” Stoyka said. “It’s a great way to encapsulate our solution mindset.”
As part of its rebranding, Rackspace Technology also this month launched four multi-cloud services designed to give its customers a “comprehensive” way to advance their digital transformation goals, Stoyka said. The four services aren’t entirely new to the company, but Rackspace Technology is now emphasizing them to let potential clients know what they’ve been missing, he said.
“We have these capabilities, but we are working very hard to have more clear messages to customers and put emphasis on the tremendous success we have had,” Stoyka said.
According to Stoyka, the four multi-cloud services are the following:
Cloud Optimization, advisory services that aim to help customers improve cloud costs and performance in a changing market;
Cloud Security, which provides expertise on how to maintain cloud and enterprise compliance;
Data Modernization, services that help companies make the most of AI, machine learning and predictive analytics, so they can improve operations and monetize their data, he said.
Rackspace Technology still offers managed public and private cloud services, but companies increasingly want to create a multi-cloud infrastructure to shift workloads and data to specific clouds that match their needs, Stoyka said.
Founded in 1998, the company has been known as Rackspace.com, Rackspace Managed Hosting, Rackspace Hosting and, most recently, Rackspace. The San Antonio company last year purchased Onica, an AWS partner, and in 2018 acquired RelationEdge, a Salesforce consulting partner.
The dozen acquisitions in Rackspace Technology’s history have made the company what it is today, Stoyka said, but the Onica and RelationEdge deals set up the company to focus on advising organizations how to make full use of cloud. Rackspace Technology, he said, is incorporating the best ideas of those two companies’ consultative and advisory practices to inform its strategies.
Commvault reached outside the company for the new leader of its Metallic SaaS-based subsidiary.
Manoj Nair said he will look to expand Metallic’s platform support and take it worldwide in his role as general manager.
Metallic initially launched only in the U.S. in 2019. Along with Nair’s appointment as the new GM, Metallic was made available in Canada last week. Nair said he is targeting Western Europe, the Middle East and the Asia-Pacific region, but declined to give a timeline for rolling out Metallic in those areas.
Before taking the top post at Commvault’s SaaS-based subsidiary, Nair was CEO of HyperGrid, which sells SaaS-based hyper-converged infrastructure. Nair has also worked in other areas adjacent to data protection, for Hewlett Packard Enterprise, Dell EMC and RSA Security.
Nair replaced Robert Kaloustian, who stepped down from the position for personal reasons. Kaloustian, a 20-year Commvault veteran, served as SVP of global sales engineering, service and support before taking over Metallic at its 2019 launch.
Metallic launched at the Commvault GO 2019 user conference as a division of Commvault — minus the Commvault branding. The idea was to sell to midmarket customers by offering Commvault software’s backup capabilities through a SaaS-based model hosted on Microsoft Azure. It launched with three products: Core Backup & Recovery, Office 365 Backup & Recovery and Endpoint Backup & Recovery.
Nair said Metallic is seeking good local service providers to partner with and navigating local regulations. He said each country has its own, sometimes conflicting, policies on data. In some places, language is also a consideration. Nair said he plans to approach each regional launch with a regimented process of beta and availability.
“We are getting a lot of demand globally, but we’re taking our time to make sure we’re compliant,” Nair said.
Metallic most directly competes with other cloud-based SaaS backup vendors such as Druva, Clumio and Igneous Systems. Commvault competes in the broader enterprise data protection market against vendors such as Veritas, Cohesity and Rubrik.
Nair said he is also looking to expand Metallic beyond backup and recovery, and to add a fourth product. He plans to bring Commvault Activate’s analytics capabilities to the platform, and expects to add another product within the next three to six months.
Christophe Bertrand, senior analyst at Enterprise Strategy Group, sees Nair’s appointment as building on the previous GM’s good work. He said it is a logical move from Commvault, as Metallic was designed from the ground up to be partner-focused and is now recruiting partners to build out its channel for a geographical expansion.
“They’re getting a new leader to put some gas into the engine they built,” Bertrand said.
The COVID-19 pandemic caused a surge in Microsoft Office 365 adoption and endpoint deployments, and Nair said he’s noticed a corresponding uptick in demand from Metallic customers. It’s all the more reason for Metallic to make its global push now and try to meet that demand. Bertrand said the Commvault of old, before current CEO Sanjay Mirchandani joined, would have been unlikely to make changes and pivot as quickly as the company has.
Another difference Bertrand noted about the “new” Commvault is that it is no longer trying to be everything all the time. He said Commvault seems keenly aware that Metallic’s product lineup is fine where it is now, and so the focus should be on selling worldwide while demand is hot. Metallic will need to add workloads over time, but it will be driven by what workloads customers actually adopt.
“Commvault is much more focused and smart about choosing what not to develop now,” Bertrand said.
Zoom has outlined a four-phase plan for implementing end-to-end encryption. But the company will face hurdles as it attempts to add the security protocol to its video conferencing service.
Each phase of the plan will improve security but leave vulnerabilities that Zoom plans to address in the future. However, the company’s draft white paper provides less detail about the later stages of the project.
“This is complex stuff,” said Alan Pelz-Sharpe, founder of research and advisory firm Deep Analysis. “You can’t just plug and play end-to-end encryption.”
Zoom has not said when end-to-end encryption will launch or who will get access to it. At least initially, the service will likely be available only to paid customers.
The goal of the effort is to give users control of the keys used to decrypt their communications. That would prevent Zoom employees from snooping on conversations or from letting law enforcement agencies do the same.
Zoom previously advertised its service as end-to-end encrypted. But in April, the company acknowledged that it wasn’t using the commonly understood definition of that term. The claim has provided fodder for numerous class-action lawsuits.
The first phase of the plan will change Zoom’s security protocol so that users’ clients — not Zoom’s servers — generate encryption keys. The second phase will more securely tie those keys to individual users through partnerships with single sign-on vendors and identity providers.
The third step will give customers an audit trail to verify that neither Zoom nor anyone else is circumventing the system. And the fourth will introduce mechanisms for detecting hacks in real time.
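The core idea of the first phase — key material generated on users’ clients, so the server relaying traffic never holds what it would need to decrypt — can be illustrated with a toy Diffie-Hellman exchange. This is a sketch only: the parameters below are illustrative stand-ins, not secure values, and Zoom’s actual design is far more elaborate.

```python
import secrets

# Toy parameters (assumption: stand-ins for a real, large DH group — NOT secure).
P = 0xFFFFFFFB  # small prime modulus
G = 5           # generator

def make_keypair():
    """Generate a keypair locally on the client; the private half never leaves it."""
    private = secrets.randbelow(P - 2) + 2   # stays on the client
    public = pow(G, private, P)              # safe to relay through the server
    return private, public

# Two meeting participants each generate keys on their own devices.
alice_priv, alice_pub = make_keypair()
bob_priv, bob_pub = make_keypair()

# The server only ever sees the public values; each side derives the
# same shared secret from its own private key and the peer's public key.
shared_alice = pow(bob_pub, alice_priv, P)
shared_bob = pow(alice_pub, bob_priv, P)

assert shared_alice == shared_bob  # both sides agree without the server's help
```

Because the server relays only public values, a compromised (or subpoenaed) server cannot reconstruct the shared secret — which is exactly the property the client-generated-keys phase is meant to establish.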
One weakness is the scheme’s reliance on single sign-on vendors and identity providers to match encryption keys to users. That will leave customers that don’t use those services less secure, potentially increasing the risk of meddler-in-the-middle attacks.
Zoom also won’t be able to apply the protocol to all endpoints. Excluded clients include Zoom’s web app and room systems that use SIP or H.323. Zoom also can’t encrypt audio connections made through the public telephone network end to end.
Turning on end-to-end encryption will disable certain features. Users won’t be able to record meetings or, at least initially, join before the host. These limitations are typical of end-to-end encryption schemes for video communications.
Engineers from the messaging and file-sharing service Keybase are leading Zoom’s encryption effort. Zoom acquired Keybase in early May as part of its effort to improve security and privacy.
Zoom released a draft of its encryption plan on GitHub on May 22. The company is accepting public comments on the proposal through June 5. In the meantime, Zoom is urging customers to update their apps by May 30 to get access to a more secure encryption protocol called GCM.
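GCM is an authenticated-encryption mode: it encrypts data and lets the receiver detect any tampering. A minimal sketch of AES-GCM in Python follows, using the third-party `cryptography` package (the payload and names here are illustrative, not Zoom’s actual wire format):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # standard 96-bit GCM nonce; must never repeat per key

# Encrypt a hypothetical payload; the third argument is optional
# associated data that is authenticated but not encrypted.
ciphertext = aesgcm.encrypt(nonce, b"meeting audio frame", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"meeting audio frame"

# Flipping even one bit of the ciphertext makes decryption fail outright,
# rather than silently returning corrupted plaintext.
tampered = bytes([ciphertext[0] ^ 1]) + ciphertext[1:]
try:
    aesgcm.decrypt(nonce, tampered, None)
except Exception:
    print("tampering detected")
```

The built-in integrity check is what distinguishes GCM from the older ECB-based encryption Zoom was criticized for using.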
Zoom has been working to repair its reputation after a series of news reports in March revealed numerous security and privacy flaws in its product. An influx of users following the global outbreak of coronavirus put a spotlight on the company.
Users and security experts criticized Zoom for prioritizing ease of use over security. They also faulted the company for not being transparent enough about its encryption and data-sharing practices.
“The criticism was justified and warranted and needed because otherwise these things don’t get fixed,” said Tatu Ylonen, a founder and board member of SSH Communications Security. “I would applaud them for actually taking action fairly quickly.”
More recently, Zoom celebrated some wins. The company settled with the New York attorney general’s office, warding off a further investigation into its security practices. Zoom also got the New York City public school district to undo a ban on the product that had drawn national headlines in April.
But the company will need to do more to win back the trust of some security-minded buyers.
“I think they’ve responded very quickly,” Pelz-Sharpe said. “But if I were advising a compliant company on a product to buy, it probably wouldn’t be on my list.”
Enterprise data governance isn’t just about managing the data an organization possesses; it’s also key to managing the data supply chain, according to Charles Link, director of data and analytics at Covanta.
Link detailed his views on data management during a technology keynote at the Talend Connect 2020 Virtual Summit on May 27. Executives from other Talend customers, including AutoZone, also spoke at the event.
Covanta, based in Morristown, N.J., is in the waste-to-energy business, operating 41 facilities across North America and Europe. Data is at the core of Covanta’s operations as a way to help make business decisions and improve efficiency, Link said.
“We’re never just pushing data; we’re never just handing off the reports,” Link said. “The outcome is not data; it is always a business result.”
Link said he’s often observed that there can be a disconnect between decision-makers and the data that should be used to help make decisions.
To help connect data with decisions, “you really need both the data use and data management strategy to drive business outcomes,” Link said.
Enterprise data governance strategy defined
Link defined data use strategy as identifying business objectives for data and quantifying goals. The process includes key performance indicators to measure the success of data initiatives.
An enterprise data management strategy, on the other hand, is more tactical, defining the methods, tools and technologies used to access, analyze, manage and share data, he said.
At Covanta, Link said enterprise data governance is essentially about the need to have what he referred to as data supply chain management.
Link defined data supply chain management as data governance that manages where data comes from and helps ensure consistent quality from a reliable supplier.
For that piece, Covanta has partnered with Talend and is using the Talend Data Fabric, a suite of data integration and management tools that includes a data catalog that helps enable data supply chain management. With Talend as the technology base, Link said that his company has deployed a central hub for users within the organization to find and use trusted data.
“There is now a shared understanding across business and IT of what our data means,” Link said. “So now we trust the quality of the data we use to operate our facilities.”
The chaos of data demands driving AutoZone
For auto parts retailer AutoZone, managing the complexity of data and overcoming data challenges is a foundation of the company’s success, said Jason Vogel, IT manager of data management at AutoZone.
AutoZone has 6,400 stores and each store carries nearly 100,000 parts. In the background, AutoZone is moving data across its disparate data hubs and stores, making it available to the company’s business analysts. Data also helps ensure that AutoZone customers can get the parts they need quickly.
“We have 20 different types of databases — not instances, types,” Vogel emphasized. “We have thousands of instances and Talend serves as the glue to connect all these systems together.”
Vogel noted that AutoZone is looking to expand its real-time data processing so that it can do more in less time, getting parts to its customers faster. The company is also looking to expand operations overall.
“The only way to accomplish that is by moving more data, having more insight into how data is used and accomplishing it all faster,” Vogel said.
Many organizations continue to struggle with data
AutoZone isn’t the only organization that is trying to deal with data coming from many different sources. In another keynote at Talend Connect, Stewart Bond, research director of data integration and data intelligence software at IDC, provided some statistics about the current state of data integration challenges.
Bond cited a 2019 IDC survey of enterprise users’ experience with data integration and integrity that found most organizations are integrating up to six different types of data.
Those data types include transaction, file, object, spatial, internet of things and social data. Further adding to the complexity, the same study found that organizations are using up to 10 different data management technologies.
While enterprises are managing a lot of data, Bond said the survey shows that not all the organizations are using the data effectively. Data workers are wasting an average of 15 hours per week on data search, preparation and governance processes, IDC found. To improve efficiency, Bond suggested that organizations better manage and measure how data is used.
“Measurements don’t need to be complex; they can be as simple as measuring how much time people spend on data-oriented activity,” Bond said. “Set a benchmark and see if you can improve over time.”
Improving enterprise data governance with data trust
During her keynote, Talend CEO Christal Bemont emphasized that data quality and trust are keys to making the most efficient use of data.
She noted that it’s important to measure the quality of data, to make sure that organizations are making decisions based on good information. Talend helps its users enable data quality with a trust score for data sources, as part of the Talend Data Fabric.
“When you think about what Talend does, you know, you think of us as an integration company,” Bemont said. “Quite frankly we put equal, and maybe even in some cases more, importance on not only just being able to have a lot of data, but also having complete data.”
By detailing the business challenges of a waste management company, Myles Gilsenan demonstrated the value Oracle Analytics Cloud can give organizations.
Gilsenan, director of Oracle business analytics at Perficient, a consulting firm based in St. Louis that works on digital projects with enterprises, spoke about Oracle Analytics Cloud (OAC) at a breakout session of Oracle Analytics’ annual user conference May 19. The conference, which began on May 12 and has sessions scheduled through August 18, was held online due to the COVID-19 pandemic.
Oracle’s analytics platform had been a patchwork of nearly 20 business intelligence products until June 2019, when the software giant streamlined its BI platform into three products — Oracle Analytics Cloud, Oracle Analytics Server and Oracle Analytics for Applications. Oracle Analytics Cloud is its SaaS offering aimed at business users and featuring natural language generation and other augmented intelligence capabilities to foster ease of use.
It’s a transformation that’s been well received.
“The Oracle Analytics Cloud has enabled Oracle to rapidly play catch-up to some of the incumbents in the analytics space,” said Mike Leone, an analyst at Enterprise Strategy Group. “It provides data-centric organizations with a cloud service anchored in simplicity. While OAC focuses on data visualization and augmented analytics, there’s a lot more under the covers — intelligent automation, recommendations, natural language querying and numerous third-party integrations.”
“Oracle has done a nice job of unifying its strategy and technology across cloud, on-premises and application-integrated deployments with Oracle Analytics Cloud, Oracle Analytics Server and Oracle Analytics for Applications, respectively,” he said. “It’s all one code base.”
In addition, the way the platform is packaged gives users flexibility, said Doug Henschen, an analyst at Constellation Research.
“The packaging gives them a data model, data integration capabilities, dashboards and reports that are prebuilt for Oracle’s cloud ERP and [healthcare management] apps, yet all of these prebuilt features can be extended to customer-specific data and analysis requirements,” Henschen said. “It’s a way to get started quickly but without being limited to prebuilt content.”
While Oracle Analytics Cloud is designed to be accessible to technical and non-technical users alike, it was, ironically, through one organization’s difficulty getting started that Gilsenan demonstrated what he said are its ease of use and ability to deliver value quickly.
Perficient’s client, which he did not name, was a provider of waste management services including waste removal, recycling and renewable energy. One of the company’s main goals when it began using Oracle Analytics Cloud was to join human resources data from Taleo and PeopleSoft, human resources software platforms owned by Oracle.
Specifically, according to Gilsenan, the client wanted greater visibility into such HR metrics as the cost of vacant positions, the time it took to fill vacant positions, quality of hires, employee career progression and talent optimization.
“What they really wanted was to track employees from the recruiting channel all the way through career progression at the company,” he said. “And over time, they wanted to build up a data set to be able to say that people who come through a certain channel turn out to be successful employees, and they would then of course emphasize those channels.”
The company’s data, however, came from disparate systems, including one that was on premises. And when the company started trying to unify its data in Oracle Analytics Cloud, it ran into trouble.
“They had a sense that OAC is an agile, cloud-based environment, and you should be able to get value very quickly,” Gilsenan said. “There were a lot of expectations, and people were expecting to see a dashboard very, very quickly. But there were organizational things that caused issues.”
One of the biggest was that the company’s expert in the subject matter was also working on many other things and didn’t have enough time to devote to the project. Other members of the team working on the project also had competing responsibilities.
As a result, according to Gilsenan, when it started taking longer to complete the project than originally planned, company management concluded that Oracle Analytics Cloud was too complicated.
“When it came to integrating data sources, there was some technical expertise that was needed, but by and large it was the idea that they couldn’t focus,” Gilsenan said. “It was a classic organizational issue.”
Rather than a different analytics platform, what the company really needed was outside help, according to Gilsenan. It brought in Perficient, which within four weeks delivered an HR analytics system in Oracle Analytics Cloud.
Perficient’s first step was to restore the waste management company’s confidence in Oracle Analytics Cloud by showing executives success stories. It then helped the company define success criteria, develop a plan and move into the execution phase.
Perficient helped the waste management company develop a dashboard and six reports that covered critical HR metrics such as the quality of hires and the cost of open positions.
“They became very competent in the platform, and right then and there made plans to roll out Oracle Analytics Cloud to the rest of the company [beyond HR],” Gilsenan said.
Focus on HR
While the waste management company is now using Oracle Analytics Cloud throughout its organization, HR has been a particular focus of the platform. Oracle even unveiled a specialized HR version of Oracle Analytics for Cloud HCM at the start of its virtual user conference, though that’s not the tool Perficient’s client is now using.
“Oracle is looking to deliver a more holistic approach to HR analytics,” Leone said. “They’ve spent a ton of time researching various aspects of HR to deliver a comprehensive launching pad for organizations looking to modernize HR with advanced analytics. It’s about using more data from several entities together to help accurately measure success, failure and the likelihood of each. This is where Oracle is making significant strides in helping to modernize analytical approaches.”
Cisco believes CISOs are overwhelmed by too many security products and vendors, and the company introduced a new platform, ominously code-named Thanos, to help enterprises.
But despite being named after the Marvel Comics megavillain, Cisco’s SecureX platform isn’t necessarily designed to wipe out half of all existing security products within enterprise environments. Instead, Cisco is taking a different approach by opening up the platform, which was unveiled last month, and integrating with third parties.
Gee Rittenhouse, senior vice president and general manager of Cisco’s Security Business Group (SBG), said the aim of SecureX is to tie not only Cisco products together, but other vendor offerings as well. “We’ve been working really hard on taking the security problem and reducing it to its simplest form,” he told SearchSecurity at RSA Conference 2020 last month.
That isn’t to say that all security products are effective; many “are supposed to have a bigger impact than they actually do,” Rittenhouse said. Nevertheless, the SBG strategy for SecureX is to establish partnerships with third parties and invite them to integrate with the platform, he said, rather than Cisco trying to be everything to everyone. In this interview, Rittenhouse discusses the evolution of SecureX, how Cisco’s security strategy has shifted over the last decade and the company’s plan to change the infosec industry.
Editor’s note: This interview was edited for clarity and length.
How did the idea for SecureX come about?
Gee Rittenhouse: We thought initially if we had a solution for every one of the major threat vectors — email, endpoint, firewalls, cloud, etc. — for one vendor, Cisco, then that would be enough. You buy Cisco networking and you buy Cisco security and that transactional model will simplify the industry. And we realized very quickly that didn’t do anything except put a name on a box. Then the second thing we thought was this: What happens if we take all these different things and integrate the back end together so that when I see a threat on email, I can block on my endpoint? We stitch all this together [via the SecureX framework] on behalf of the customer, and not only does the blocking happen automatically but you also get better protection and higher efficacy. We’d tell people we had an integrated architecture. And the customers would look at us and say ‘Really? I don’t feel that. You’ve got a portal over here, and a portal over there’ and so on. And we’d say, ‘Look, we’ve worked for three years integrating this together and we have the highest efficacy.’ And they’d say, ‘Well, everybody has their numbers …’
About a couple of years ago, we said we’ve simplified the buying model and simplified the back end. Let’s try to simplify the user experience. But you have to be very careful with that. The classic approach is to build a platform, and everyone jumps on the platform and if you only have Cisco stuff, life is great. But, of course, there are other platforms and other products. We wanted to be precise about how we do this, so we picked a particular use case around investigations. It’s an important use case. We built this very simple investigation tool [Cisco Threat Response] that you can think about as the Google search of security. Within five seconds, you can find out that you don’t have [a specific threat] in your environment, or yes, you do and here’s how to block it and respond. The tool had the fastest rate of adoption of any of our products in Cisco’s history. It’s massively successful. More than 8,000 customers use it every day as their investigation tool.
Were you expecting that kind of adoption for Cisco Threat Response?
Rittenhouse: No. We were not. There were two things we weren’t expecting. We weren’t expecting the response in terms of usage. We thought there’d be a few customers using it. The other thing that we didn’t expect was that a whole user community came together to, for example, integrate vendor X into the tool and publish the connectors on GitHub. A whole user community has evolved around that platform and extended the capability of it. In both cases, we were quite surprised.
When we saw how that worked, saw the business model, and we understood how people consumed it, we attached it to everything and then said ‘Let’s take the next step’ with analytics and security postures. We asked what a day in the life of a security professional was. They’re flooded with noise and threats and alerts. They have to be able to decipher all of that — can the platform do that automatically on their behalf? That’s what we’re doing with SecureX, and the feedback has been super positive.
What kind of feedback did you get from customers prior to Cisco Threat Response and SecureX? Did they have an idea of what they wanted?
Rittenhouse: There was a lot of feedback from customers who asked us to make the front end of our portfolio simpler. But what does that actually mean? It was very generic feedback. And in fact, we struggled with the ‘single pane of glass’ approach. What typically happens with that approach is you try to do everything through it, and all of the sudden that portal becomes the slowest part of the portfolio. This actually took a lot of time and a lot of conversations with customers on how they actually work. We engaged a lot of them with design thinking, and Cisco Threat Response was the first thing to come out of those discussions, and then SecureX.
And I want to make the distinction between a platform and a single pane of glass or a portal. And we very much think of SecureX as a platform. And when you think about a platform, it’s usually something that other people can build stuff on top of, so the value to the community is other people’s contributions to it, and you get a multiplier effect. There is only a handful of true, successful platform businesses in the world; it’s very hard to attract that community and achieve that scale.
Like other recent studies, Cisco’s CISO Benchmark Report showed that many CISOs feel they have too many security products and are actively trying to reduce the number of vendors they have. Other vendors have talked about this trend and are trying to capitalize on it by becoming a one-stop security shop and pushing out other products. But with SecureX, it sounds like you’re taking a different approach by welcoming third-party vendors to the platform and being more open.
Rittenhouse: We would encourage the industry as a whole to be more open. In fact, the industry is not very open at all. One of the benefits to being open is the ability to integrate. In today’s industry, for example, let’s say you’re a security vendor and your technology says a piece of malware is a threat level 5, and I say it’s a level 2. And you’re integrated into our platform, and you’re freaking out because it’s a level 5. I ask you, ‘Rob, why do you think this? What’s the context around this? Share more.’ And until you have that open interface and integration, I just sit there and say, ‘For some reason, this vendor over here claims it’s big, but we don’t see it.’
So yes, we’re open. And I would anticipate the user experience with Cisco security products integrated together will be very different than what you would get with third parties integrated until they start to share more. And this is one of the issues you see in the SIEM and SOAR markets; they become data repositories for investigations after you get attacked. What actually happened? Let’s go back into the records and figure it out. Because of the data fidelity and the real-time nature [of SecureX] this is something you interact with immediately. It can automatically trace threats and set up workflows and bring in other team members to collaborate because you have that integrated back end.
Cisco has said it’s the biggest security vendor in the world by revenue, but most businesses probably still associate the company with networking. Now that SecureX has been introduced, what’s the strategy moving forward?
Rittenhouse: We’ve spent a lot of time on the messaging. I think more and more people recognize we’re the biggest enterprise security company. In many ways, our mission is to democratize security like [Duo Security’s] Wendy Nather said, so we want to make it invisible. We don’t want to be sending the message that you have to get this other stuff to be secure. We want it to be built into everything we do.
There have been a lot of mergers and acquisitions, especially by companies looking to increase their infosec presence. But Wendy talked during her keynote about simplifying security instead of adding product upon product. It doesn’t sound like you’re feeling the pressure to do that.
Rittenhouse: No. We are not a private equity firm. We buy things for a purpose. And when we buy something, we’ll be happy to tell you why.
Todyl, a New York City company that sells a networking and security platform through MSPs, reported increasing interest in its product as organizations face secure remote access challenges.
“Things have been rapidly evolving over the last two weeks with the COVID-19 response,” Todyl CEO John Nellen said. “We have been really busy trying to help existing partners and new partners.”
The company offers MSPs — and their SMB customers — the ability to consolidate networking and security components into a cloud-based platform. Todyl MSP partners deploy the technology by installing agents on customers’ Windows, Mac, Linux, iOS or Android devices. A VPN tunnel then links customers to Todyl’s Secure Global Network offering, which incorporates web proxy, firewall, content filtering, intrusion detection/prevention (IDP), malware interception and security information and event management (SIEM) technologies.
The Secure Global Network’s points of presence link end customers to multiple network providers. Todyl’s platform connects organizations’ remote workers, data centers, cloud providers, main offices and branch locations, according to the company.
Todyl is currently offering its platform to MSPs for free for 30 days “to help support the immediate need,” Nellen said. Once the offer expires, pricing is device-based with add-on features. Todyl offers pricing for two groups: mobile (Android/iOS) and desktop/laptop/server (Windows, Mac, Linux).
MSP taps Todyl for remote enablement
Infinit Consulting Inc., an MSP based in Campbell, Calif., is selling Todyl as a white-labeled offering. The company has branded Todyl as Infinit Shield Total Defense, which it has paired with its own Infinit Shield security process management platform, according to Jerod Powell, president and founder of Infinit Consulting.
Powell called Todyl “instrumental in helping our customers rapidly enable complete remote workforce capabilities.”
Infinit Consulting had previously enabled nearly all of its customers to use cloud services, but the company is currently tasked with helping them significantly expand remote workforces. The expansion sometimes includes moving customers from having 15% of employees working remotely to nearly 100%.
While assisting with remote workforce expansions, Infinit Consulting has run into issues such as licensing and hardware limitations around customers’ previous remote work applications, Powell said. He pointed to another issue: Properly securing devices to ensure data integrity, company policy adherence and security, while allowing employees to work remotely — often from their personal home PC or Mac.
Powell said Todyl lets Infinit Consulting enable remote access in a matter of a few hours in a full-scale deployment. The Todyl offering also lets the company “secure that remote connection 100%, end to end;” bring clients onto the Secure Global Network; and feed data back to the SIEM. The SIEM feature provides the MSP with “the telemetry needed to identify potential security risks [and] enforce corporate policy just as if [remote employees] were on the client’s LAN.”
He said Todyl also offers IDP and advanced threat protection scanning to flag potentially malicious applications and data before they reach customers.
The demand for supporting customers’ remote workforces is “extremely high,” Powell noted. He cited a case in which Infinit Consulting rolled out Todyl to a customer that needed to enable more than 500 users to work remotely. The customer’s previous remote work product only supported 100 users. Todyl also identified security issues on several remote workers’ home PCs. The MSP was able to resolve those issues before admitting the remote workers’ devices onto the network, he added.
Powell said his company has created deployment packages for Todyl that can implement the product in an automated manner.
Waves of demand for secure remote access
Citing conversations with Todyl MSP partners, Nellen said MSPs anticipate two waves of unfolding demand for remote work technology.
The first wave consists of early adopters trying to quickly set up their organizations for newly distributed workforces. The second wave will comprise SMBs that have yet to determine the best way to support remote workers. Those companies will start making decisions, based on guidance from government agencies, in the coming weeks, Nellen said.
“They are expecting this not to be just a single shot, but something that is taking place and evolving over time,” Nellen said.
VMware hopes a raft of new Kubernetes-based enhancements can position the company as the right choice for customers interested in container migration while they retain investments in vSphere.
The strategy centers on Tanzu, a product portfolio VMware introduced at the VMworld conference in August. A chief component is Tanzu Kubernetes Grid, a distribution of the container orchestration engine that sets up clusters in a consistent way across various public clouds and on-premises infrastructure.
Another product, Tanzu Mission Control, provides management tooling for Kubernetes clusters. VMware has also pushed its acquisition of Bitnami under the Tanzu header. Bitnami, which offers a catalog of pre-packaged software such as the MySQL database for quick deployment across multiple environments, is now called Tanzu Application Catalog.
Finally, VMware has rebranded Pivotal Application Service to Tanzu Application Service and changed its Wavefront monitoring software’s name to Tanzu Observability by Wavefront.
This flurry of product development and marketing around Kubernetes has a critical purpose for VMware.
“Kubernetes has practically stolen virtualization from VMware, so now it needs to upgrade the engine room, while keeping the promenade deck the same and hoping the passengers stay on board and do not jump ship,” said Holger Mueller, an analyst at Constellation Research.
A big part of this plan involves the new vSphere 7, which has been reworked to run both container and virtual machine workloads by embedding Tanzu Kubernetes Grid and other components. This vSphere option is initially available only through VMware Cloud Foundation 4, which is supported on AWS, Azure, Google, Oracle, Rackspace and IBM’s public cloud services, as well as through other VMware partners.
VMware also plans to release a separate, Kubernetes-less edition of vSphere 7 for customers who don’t want that functionality. Tanzu Kubernetes Grid, Application Catalog and Mission Control are available now, while Cloud Foundation 4 and vSphere 7 are slated for release before May 1.
Users gravitate towards containers
VMware’s announcements further confirm the industrywide trend of users moving away from their core virtualization platforms and more seriously exploring container migration. With VMware the longtime industry leader in virtualization, the announcements carry added weight.
“There is a transition happening in compute technology of what is being used to deliver the apps that is moving away from virtualization to containers — not that virtualization isn’t useful for other things,” said Gary Chen, IDC’s research director of software-defined compute. “VMware is trying to make that transition, and they appear to be pretty serious about it.”
VMware’s efforts around Kubernetes stem back a few years. It previously offered Pivotal Container Service as an add-on to its core platform, and acquired a batch of Kubernetes talent and related IP through its purchase of Heptio in 2018. Two of the three original authors of Kubernetes now work at VMware.
“At the end of the day, Kubernetes is still an orchestration tool for automating containers, but what if you are not in a developer group?” said Brian Kirsch, an IT architect and instructor at Milwaukee Area Technical College. “What they are introducing here is for people writing their own software and moving toward containers, but will there be enough support on the back end for those not ready for Kubernetes or containers, or who may never need them? We support 45,000 students here, but we still buy our software and don’t write it.”
Many companies in large vertical markets, such as manufacturing and healthcare, are often slow to move to another DevOps environment once they have settled on a product. Traditionally, many applications in those markets aren’t updated often by the vendors and it can be a monumental task to pursue container migration, even for long-time vSphere users.
“Up until just a few years ago, some of the larger EHR apps were still in VB [Microsoft’s Visual Basic] for the front end,” Kirsch said. “It just takes time.”
While VMware executives tout that Cloud Foundation and vSphere products can work on competitors’ cloud platforms, Kirsch said he thinks the company is overplaying the importance of that capability.
“Writing an app once and having it run wherever you want is good for some, but I don’t know that many people who want to hop around that much,” Kirsch said. “My question is: How many times have you left your cloud provider unless it goes belly up? A lot of work is involved with this and no matter how transparent it is, it’s almost never like just flipping a switch.”
Controlling the control plane
Some analysts see the VMware announcements around container migration as counterpunching the competitive efforts of IBM-Red Hat and others to gain a firm grasp of the management software piece of both the cloud and on-premises applications.
“If Red Hat succeeded in commoditizing the enterprise OS space, making RHEL and Windows Server the two de facto standards, then the next layer to be commoditized is the control plane, which I still believe to be the PaaS layer,” said Geoff Woollacott, senior strategy consultant and principal analyst at Technology Business Review. “Right now, the main rivals for that are VMware with this announcement, Azure and OpenShift.”
The U.S. Air Force is in the midst of evaluating multiple Kubernetes distributions and management tools, including Red Hat OpenShift, Rancher and a beta version of Tanzu Kubernetes Grid. The various IT teams within the military branch can use whichever Kubernetes platform they choose. For the Air Force’s purposes, the latest Red Hat OpenShift versions will beat VMware to the punch in disconnected and Kubernetes edge environments, along with real-time operating system support that the Air Force will use in F-16 fighter jets. The Air Force will also wait until all of VMware’s Tanzu product line becomes generally available before it commits to using it, and carefully watch how VMware brings together its new business units and their products.
“VMware is checking all the boxes, but details matter,” said Nicolas Chaillan, the Air Force’s chief software officer, and co-lead for the Enterprise DevSecOps Initiative in the office of the Department of Defense CIO. “With mergers, there are always people leaving, conflicts, and you never know what’s going to happen.”
However, VMware retains its lead in server virtualization, and the Kubernetes IP and expertise the company has assembled with its Heptio acquisition and Pivotal merger can’t be overlooked, Chaillan added.
“The vSphere piece, and the ability to tie that back to Kubernetes, is very interesting, and that alone could win the market,” he said. “A lot of companies in finance and healthcare still need a virtualization stack on premises, and otherwise would have to use Google Anthos, Azure Stack or Amazon Outposts — or they could go through vSphere, and have a single company that brings [them] the whole thing.”
Redesigning the crown jewels
VSphere 7.0, formerly called Project Pacific, has been significantly redesigned, according to Krishna Prasad, vice president and general manager of VMware’s Cloud Platform Business. A large part of that redesign was to tightly integrate Kubernetes into vSphere. One advantage for corporate users is that when they stand up clusters based on the company’s ESX Server virtualization layer, those become Kubernetes clusters as well, managed alongside the company’s vCenter control plane, Prasad said.
“When we started rearchitecting, it wasn’t driven by the need to accommodate Kubernetes workloads — that was just one of the driving factors,” Prasad said. “We realized it [Kubernetes] was a foundational piece we could bring into vSphere at the platform level that would enhance the platform itself. It would make the platform more modern like Kubernetes itself.”
Another important consideration for the redesign was a direct response to what the company’s core customers were asking for: to be able to deliver their infrastructure to their developers through a cloud consumption model.
“They want and we want to deliver infrastructure completely as code,” Prasad said.
To this end, VMware also unveiled an improved version of NSX-T that now offers full-stack networking and security services that connect and protect both VMs and containers.
“With the enhancements to NSX-T, as you deploy Kubernetes workloads it automates everything right through to the Kubernetes UI,” Prasad said. “This is about writing infrastructure as code and automating the whole deployment instead of bringing in your own components. We think it is a critical part of delivering Kubernetes with full automation.”
Senior News Writer Beth Pariseau contributed to this report.
In Cisco’s sweeping certification changes, the company eliminated prerequisite exams for the Cisco Certified Network Professional tracks. The remaining exams now cover that ground themselves, which means network engineers have a higher bar to meet when they take CCNP exams.
However, this higher bar doesn’t mean engineers must solely know advanced topics and technologies, such as software-defined WAN, automation and programmability — although those are on the exams. Instead, CCNP hopefuls on the Enterprise track — for ENCOR 350-401, in particular — should expect to know a solid amount of past CCNP material, such as IP routing essentials, in addition to new technologies. The same goes for Cisco Certified Internetwork Expert (CCIE) hopefuls.
CCNP and CCIE hopefuls alike can explore old and new material in CCNP and CCIE ENCOR 350-401 Official Cert Guide, available now, by authors Ramiro Garza Rios, David Hucaby, Brad Edgeworth and Jason Gooley. This guidebook delves into topics that span from forwarding to wireless to software-defined networking best practices.
Below is the “Do I Know This Already?” quiz from Chapter 6, “IP Routing Essentials.” These 10 questions explore common routing protocols network engineers will likely recognize from their daily jobs and others that are also relevant to their positions. Edgeworth said the chapter covers fundamentals and helps readers understand how routers function and think.
The quiz offers readers a vendor-agnostic studying method, as routing protocols aren’t specific to Cisco or any other vendor. These universal fundamentals can help readers in their careers wherever they go and with whichever vendor products they may use.
These questions for the CCNP and CCIE ENCOR 350-401 exam help readers review the enterprise networking essentials they need to know and test their expertise on key protocol differences and common routing concepts. The quiz covers a general overview of the protocols and dives deep into path selection, static routing, and virtual routing and forwarding.
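Path selection, one of the quiz topics, comes down to ordered tie-breakers that routers apply regardless of vendor: the longest prefix match wins first, then the route with the lowest administrative distance, then the lowest metric. As a rough, vendor-agnostic sketch of that logic (the route entries, AD values and helper function below are illustrative, not taken from the cert guide):

```python
import ipaddress

def best_route(destination, routing_table):
    """Return the winning route entry for a destination IP, or None.

    Mimics how a router selects a path: longest prefix match first,
    with ties broken by lower administrative distance, then lower metric.
    """
    dest = ipaddress.ip_address(destination)
    # Only routes whose prefix actually contains the destination compete.
    candidates = [r for r in routing_table
                  if dest in ipaddress.ip_network(r["prefix"])]
    if not candidates:
        return None
    return min(candidates,
               key=lambda r: (-ipaddress.ip_network(r["prefix"]).prefixlen,
                              r["ad"], r["metric"]))

# Hypothetical routing table: a RIP route, an OSPF route and a static route.
table = [
    {"prefix": "10.0.0.0/8",  "ad": 120, "metric": 2,  "next_hop": "192.0.2.1"},  # RIP
    {"prefix": "10.1.0.0/16", "ad": 110, "metric": 20, "next_hop": "192.0.2.2"},  # OSPF
    {"prefix": "10.1.1.0/24", "ad": 1,   "metric": 0,  "next_hop": "192.0.2.3"},  # static
]

# All three prefixes contain 10.1.1.5, but the /24 static route is the
# longest match, so it wins before AD is even consulted.
print(best_route("10.1.1.5", table)["next_hop"])  # prints 192.0.2.3
```

Note that administrative distance only breaks ties between equally specific prefixes; a destination such as 10.1.2.3 falls back to the /16 OSPF route because the /24 no longer matches.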