SAN FRANCISCO — VMware is working on a version of NSX for public clouds that departs from the way the technology manages software-based networks in private data centers.
In an interview this week with a small group of reporters, Andrew Lambeth, an engineering fellow in VMware’s network and security business unit, said the computing architectures in public clouds require a new form of NSX networking.
“In general, it’s much more important in those environments to be much more in tune with what’s happening with the application,” he said. “It’s not interesting to try to configure [software] at as low a level as we had done in the data center.”
Four or five layers up the software stack, cloud provider frameworks typically have hooks to the way applications communicate with each other, Lambeth told reporters at VMware’s RADIO research and development conference. “That’s sort of the level where you’d look to integrate in the future.”
Todd Pugh, IT director at Sugar Creek Packing Co., based in Washington Court House, Ohio, said it’s possible for NSX to use Layer 7 — the application layer — to manage communications between cloud applications.
“If we burst something to the cloud on something besides AWS, the applications are going to have to know how to talk to one another, as opposed to just being extensions of the network,” Pugh said.
Today, VMware is focusing its cloud strategy on the company’s partnership with cloud provider AWS. The access VMware has to Amazon’s infrastructure makes it possible for NSX to operate the same on the cloud platform as it does in a private data center. Companies use NSX to deliver network services and security to applications running on VMware’s virtualization software.
Pugh said he does not expect an application-centric version of NSX to be as user-friendly as NSX on AWS. He would therefore prefer to see VMware strike a similar partnership with Microsoft Azure, which would give him the option of running the current version of NSX on either of the two largest cloud providers.
“I can shop at that point and still make it appear as if it’s my network and not have to change my applications to accommodate moving them to a different cloud,” Pugh said.
Nevertheless, having a version of NSX for any cloud provider would be useful to many companies, said Shamus McGillicuddy, an analyst at Enterprise Management Associates, based in Boulder, Colo.
“If VMware can open up the platform a bit to allow their customers to have a uniform network management model across any IaaS environment, that will simplify engineering and operations tremendously for companies that are embracing multi-cloud and hybrid cloud,” McGillicuddy said.
VMware customers can expect the vendor to roll out the new version of NSX over the next year or so, Lambeth said. He declined to give further details.
Rethinking NSX networking
VMware will have to prepare NSX networking not just for multiple cloud environments, but also for the internet of things, which introduces other challenges to network management and security.
“More lately, I’ve been sort of taking a step back and figuring out what’s next,” Lambeth said. “I feel like the platform for NSX is kind of in a similar situation to where ESX and vSphere were in 2006 and 2007. Major pieces were kind of there, but there was a lot of buildout left.”
VSphere is the brand name for VMware’s suite of server virtualization products. ESX was the former name of VMware’s hypervisor.
The immediate focus across the industry is on the growing number of companies moving workloads to public clouds. Synergy Research Group estimated cloud-based infrastructure providers saw their revenue rise by an average of 51% in the first quarter to $15 billion. The full-year growth rate was 44% in 2017 and 50% in 2016.
VMware made significant changes in NSX-T 2.0, released in November, adding native support for microsegmentation and containers, and will follow up shortly with NSX-T 2.1, expected by February 2018.
VMware brings many different but extremely useful tools to the table in these NSX-T releases. From a large infrastructure point of view, they provide more features and flexibility to design the network the way administrators and security teams want them to be designed.
There are two versions of NSX: NSX for vSphere and NSX-T. NSX for vSphere is more widely deployed; NSX-T focuses on multi-hypervisor and cloud-native environments. In this article, we'll look at what's new in NSX-T 2.0 and 2.1.
NSX-T updates ease firewall management
The major news in VMware NSX-T's latest versions is support for microsegmentation. Microsegmentation is a big deal because it provides a new security paradigm for the cloud and for large-scale environments.
Historically, firewalls were mostly a north-south proposition — i.e., inbound and outbound traffic from the rest of the internet/network. With NSX, firewall rules can be applied to individual VMs, groups of VMs and many other scenarios that were once difficult or not possible.
These rules also follow the VM as it moves around. In short, microsegmentation helps harden the interior infrastructure that is usually the soft spot for attackers: traditional perimeter security amounts to a hard exterior shell around a soft interior. It changes the game, and few vendors offer anything comparable.
Because the rules are applied automatically, you can easily implement firewall policies and manage them all centrally. Alongside this new firewall is distributed network encryption between VMs and containers, provided all the endpoints are within the same virtual domain. Again, this functionality helps stop network eavesdropping by undesirables.
Complexity is without a doubt the overriding issue with manual approaches. Manually managing firewall rules for VMs in a massive environment becomes complex, if not unfeasible. With NSX, east-west traffic and its associated policy management can be handled far more easily. It used to be quite complex to implement firewalls at the VM level, but not anymore.
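The shift described above, from perimeter rules to per-VM, group-based rules with a default deny, can be sketched in a few lines. The group names and rule shape here are hypothetical illustrations of the concept, not the NSX data model:

```python
# Illustrative sketch of VM-level (east-west) firewall policy evaluation.
# All names are hypothetical; this shows the concept, not the NSX API.

def build_policy():
    """Rules reference logical groups, not IP addresses, so a rule
    follows a VM wherever it moves."""
    return [
        {"src": "web", "dst": "app", "port": 8443, "action": "allow"},
        {"src": "app", "dst": "db",  "port": 5432, "action": "allow"},
        # Anything not explicitly allowed is dropped, including traffic
        # between two VMs on the same subnet.
    ]

def evaluate(policy, src_group, dst_group, port):
    """Return the action for a flow between two VM groups."""
    for rule in policy:
        if (rule["src"], rule["dst"], rule["port"]) == (src_group, dst_group, port):
            return rule["action"]
    return "deny"  # default deny is what hardens east-west traffic
```

With this model, a compromised web VM cannot reach the database tier directly: `evaluate(policy, "web", "db", 5432)` falls through every rule and returns `"deny"`.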
VMware adds container support
Other big news in VMware NSX-T 2.x is native support for containers. This was a critical addition, given how thoroughly Docker-based infrastructure has come to dominate containerization.
Along with VMware doubling down on BOSH/Pivotal as an orchestration platform, version 2.1 supports both Pivotal Cloud Foundry and Pivotal Container Service.
Extend on premises to the cloud
These developments feed into NSX Cloud, one of the VMware Cloud Services the company rolled out at VMworld in August 2017. NSX Cloud provides consistent networking and security for applications running in multiple private and public clouds via a single management console and common API, a service no one else offers. It extends the NSX domain beyond the local network into major cloud providers; in other words, it expands on premises into the cloud. AWS is already supported, Azure support is on the roadmap, and the service brings such functionality as discovery.
Added content packs ease troubleshooting
Alongside this is the inclusion of Log Insight. Log Insight, as the name suggests, collects and logs key information from the NSX environment. "Great," you might say. "So what?" Content packs are the answer. Content packs are add-ins for Log Insight that help drill down into and troubleshoot problems within the NSX environment. Don't forget that we are talking about your network here; it may be virtual, but it's still critical.
New VMware NSX-T load balancing feature
Finally, one major addition in 2.1 is NSX load balancing. It's clear that, over time, many other features will be added to help NSX reach or exceed feature parity with other software load balancers.
What makes it even better is that VMware is very much pushing an API-first environment. Infrastructure as code is where it's at. The 2.0/2.1 API has been heavily reworked, making features easier to consume and access.
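An API-first workflow means a firewall rule is an object you build and push programmatically rather than a form you fill in. A hedged sketch of what that looks like; the field names and endpoint below are illustrative assumptions, not the documented NSX-T 2.1 schema:

```python
import json

# Hedged sketch of infrastructure as code: a firewall rule expressed as a
# payload that a script could send to a management API. Field names and the
# endpoint in the comment are assumptions for illustration only.

def make_rule(display_name, src, dst, service, action="ALLOW"):
    """Build a rule payload referencing logical source/destination groups."""
    return {
        "display_name": display_name,
        "sources": [src],
        "destinations": [dst],
        "services": [service],
        "action": action,
    }

rule = make_rule("web-to-app", "grp-web", "grp-app", "TCP-8443")

# In a real workflow this JSON would be POSTed to the manager, e.g.:
#   requests.post("https://nsx-mgr/api/v1/firewall/...", json=rule, auth=...)
print(json.dumps(rule, indent=2))
```

Because the rule is plain data, it can live in version control and be reviewed, diffed and replayed like any other code, which is the point of an API-first platform.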
VMware has updated its version of NSX for non-vSphere environments, adding integration with the Pivotal Container Service and the latest iteration of Pivotal Cloud Foundry to the network virtualization software.
VMware introduced NSX-T 2.1 on Tuesday. Through NSX-T, Pivotal Container Service, or PKS, brings support for Kubernetes container clusters to vSphere, VMware’s virtualization platform for the data center. PCF is an open source cloud platform as a service (PaaS) that developers use to build, deploy, run and scale applications.
VMware developed the Cloud Foundry service that is the basis for PCF. Pivotal Software, whose parent company is Dell Technologies, now owns the PaaS, which Pivotal licenses under Apache 2.0.
VMware NSX-T was introduced early this year to provide networking and security management for non-vSphere application frameworks, OpenStack environments, and multiple KVM distributions.
Support for KVM underscores VMware's recognition that the virtualization layer in Linux is a force in cloud environments. As a result, the vendor has to provide integration beyond vSphere if VMware is to extend its technology outside the data center.
Kubernetes cluster support in VMware NSX-T
VMware NSX-T integration with PKS is significant because of the extensive use of Kubernetes in public, private and hybrid cloud environments. Kubernetes, which Google developed, is used to automate the deployment, scaling, maintenance, and operation of multiple Linux-based containers across clusters of nodes. Google, VMware and Pivotal developed PKS.
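Kubernetes automates deployment and scaling declaratively: you describe the desired state, such as which image to run and how many replicas to keep alive, and the cluster's controllers converge on it. A minimal Deployment manifest, built here as a Python dict for illustration (the app and image names are placeholders):

```python
# Minimal Kubernetes Deployment manifest, expressed as a Python dict for
# illustration. The name and image are placeholders.

def deployment(name, image, replicas):
    """Describe the desired state; Kubernetes controllers converge on it."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,  # Kubernetes keeps this many pods running
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment("demo-app", "nginx:1.15", 3)
```

If a node dies and a pod disappears, the cluster notices the actual state no longer matches `replicas: 3` and schedules a replacement, which is the maintenance and operation automation described above.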
VMware has said it plans to add Docker support in NSX-T. Docker is another popular open source software platform for application containers.
VMware NSX-T is a piece of the vendor’s strategy for spreading its technology across the branch, WAN, cloud computing environments, and security and networking in the data center. Essential to its networking plans is the acquisition of SD-WAN vendor VeloCloud, which VMware plans to complete by early next year.
VMware expects to use VeloCloud to take NSX into the branch and the WAN. “What VeloCloud offers is really NSX everywhere,” VMware CEO Pat Gelsinger told analysts last week, according to a transcript published by the financial site Seeking Alpha.
Gelsinger held the conference call after the company released earnings for the fiscal third quarter ended Nov. 3. VMware reported revenue of $1.98 billion, an increase of 11% over the same period last year. Net income grew to $443 million from $319 million a year ago.
The two biggest public cloud providers have set their sights on VMware workloads, though they’re taking different approaches to accommodate the hypervisor heavyweight and its customers.
A little over a year after Amazon Web Services (AWS) and VMware pledged to build a joint offering to bridge customers’ public and private environments, Microsoft this week introduced a similar service for its Azure public cloud. There’s one important distinction, however: VMware is out of the equation, a hostile move met with equal hostility from VMware, which said it would not support the service.
Azure Migrate offers multiple ways to get on-premises VMware workloads to Microsoft’s public cloud. Customers now can move VMware-based applications to Azure with a free tool to assess their environments, map out dependencies and migrate using Azure Site Recovery. Once there, customers can optimize workloads for Azure via cost management tools Microsoft acquired from Cloudyn.
This approach eschews the VMware virtualization and adapts these applications into a more cloud-friendly architecture that can use a range of other Azure services. A multitude of third-party vendors offer similar capabilities. It’s the other part of the Azure migration service that has drawn the ire of VMware.
VMware virtualization on Azure is a bare-metal subset of Azure Migrate that can run a full VMware stack on Azure hardware. It’s expected to be generally available sometime next year. This offering is a partnership with unnamed VMware-certified partners and VMware-certified hardware, but it notably cuts VMware out of the process, and out of the revenue stream.
In response, VMware criticized Microsoft's characterization of the Azure migration service as part of a transition to the public cloud. In a blog post, Ajay Patel, VMware senior vice president, cited the lack of joint engineering between VMware and Microsoft and said the company won't recommend or support the product.
This isn’t the first time these two companies have butted heads. Microsoft launched Hyper-V almost a decade ago with similar aggressive tactics to pull companies off VMware’s hypervisor, said Steve Herrod, who was CTO at VMware at the time. Herrod is currently managing director at venture capital firm General Catalyst.
Part of the motivation here could be Microsoft posturing, either to negotiate a future deal with VMware or to ensure it doesn't lose out on these types of migrations, Herrod said. And of course, if VMware had its way, its software stack would be on all the major clouds, he added.
VMware on AWS, which became generally available in late August, is operated by VMware, and through the company’s Cloud Foundation program ports its software-defined data centers to CenturyLink, Fujitsu, IBM Cloud, NTT Communications, OVH and Rackspace. The two glaring holes in that swath of partnerships are Azure and Google Cloud, widely considered to be the second and third most popular public clouds behind AWS.
Companies have a mix of applications: some are well-suited to a transition to the cloud, while others must stay inside a private data center or can't be rearchitected for the cloud. Hence, a hybrid cloud strategy has become an attractive option, and VMware's recent partnerships have made companies feel more comfortable with the public cloud while curbing the need to manage their own data centers.
“I talk to a lot of CIOs and they love the fact that they can buy VMware and now feel VMware has given them the all-clear to being in the cloud,” Herrod said. “It’s purely the promise that they’re not locked into running VMware in their own data center that has caused them to double down on VMware.”
VMware virtualization on Azure is also an acknowledgement that some applications are not good candidates for the cloud-native approach, said Jeff Kato, an analyst at Taneja Group in Hopkinton, Mass.
“The fact that they have to offer VMware bare metal to accelerate things tells you there are workloads people are reluctant to move to the public cloud, whether that’s on Hyper-V or even AWS,” he said.
Some customers will prefer VMware on AWS, but it won’t be a thundering majority, said Carl Brooks, an analyst at 451 Research. There’s also no downside for Microsoft to support what customers already do, and the technical aspect of this move is relatively trivial, he added.
“It’s a buyer’s market, and none of the major vendors are going to benefit from trying to narrow user options — quite the opposite,” Brooks said.
Perhaps it’s no coincidence that Microsoft debuted the Azure migration service in the days leading up to AWS’ major user conference, re:Invent, where there is expected to be more talk about the partnership between Amazon and VMware. It’s also notable that AWS is only a public cloud provider, so it doesn’t have the same level of competitive friction as there has been historically between Microsoft and VMware, Kato said.
“Microsoft [is] trying to ride this Azure momentum to take more than their fair share of [the on-premises space], and in order to do that, they’re going to have to come up with a counter attack to VMware on AWS,” he said.
Despite VMware’s lack of support for the Azure migration service, it’s unlikely it can do anything to stop it, especially if it’s on certified hardware, Kato said. Perhaps VMware could somehow interfere with how well the VMware stack integrates with native Azure services, but big enterprises could prevent that, at least for their own environments.
“If the customer is big enough, they’ll force them to work together,” Kato said.
Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at firstname.lastname@example.org.
VMware plans to acquire SD-WAN vendor VeloCloud Networks, a move that would turn the branch office into a battleground for the virtualization provider and Cisco.
The VeloCloud-VMware acquisition, announced this week, would be carried out in early February. With VeloCloud, VMware would go head-to-head against Cisco’s Viptela, IWAN and Meraki brands. SD-WAN, in general, intelligently routes branch traffic across multiple links, such as broadband, MPLS and LTE.
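The "intelligent routing" at the heart of SD-WAN can be sketched as a scoring function over measured link quality: steer each flow onto whichever link currently scores best. The metrics and weights below are invented for illustration, not VeloCloud's actual algorithm:

```python
# Toy SD-WAN path selection: score each available branch link by measured
# quality and steer traffic to the best one. Metrics and weights are
# invented for illustration.

def pick_link(links):
    """links: dict of name -> {"latency_ms": float, "loss_pct": float}."""
    def score(metrics):
        # Lower latency and loss are better; loss is weighted heavily
        # because it hurts real-time traffic (voice, video) the most.
        return metrics["latency_ms"] + 100 * metrics["loss_pct"]
    return min(links, key=lambda name: score(links[name]))

links = {
    "mpls":      {"latency_ms": 30, "loss_pct": 0.0},
    "broadband": {"latency_ms": 20, "loss_pct": 0.5},
    "lte":       {"latency_ms": 60, "loss_pct": 1.0},
}
# mpls scores 30, broadband 70, lte 160, so mpls wins despite higher latency
```

Real SD-WAN controllers remeasure continuously and can fail a flow over mid-session; the key idea is the same, as traffic choices follow live link conditions rather than static routes.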
“This is the first time that Cisco and VMware will directly compete in the networking world,” said Shamus McGillicuddy, an analyst at Enterprise Management Associates, based in Boulder, Colo.
Before, the closest Cisco and VMware came to competing in networking was with their software-defined networking platforms ACI and NSX, respectively. The products, however, serve mostly different purposes in the data center. NSX provisions network services within VMware’s virtualized computing environments while ACI distributes application-centric policies to Cisco switches.
VMware SDN marches to the branch
The VeloCloud-VMware acquisition, however, marks the start of taking NSX to the branch, where Cisco is already offering ACI. Both vendors are also working on extending their respective SDN platforms to enterprise software running on public clouds.
In the branch, VMware plans to provide SD-WAN, security, routing and other services on an NSX-based network overlay that’s hardware agnostic. Rather than supply branch appliances for VeloCloud software, VMware wants customers to buy certified hardware from different vendors.
“That is certainly our longer-term vision for this. That it will be a pure software play,” said Rajiv Ramaswami, COO of cloud services at VMware, during a conference call with reporters and analysts.
In the short term, VMware would support appliances sold by VeloCloud, Ramaswami said. VMware’s parent company, Dell EMC, also sells hardware for VeloCloud software.
While VMware shies away from hardware, Cisco has delivered centralized software that provisions network services to the branch through a new line of routers, called the Catalyst 9000s. In the future, Cisco could also provide a software-only option through the Enterprise Network Functions Virtualization platform (ENFV) the company introduced last year. ENFV would run on Cisco servers or third-party certified hardware.
“Cisco is making multiple bets in SD-WAN,” McGillicuddy said.
Cloud orchestration a key piece of VeloCloud-VMware acquisition
VMware is banking on VeloCloud’s cloud-based network orchestration tools to evolve into a significant differentiator from Cisco and other WAN infrastructure providers. VMware could eventually use the technology to orchestrate network services in the branch and the cloud, Ramaswami said.
VMware’s ambitions do not alter the fact that it has a difficult road ahead battling Cisco, which dominates the networking market with more than 150,000 paying customers for its WAN products, according to Gartner. VMware is the largest supplier of data center virtualization, but it is a newcomer in networking.
VeloCloud’s roughly 1,000 customers include service providers, as well as enterprises. AT&T, Deutsche Telekom, Sprint, Vonage and Windstream are examples of carriers that offer the company’s SD-WAN product as a service.
VMware sells network virtualization software to service providers and expects VeloCloud to help grow that relatively small business. “VeloCloud and their deep relationship with the service provider community is a huge route-to-market accelerator,” said Peder Ulander, a vice president of strategy at VMware.
VMware did not release financial details of the acquisition.
For much of its 19-year history, VMware has been stubbornly proprietary in its approach to technology. But over the past year, the company has softened its hard-core stance and become more receptive to embracing open source.
Behind this reformation is Dirk Hohndel, VMware’s vice president and chief open source officer. Since his arrival a little over a year ago, Hohndel has convinced the company to pour more financial and human resources into internal and external open source projects, and work more closely with industry organizations such as the Linux Foundation, of which he is a board member.
Hohndel sat down with senior executive editor Ed Scannell to discuss the evolved VMware open source approach, its relationship with corporate and third-party open source developers, and the future of containers and OpenStack.
Take me through the thinking behind the evolution of VMware open source container products since Project Photon’s introduction two years ago.
Dirk Hohndel: After Photon we released the vSphere Integrated Containers solutions, which was a way to deploy containers as workloads in a vSphere environment. But it didn’t deal with the question of orchestration or provide a larger framework on top of it.
One of the key questions we asked ourselves: In the broader spectrum of customers changing the way they deploy and develop applications in-house, what is the best way to provide them with an environment in which IT basically gets out of the way? A way in which developers get the environment they need to create container-based applications?
After some exploration, we concluded that joining forces with Pivotal was the best solution in terms of helping customers get what they were asking for. So we created the Pivotal Container Service. Running Kubernetes on top of Bosh, on top of vSphere, gives them the APIs a developer is looking for.
What do you mean by ‘getting IT out of the way?’
Hohndel: Developers looking at container-based applications often say: ‘I just want to create applications and not have to worry about infrastructure.’ What customers are looking for and what the app developers are looking for is to have an infrastructure that to them looks the same whether they deploy it on their laptops, in their testing environments, to a public cloud or a private cloud. They don’t want to worry about how this is implemented on the back end.
We are trying to provide an environment where developers get to use the APIs in the environment they are comfortable with, and get the IT people to provide it in a way that fits into their architecture that is scalable and secure and deals with the complexities of networking.
What does Pivotal’s technology give to your users and developers they didn’t have before?
Hohndel: Pivotal’s involvement is very much around this integration through Bosh and Kubo and into IaaS. If you look at Cloud Foundry, there is a very opinionated, very tightly managed vision of a cloud-native architecture. [Pivotal Container Service] is a more flexible, broader environment that doesn’t give you the carefully selected environment with services. But it gives you the same solid underpinning as provided by Bosh and vSphere and the flexibility of Kubernetes on top.
That is fundamentally why Google is involved in this project. To them Kubernetes is the interface they promote as the way to orchestrate containers both on premises and through the Google Cloud Engine, and then into the public cloud.
How much open source code is contributed by outside developers to projects like Project Photon versus VMware’s internal development?
Hohndel: It depends on the project. Open vSwitch, for example, is driven by many different contributors besides VMware. We actually moved that project to the Linux Foundation and it’s no longer a VMware-hosted project.
We have other projects where the number of outside contributions is smaller because of the specificity of the project, or its complexity. We have a fun little tool created in-house for our own development purposes called Chap. It analyzes un-instrumented core files, either live cores or cores from crashes, for leaks and memory usage and corruption. Contributions from the outside have been limited because it’s a narrow, intensely complicated developer tool.
How many VMware programmers contribute code to the community? Do you plan to increase that number of programmers?
Hohndel: Good question, because it raises the question of how should a software company actually interact with the open source community. We use a ton of open source projects for every product we do in some way, shape or form. When we fix bugs we contribute those back to the community. But we must balance that with the creation of our own open source products. Many of them are internal tools like Clarity or Chap, or they are related to products like VIC [vSphere Integrated Containers].
We have a couple of teams in each business unit focused on these tools and components. I’m sure that adds up to a few dozen, maybe 100 engineers who are working on open source projects. Then, there are the other teams working specifically on upstream projects because they are part of our products: think OpenStack, think Kubernetes and the next [Linux] kernel or other broad projects being used like GTK+ in the UI space. This is an area where I am actively hiring people for our cloud-native business unit.
What is the level of acceptance for OpenStack among your users? Some weeks it feels like it’s dying, others it seems to have a future.
Hohndel: I keep trying to grasp why people think it is dying. There are certain market segments where OpenStack is thriving and doing well. Certainly telcos seem to have coalesced around the OpenStack APIs.
Hohndel: A lot of companies that explore enterprise production environments based on OpenStack discover there is a clear tradeoff between capex and opex. You can get OpenStack as an open source project for free, or you can put capital expenditure into one of the available OpenStack distributions. Either way, it comes at a significant operational cost. It is a very complex piece of software — it is actually many pieces of interdependent software — so setting it up and getting it to run on day one, and especially on day two, is very complex.
What is the long-range trajectory of OpenStack?
Hohndel: I don’t think anyone has a clear answer yet. In the telco sector, it is absolutely what people are looking for. In the enterprise we see some users that are happy and some that are disillusioned. I think the jury is out. We’re at the point on the Gartner Hype Curve with OpenStack that Gartner calls the falling edge toward the trough of disillusionment [laughs]. So there is the peak of irrational expectations followed by the trough of disillusionment. And then at the end of the cycle you have the plateau of productivity.
Are there any open source synergies as you go forward with the VMware-AWS deal? AWS is an open source shop internally, but they sell a lot of proprietary software.
Hohndel: The technology they use for their services is certainly open source like their database service or their search service, but the underlying technical infrastructure we interact with, that is fairly proprietary technology. To me in the vSphere on AWS environment, I don’t think open source really has been a key player. It becomes much more interesting if you look at the services that have been provided on top of that. There, we are certainly collaborating with them on some of the same projects.
Ed Scannell is a senior executive editor with TechTarget. Contact him at email@example.com.
CenturyLink and Rackspace will target VMware customers with recently released managed private cloud services based on VMware Cloud Foundation.
Companies are gradually becoming more comfortable with the public cloud, but many remain uneasy about keeping their data isolated from other tenants, said Amy DeCarlo, principal analyst at GlobalData in London. A VMware Cloud Foundation managed private cloud may be the logical next step for existing VMware customers to reduce the demands of day-to-day data center management.
“They still want some barriers and separations. And this gives them a sense that it is walled-off and separated, [and] that it is running in a dedicated environment,” she said.
However, it might not be the best bet for companies that want to reduce reliance on VMware, or for companies that don’t have VMware in their long-term cloud strategy — especially those that want to go all-in with the public cloud.
“They could wait to see if there are developments in another platform that makes it more appealing,” DeCarlo said.
For Rackspace, a co-creator of OpenStack, the open source platform behind a competing class of managed private cloud services, the new service reflects a strategy to support customers on multiple platforms. For many years, the company has used VMware and offered managed support for Amazon Web Services.
OpenStack is still important to Rackspace, but the company recognizes this is an era where it must support a multicloud environment, said Peter Fitzgibbon, vice president and general manager of the company’s VMware practice.
Like many VMware customers, ShoreTel Inc. runs applications on premises, but wants to reduce use of its own data centers. The company uses VMware tools, and its staff has deep knowledge of the VMware platform, said Zachary Webb, a platform architect at the telecommunications provider, based in Sunnyvale, Calif. A VMware managed private cloud is a first step for the carrier to move from its data center to the public cloud.
About two years ago, Webb started to push ShoreTel to move its IT infrastructure from a leased colocation data center managed with VMware software to a managed private cloud. Today, the split is 60% to 40% between Rackspace’s managed private cloud and ShoreTel’s own data centers. The carrier’s international locations live in a Rackspace data center. ShoreTel will continue to migrate workloads to the VMware managed private cloud at Rackspace, as hardware comes up for a refresh.
CenturyLink’s new VMware managed private cloud, CenturyLink Dedicated Cloud Compute Foundation, rearchitects its flagship private cloud onto Hewlett Packard Enterprise (HPE) hardware. It is cheaper and 50% faster to provision than its predecessor, which required multiple integration points across network, compute and virtualization from five vendors. That’s typical with many private clouds that require users to coordinate technologies, either within OpenStack or earlier versions of VMware, said David Shacochis, vice president of product management at CenturyLink.
VMware Cloud Foundation serves up an integrated stack with vSphere, NSX and vSAN, which means fewer moving pieces, improved self-service features and security control, Shacochis said. The VMware private cloud targets modernized versions of line-of-business applications, enterprise resource planning and back-office system applications that could go to the public cloud, but for which companies want more control.
“They would look at a flexible private cloud with an easy user experience as a nice tradeoff to public cloud,” Shacochis said.
Both these managed private clouds based on VMware Cloud Foundation are available from Rackspace and CenturyLink data centers, with plans to extend them to customers’ data centers or colocation data centers.
LAS VEGAS — As VMware intensifies its focus on security and enabling support for multiple clouds, VMware’s NSX networking software continues to grow in importance, and now underpins many of the company’s upcoming initiatives.
At VMworld here this week, Pat Gelsinger, VMware CEO, said NSX will serve a number of key functions to tie together multiple clouds from VMware, Amazon Web Services (AWS) and its network of business partners, as well as segment out selected capabilities of monolithic applications to share across multiple clouds.
“With micro-segmentation, we can extend out NSX to serve as the connective tissue to IoT devices,” Gelsinger told TechTarget after his keynote. “There are a ton of IoT devices out there responsible for collecting, storing and sharing critical information that are unprotected and this is a way to provide them with more security.
“It will be the secret sauce behind all of what we do — it is that important,” he said.
Tom Hull, CTO with the Moffitt Cancer Center, agrees with that assessment about the strategic importance of NSX networking. Moffitt is updating its disaster recovery (DR) architecture built around proprietary server hardware and software, and has gravitated toward a software-defined networking (SDN) approach, where NSX will play a key role.
Moffitt’s largest use of NSX will be to gain more freedom for the center’s research domain without affecting the security of its clinical information. “Instead of a monolithic DR in a remote location that gets spun up once a year and tested for audit purposes, we can bring it into our active environment so DR looks more like business continuity,” Hull said.
VMware underscored its commitment to NSX by expanding it to support networking and security for both clouds and cloud-native applications. The new support is intended to help NSX administrators manage and troubleshoot larger-scale NSX deployments.
The company also introduced VMware NSX Cloud, a service designed to offer more consistent networking and security for applications running in multiple private and public clouds, through a single management console and common API. The new offering is supposed to simplify and help scale operations, improve standardization and compliance and lower OPEX for applications that run in public clouds. A micro-segmentation security policy can be defined just once and applied to workloads that run across multiple clouds, according to company officials.
The State of Louisiana’s Division of Administration/OTS plans to use the VMware Cloud on AWS, released this week, to better leverage NSX to extend to the public cloud across a common operating environment, said Michael Allison, the division’s CTO. This will give his organization “public cloud agility and economics” with a more proven virtualized infrastructure, he said.
One systems engineer with a large telecommunications company in Hayward, Calif., is evaluating NSX, but he said he is concerned about the costs to replace his older proprietary server hardware, as well as older remote devices that would be connected to NSX.
“[NSX] is good technology, but to take full advantage of it, I’d have to replace many of the local servers and upgrade the quality of my network and the remote devices we use to collect data and monitor traffic,” he said.
LAS VEGAS — The much anticipated VMware Cloud on AWS is finally available. For potential users, now comes the hard part.
The service that brings the leading private cloud provider’s environments to the leading public cloud provider’s platform has generated a lot of buzz, but the lack of details has kept many potential users on the fence. Important information about pricing and capabilities was disclosed Monday here at VMworld, and now VMware customers must decide if it’s worth it to make the leap.
“I don’t think the customer interest is fully baked,” said David Lucky, director of product management for Datapipe, a managed cloud services provider in Jersey City, N.J., that works closely with AWS and VMware. “But it’s getting a lot of attention from our customers.”
Part of the allure of this deal is the ability to put VMware environments next to AWS services such as DynamoDB and RDS. There are fast, private networks connecting the two services, but there are still functionality limitations.
“It’s separate really,” Lucky said. “It’s got its own portal; its own billing and pricing. You do link your AWS account into it and connect it, but I could see there’s a lot more opportunity to build on that.”
Pricing for the VMware-sold product is complex, and deviates in some important ways from the standard AWS model. Purchases are made on a per-host basis, and can be billed by the hour, or in reserved capacity on one- and three-year contracts.
The three-year contract costs $109,366 per host, which would save about 50% compared to the on-demand hourly billing rate, according to VMware. Another program can cut customers’ costs by up to 25% based on their on-premises VMware product licenses, as long as those on-premises products remain active.
There are separate charges for IP and data transfers, as the standard AWS egress fees still apply. Each host has 2 CPUs, 36 cores, 72 hyper-threads, 512 GiB RAM and local flash storage.
If a company goes with the three-year contract, the estimated total cost of ownership for VMware Cloud on AWS is up to $0.09 per VM per hour, according to VMware. That’s comparable to native cloud instance costs and up to $0.08 per VM per hour cheaper than a traditional on-premises setup.
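The arithmetic behind those figures can be roughed out from the published per-host price. This is only a back-of-envelope sketch: the on-demand hourly rate and the VMs-per-host consolidation ratio below are illustrative assumptions, not official VMware numbers.

```python
# Back-of-envelope check of the published VMware Cloud on AWS pricing.
# ASSUMPTIONS (not official figures): the per-host on-demand rate and
# the number of VMs packed onto one host are illustrative guesses.

HOURS_PER_YEAR = 8766  # average year length in hours

three_year_contract = 109_366   # per host, three-year reserved (from VMware)
on_demand_hourly = 8.37         # assumed per-host on-demand rate, $/hour
vms_per_host = 50               # assumed consolidation ratio

# What three years of on-demand billing would cost for one host
on_demand_3yr = on_demand_hourly * HOURS_PER_YEAR * 3

# Discount of the reserved contract versus on-demand
savings = 1 - three_year_contract / on_demand_3yr

# Effective per-VM, per-hour cost over the contract term
per_vm_hour = three_year_contract / (HOURS_PER_YEAR * 3 * vms_per_host)

print(f"3-year vs. on-demand savings: {savings:.0%}")
print(f"Effective cost per VM-hour: ${per_vm_hour:.3f}")
```

Under these assumed inputs the reserved discount lands near the 50% VMware cites, and the per-VM hourly cost falls in the sub-$0.09 range; actual numbers depend on the real on-demand rate and how densely VMs are packed per host.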
Stay or go?
Whether the move is worth the cost will depend on an organization’s in-house environments — those that are less efficient or bloated are the best candidates, said Kyle Hilgendorf, a Gartner analyst.
Erik Anderson, a senior network engineer at a Midwest healthcare company, said his team works entirely on-premises, but is looking at the public cloud to localize workloads in other parts of the globe. Where those workloads will land will depend on cost and other factors, but those decisions won’t be made any time soon, he said.
“If it turns out the stuff that VMware and AWS is doing reduces operational expenses and administrative headaches, that would be the ideal choice,” Anderson said.
The service is built on bare metal, and VMware will carve out capacity within AWS data centers to then provide scalable infrastructure to its customers. It’s the first time bare metal has been sold on AWS and VMware’s SSD architecture is different from AWS’, but executives for both companies don’t foresee capacity issues beyond what users typically find when requesting resources on AWS.
For customers, adding VMware Cloud capacity as part of the service will be no different from adding any other instance type AWS sells, said AWS CEO Andy Jassy.
The service may even accelerate adoption among companies that already have a footprint in both environments, said Peter Scott, COO of DivvyCloud, a multicloud automation and management company in Arlington, Va., that is among the partner ecosystem for VMware Cloud on AWS.
IT shops, however, are wary of moving to the public cloud workloads that are built on a different operating model and aren’t easily or flexibly scalable, he said.
“You’re essentially taking a whole lot of legacy workloads and sticking them in public cloud, which is ephemeral and by its very definition is very different,” Scott said. “If you’re going to take this stuff and put it in the public cloud that runs 24/7, 365 days a year, you’d be better off back in your data center.”
There are limitations to the new capabilities. Customers can bring applications back and forth, but they will still have to pay the standard AWS egress fees. Amazon doesn’t charge customers to bring data into the cloud, but the cost to pull data out is prohibitive for most users, and is a main reason the public cloud is criticized for workload lock-in. Also, the VMware Cloud on AWS service is currently limited to the AWS U.S. West (Oregon) region, and won’t be available in other regions until 2018.
AWS and VMware executives said this is just the first step in the partnership, and though they didn’t provide specifics about future services, they listed tighter integration and migration assistance as items to improve.
“I definitely sense Amazon sees a lot of opportunity and investing more of their time going forward,” Datapipe’s Lucky said.
AWS and VMware executives went out of their way to characterize the partnership as more than just marketing, and observers say the product is surprisingly mature, despite the early limitations and the lengthy wait to bring it to market.
And though the deal has been publicly discussed for nine months, the actual product release culminates a shifted cloud strategy for both companies. AWS was once borderline dismissive about the future of hybrid cloud, and VMware initially sought to build its own public cloud to usurp AWS and keep everything within its own ecosystem. Officials for both companies, however, effusively praised each other and cited huge potential to extend these capabilities to thousands of customers in the years ahead.
And now that some of the critical information about the service is public, particularly the pricing, customers will ultimately decide whether adoption lives up to the hype.
“Without knowing the price, how attractive it is is relative, and we got a lot of questions about that,” Lucky said. “Now at least it’s out there so the conversation can move past that.”
Keith Townsend, blogging in The CTO Advisor, took a look at the VMware business model ahead of VMworld 2017.
Townsend said he disagrees with other analysts who view VMware as decreasing in relevance. According to Townsend, VMware vCloud Air didn’t meet market demand, leaving many administrators accustomed to vSphere seeking greater API-level access and self-service for developers.
However, he said VMware has since reset, partnering with Amazon to provide VMware cloud in Amazon Web Services. Nevertheless, he added that Microsoft Azure is able to compete with VMware Cloud on AWS, with on-premises offerings, as well as good support and integration.
Townsend also explored the VMware business model from the standpoint of network and security. While he said VMware has done a good job hiring networking experts and debugging NSX microsegmentation deployments, the vendor still faces challenges from Cisco. He complained that VMware lacks a public strategy around cloud-native systems and doesn’t directly address cloud-native hurdles, like serverless networking or platform as a service.
Read more of Townsend’s thoughts on VMware.
What makes a cybersecurity vendor enterprise-class?
Jon Oltsik, an analyst with Enterprise Strategy Group Inc. in Milford, Mass., explored the list of vendors considered to be enterprise-class. Oltsik identified IBM, Symantec, McAfee and Cisco as the top vendors. Recent ESG research asked 176 cybersecurity professionals about the characteristics of enterprise-class vendors. Among respondents, 35% said the most important factor was a strong understanding of enterprise business processes and regulations.
Oltsik said enterprise-class vendors will serve as both technology and industry leaders, offering strong communities, cybersecurity education, open standards, research and abundant services.
“This blueprint for enterprise-class cybersecurity vendors won’t be easy to build, as it will take shrewd leadership, ample resources, and a firm organizational commitment to get there,” Oltsik wrote. “Nevertheless, I firmly believe that at least one vendor will separate itself from the pack. Winners have the opportunity to reap rich financial rewards and make a true difference,” he added.
Dig deeper into Oltsik’s thoughts on cybersecurity vendors.
The costs of hardware and disaggregation
Ivan Pepelnjak, blogging in ipSpace, said networking vendors prefer selling software-hardware bundles while pretending customers are paying for the hardware. In the early evolution of networking, software needed to be tightly coupled to hardware. But ever since Cisco PIX, an early network address translation and firewall appliance launched in the mid-1990s, software has run on commodity hardware, Pepelnjak said.
“The ‘real’ reason networking vendors continue to use this charade is probably the habits and psychology of selling networking gear: Customers believe they’re buying unicorn-based expensive hardware, whereas in fact they’re really buying the zillions of man-years invested in software development,” he wrote.
Pepelnjak joked that vendors are moving away from a hardware-driven sales model at “glacial speeds.” He suggested that network professionals should focus less on the expense of hardware and more on total cost of ownership.
Explore more of Pepelnjak’s thoughts on cost of ownership.