Tag Archives: Cloud

Chip bugs hit cloud computing usage less than first feared

In the aftermath of one of the largest processor vulnerability disclosures in years, it turns out that cloud computing usage won’t suffer greatly after all.

Public clouds were potentially among the most imperiled architectures from the Spectre and Meltdown chip vulnerabilities. But at least from the initial patches, the impact to these platforms’ security and performance appears to be less dire than predicted.

Many industry observers expressed concern that these chip-level vulnerabilities would make the multitenant cloud model a conspicuous target for hackers to gain access to data in other users’ accounts on the same shared host. But major cloud vendors’ quick responses — in some cases months ago — have largely addressed those issues.

Customers must still update systems that live on top of the cloud, but with the underlying patches, cloud environments are well-positioned to address the initial concerns about data theft. And cloud customers have far less to do than a company that owns its own data center and needs to update its hardware, microcode, hypervisor and perhaps management instances.

“The sky is not falling; just relax,” said Chris Gardner, an analyst with Forrester Research. “They’re probably the most critical CPU bugs we’ve seen in quite some time, but the mitigations help and the chip manufacturers are already working on long-term solutions.”

In some ways, vendors’ rapid response to install fixes to the Meltdown and Spectre vulnerabilities also illustrates their centralization and automation chops.

“We couldn’t have worked with hardware vendors and open source projects like Linux at the pace they were able to patch,” said Joe Kinsella, CTO and founder of CloudHealth, a cloud management platform vendor in Boston. “The end result is a testament to the centralization of ability to actually go and respond.”

Security experts say there are no known exploits in the wild for the Meltdown and the two-pronged Spectre vulnerabilities. The execution of a hack through these vulnerabilities, especially Spectre, is beyond the scope of the average hacker, who is far more likely to find a path of least resistance, they say.

In fact, the real impact from Meltdown and Spectre vulnerabilities so far has been the patching process itself. Microsoft, in particular, riled some of its Azure customers with forced, unscheduled reboots, after reports about Meltdown and Spectre surfaced before the embargo on the disclosure was to be lifted. Google, for its part, said it avoided reboots by live migrating all its customers.

And while Amazon Web Services (AWS), Microsoft, Google and others could quietly get ahead of the problem to varying degrees, smaller cloud companies were often left scrambling.

AMD and Intel have worked on firmware updates to further mitigate the problem, but early versions of these have caused issues of their own.  Updated patches are supposedly imminent, but it’s unclear if they will require another round of cloud provider reboots.

The initial patches to Meltdown and Spectre are stopgap measures — it may take years to redesign chips in a way that doesn’t rely on speculative execution, an optimization technique at the root of these vulnerabilities. It’s also possible that any fundamental redesign of these chips could ultimately benefit cloud vendors, which swap out hardware more frequently than traditional enterprises and thus could jump on the new processors faster.

These flaws could cause potential customers to rein in their cloud computing usage, or do additional due diligence before they transition out of their own data centers. This is particularly true in the financial sector and other heavily regulated industries that have just begun to warm to the public cloud.

“If you [are] starting a new project, there’s this question mark that wasn’t there before,” said Marty Puranik, CEO of Atlantic.Net, a cloud hosting provider in Orlando, Fla. “I can’t imagine a chief risk officer or chief security officer saying this is inconsequential to what we’re going to do in the future.”

Performance hits not as bad as first predicted

The other potential fallout from Spectre and Meltdown is how the patches will impact performance. Initial predictions were up to a 30% slowdown, and frustrated customers took to the internet to highlight major performance hits. Cloud vendors have pushed back on those estimates, however, and multiple managed service providers that oversee thousands of servers on behalf of their clients said that the vast majority of workloads were unaffected.

While it remains to be seen if performance issues will start to emerge over time, IT pros seem to corroborate the providers’ claims. More than a dozen sources — many of whom requested anonymity because of the situation’s sensitive and fluid nature — told SearchCloudComputing that they saw almost no impact from the patches.

The reality is that the number of impacted systems is fairly small and the performance impact is highly variable, said Kinsella. “If it was really 30% I think we’d be having a different conversation because that’s like rolling back a couple years of Moore’s Law,” he said.

Zendesk, based in San Francisco, suspected something was up with its cloud environment following an uptick in reboot notices from AWS toward the end of 2017, said Steve Loyd, vice president of technology operations at Zendesk. Those reboots weren’t exactly welcome, but were better than the alternative, and the company hasn’t seen a big impact from testing patches so far, he said.

Google said it has seen no reports of notable impacts for its cloud customers, while Microsoft and AWS initially said they expected a minority of customers to see performance degradation. It’s unclear how Microsoft has mitigated these issues for those customers, though it has recommended customers switch to a faster networking service that just became generally available. AWS said in a statement that, since installing its patches, it has worked with impacted customers to optimize workloads and “in almost every case, prevent significant changes to their cost.”

The biggest potential exception to these negligible impacts on cloud computing usage would be anything that uses the OS kernel extensively, such as distributed databases or caching systems. Of course, the same type of workload on premises would presumably face the same problem, but even a small impact adds up at scale.
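
Because the Meltdown mitigations add overhead to every transition between user space and the kernel, one rough way to gauge a workload’s exposure is to time a syscall-heavy loop before and after patching. A minimal sketch, assuming a Unix-like system with Python installed; this is an illustration, not a rigorous benchmark:

```python
import os
import time

N = 200_000
start = time.perf_counter()
for _ in range(N):
    os.stat("/tmp")  # each call crosses the user/kernel boundary
elapsed = time.perf_counter() - start
print(f"{N} stat() calls in {elapsed:.3f}s "
      f"({elapsed / N * 1e6:.2f} microseconds per call)")
```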

“If any single system doesn’t appear to have more than 1% impact, it’s almost immeasurable,” said Eric Wright, chief evangelist at Turbonomic, a Boston-based hybrid cloud management provider. “But if you have that across 100 systems, you have to add one new virtual system to your load, so no matter how you slice it, there’s some kind of impact.”

Cloud providers also could take more of a hit with customers simply because of their pricing schemes. A company that owns its own data center could just throw some underused servers at the problem. But cloud vendors charge based on CPU cycles, and slower workloads there could have a more pronounced impact, said Pete Lindstrom, an analyst at IDC.

“It’s impressionistic stuff but that’s how security works,” he said. “Really, the question will be what does the monthly bill look like, and is the impact actually there?”

The biggest beneficiary from performance impacts could be abstracted services, such as serverless or platform-as-a-service products. In those scenarios, all patching is the responsibility of the provider, and analysts believe these services will appear unaltered to the customer.

ACI Information Group, a news and social media aggregator, patched its AWS EC2 instances, base AMIs and Docker images. So far the company hasn’t noticed any huge issues, but employees did take note that its serverless workloads required no work on their part to address the problem and the performance was unaffected, said Chris Moyer, vice president of technology at ACI and a TechTarget contributor.

“We have about 40% of our workload on serverless now, so that’s a big win for us too, and another reason to complete our migration entirely,” he said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

AWS Cloud9 IDE threatens Microsoft developer base

As cloud platform providers battle for supremacy, they’ve trained their sights on developers to expand adoption of their services.

A top issue now for leading cloud platforms is to make themselves as developer-friendly as possible to attract new developers, as both Microsoft and Amazon Web Services have done. For instance, at its re:Invent 2017 conference last month, AWS launched the Cloud9 IDE, a cloud-based integrated development environment that can be accessed through any web browser. That fills in a key missing piece for AWS as it competes with other cloud providers — an integrated environment to write, run and debug code.

“AWS finally has provided a ‘living room’ for developers with its Cloud9 IDE,” said Holger Mueller, an analyst at Constellation Research in San Francisco. The move helps AWS compete with rivals, especially Microsoft, which continues to extend its longtime strengths in developer tools and community relationships into the cloud era.

Indeed, for developers who have grown up in the Microsoft Visual Studio IDE ecosystem, Microsoft Azure is a logical choice, as the two have been optimized for one another. However, not all developers use Visual Studio, so cloud providers must deliver an open set of services to attract them. Now, having integrated the Cloud9 technology it acquired last year, AWS has an optimized developer platform of its own.

AWS Cloud9 IDE adoption 

“There is no doubt we will use it,” said Chris Wegmann, managing director of the Accenture AWS Business Group at Accenture. “We’ve used lots of native tooling. There have been gaps in the app dev tooling for a while, but some third parties, like Cloud9, have filled those gaps in the past. Now it is part of the mothership.”

With the Cloud9 IDE, AWS offers developers an IDE experience focused on their cloud versus having them use their top competitor’s IDE with an AWS-focused toolkit, said Rhett Dillingham, an analyst at Moor Insights & Strategy in Austin, Texas.

“[They] are now providing an IDE with strong AWS service integration, for example, for building serverless apps with Lambda, as they build out its feature set with real-time pair programming and direct terminal access for AWS CLI [command-line interface] use,” he said.

That integration is key to lure developers away from their familiar development environments.

“When I saw the news about the Cloud9 IDE I said that’s great, there’s another competitor in this market,” said Justin Rupp, systems and cloud architect at GlobalGiving, a crowdfunding organization in Washington, D.C. Rupp uses Microsoft’s popular Visual Studio Code tool, also known as VS Code, a lightweight code editor for Windows, Linux and macOS.

The challenge for AWS is to attract developers that already like the tool they’re using, and that’ll be a tall order, said Michael Facemire, an analyst at Forrester Research in Cambridge, Mass. “I’m a developer myself and I’m not giving up VS Code,” he said.

For now, Cloud9 IDE is a “beachhead” for AWS to present something for developers today, and build it up over time, Facemire said. For example, to tweak a Lambda function, a developer could just pull up the cloud editor that Amazon provides right there live, he said.

“That’s been the knock against AWS, that they provide lots of cool functionality, but no tooling,” Facemire said. “This starts to address that big knock.”
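
To make that concrete, here is a minimal sketch of the sort of Lambda function a developer might pull up and tweak directly in a browser-based editor. The event field is a hypothetical placeholder, not anything AWS defines:

```python
import json

def lambda_handler(event, context):
    # Read a hypothetical "name" field from the invoking event.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```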

Who is more developer-friendly?

AWS has a reputation as not the most developer-friendly cloud platform from a tooling perspective, though hardcore, professional developers don’t require such tooling. But as AWS has grown and expanded, it has become friendlier to the rest of the developer community because of its sheer volume and consumability. The AWS Cloud9 IDE appeals to developers who fit between the low-code set and the hardcore pros, said Mark Nunnikhoven, vice president of cloud research at Dallas-based Trend Micro.

“The Cloud9 tool set is firmly in the middle, where you’ve got some great visualization, you’ve got some great collaboration features, and it’s really going to open it up for more people to be able to build on the AWS cloud platform,” he said.

Despite providing a new IDE to its developer base, AWS must do more to win their complete loyalty.

AWS Cloud9 IDE supports JavaScript, Python, PHP and more, but it lacks first-class Java support, which is surprising given how many developers use Java. Amazon also chose not to use the open source Language Server Protocol (LSP), said Mike Milinkovich, executive director of the Eclipse Foundation, which has provided the Eclipse Che web-based development environment since 2014. Eclipse Che supports Java and has provided containerized developer workspaces for almost two years.

AWS will eventually implement Java support, but it will have to build that support itself from scratch, he said. Had it participated in the LSP ecosystem, it could have had Java support today based on the Eclipse LSP4J project, the same codebase with which Microsoft provides Java support for VS Code, he said.
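
For context, LSP messages are JSON-RPC 2.0 payloads framed with a Content-Length header, which is what lets any conforming editor reuse any conforming language server. A minimal sketch of the initialize request an editor sends on startup (the project path is a hypothetical placeholder):

```python
import json

def frame_lsp_message(payload: dict) -> bytes:
    # LSP frames each JSON-RPC payload with a Content-Length header.
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "processId": None,
        "rootUri": "file:///home/dev/project",  # hypothetical path
        "capabilities": {},
    },
}

print(frame_lsp_message(initialize))
```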

This proprietary approach to developer tools is out of touch with industry best practices, Milinkovich said. “Cloud9 may provide a productivity boost for AWS developers, but it will not be the open source solution that the industry is looking for,” he said.

Constellation Research’s Mueller agreed, and noted that in some ways AWS is trying to out-Microsoft Microsoft.

“It’s very early days for AWS Cloud9 IDE, and AWS has to work on the value proposition,” he said. “But, like you have to use Visual Studio for Azure to be fully productive, the same story will repeat for Cloud9 in a few years.”

Microsoft scoops up NAS vendor Avere for hybrid cloud services

Microsoft moved to bolster its cloud storage capabilities with the acquisition of NAS vendor Avere Systems, giving it a high-performance file system to manage unstructured data in hybrid clouds.

Avere, based in Pittsburgh, incorporates its Avere OS file system in FXT Edge filers, which come in all-flash or spinning disk versions for on-premises or hybrid cloud configurations. Avere also provides a virtual appliance, the Virtual FXT Edge filer, which is available for Amazon Web Services (AWS) and Google Cloud Platform.

The terms of the deal were not disclosed.

Microsoft disclosed the acquisition in a blog post on its website but declined an interview request to provide more details about its plans for the cloud NAS vendor. Microsoft acquired cloud storage gateway vendor StorSimple in 2012, and gives that technology to Azure subscribers to tier data into the cloud.

However, Avere CEO Ron Bianchini wrote in a company blog post that the two companies’ “shared vision” is to use Avere technology “in the data center, in the cloud and in hybrid cloud storage …” while tightly integrating it with Azure.

“Avere and Microsoft recognize that there are many ways for enterprises to leverage data center resources and the cloud,” Bianchini wrote. “Our shared vision is to continue our focus on all of Avere’s use cases — in the data center, in the cloud and in hybrid cloud storage and cloud bursting environments. Tighter integration with Azure will result in a much more seamless experience for our customers.”

Avere was founded in 2008 as a company that focused on the data center with its FXT Core Filers, which used flash to accelerate network-attached storage (NAS) performance on disk systems. The company later transitioned to the cloud with its Avere FXT Edge Filers, which served as NAS for public clouds and allowed customers to connect on-premises storage to AWS, Google Cloud and Azure services.

In addition to NFS and SMB protocols, the Avere Cloud NAS appliance supports object storage from IBM Cleversafe, Western Digital, SwiftStack and others through its C2N Cloud-Core NAS platform.

The NAS vendor also sells FlashCloud, which runs on FXT Edge Filers with object APIs to connect to public and private clouds. The systems can be clustered so that cloud-based NAS can scale on premises while also providing high-availability access to data in the cloud. Customers can use FlashCloud software as a file system for object storage and move data to the cloud without requiring a gateway.

“They provide a true NAS filer,” said Marc Staimer, founder of Dragon Slayer Consulting. “They provide a complete, end-to-end package. The only other vendor that offers end-to-end is Oracle. But Oracle does not have a global namespace. Avere has a global namespace.”

Avere founders Bianchini, CTO Michael Kazar and technical director Daniel Nydick came from NetApp, which acquired their previous company Spinnaker Networks in 2004 for its clustered NAS technology.

Avere’s customers include Sony Pictures Imageworks, animation studio Illumination Mac Guff, the Library of Congress, Johns Hopkins University and Teradyne Inc. The company is private and does not disclose revenue, but a source close to the vendor put its bookings at $7 million in the fourth quarter of 2016 and $22 million for the year, up from $4.8 million and $14.5 million, respectively, in 2015.

In March of 2017, Google became an Avere investor during the company’s $14 million Series E funding round. Avere raised about $100 million in total funding. Previous investors include Menlo Ventures, Norwest Venture Partners, Lightspeed Venture Partners, Tenaya Capital and Western Digital Technologies.

The Avere team will continue to work out of its Pittsburgh office for Microsoft.

Microsoft to acquire Avere Systems, accelerating high-performance computing innovation for media and entertainment industry and beyond – The Official Microsoft Blog

The cloud is providing the foundation for the digital economy, changing how organizations produce, market and monetize their products and services. Whether it’s building animations and special effects for the next blockbuster movie or discovering new treatments for life-threatening diseases, the need for high-performance storage and the flexibility to store and process data where it makes the most sense for the business is critically important.

Over the years, Microsoft has made a number of investments to provide our customers with the most flexible, secure and scalable storage solutions in the marketplace. Today, I am pleased to share that Microsoft has signed an agreement to acquire Avere Systems, a leading provider of high-performance NFS and SMB file-based storage for Linux and Windows clients running in cloud, hybrid and on-premises environments.

Avere uses an innovative combination of file system and caching technologies to support the performance requirements for customers who run large-scale compute workloads. In the media and entertainment industry, Avere has worked with global brands including Sony Pictures Imageworks, animation studio Illumination Mac Guff and Moving Picture Company (MPC) to decrease production time and lower costs in a world where innovation and time to market is more critical than ever.

High-performance computing needs, however, do not stop there. Customers in life sciences, education, oil and gas, financial services, manufacturing and more are increasingly looking for these types of solutions to help transform their businesses. The Library of Congress, Johns Hopkins University and Teradyne, a developer and supplier of automatic test equipment for the semiconductor industry, are great examples where Avere has helped scale datacenter performance and capacity, and optimize infrastructure placement.

By bringing together Avere’s storage expertise with the power of Microsoft’s cloud, customers will benefit from industry-leading innovations that enable the largest, most complex high-performance workloads to run in Microsoft Azure. We are excited to welcome Avere to Microsoft, and look forward to the impact their technology and the team will have on Azure and the customer experience.

You can also read a blog post from Ronald Bianchini Jr., president and CEO of Avere Systems, here.

Top cloud providers dominate headlines in 2017

It’s no surprise that the top cloud providers, Amazon Web Services, Microsoft Azure and Google, continued to dominate technology headlines in 2017. This year, we saw these cloud giants perform the same one-upmanship around tools, services and prices that we have in the past — but this time, with a sharper focus on technologies such as containers and hybrid cloud.

Before you head into 2018, refresh your memory of SearchCloudComputing’s top news from the past year:

Amazon, Microsoft crave more machine learning in the cloud

All the top cloud providers see the importance of machine learning, and Amazon Web Services and Microsoft Azure put their differences aside in October to jointly create Gluon, an open source deep learning interface based on Apache MXNet. The new library is intended to make AI technologies more accessible to developers and help them more easily create machine learning models. In the future, Gluon will work with Microsoft Cognitive Toolkit.

Meanwhile, Google Cloud Platform offers TensorFlow, another open source library for machine learning. While TensorFlow is a formidable opponent, some developers shy away from it due to its complexities.

The main problem that all providers face in this space is that the public cloud isn’t always the best environment for complex machine learning workloads due to cost, data gravity or a lack of skill. Some data scientists continue to use the public cloud to test, but then run the workloads on premises.

Google hybrid cloud strategy crystallizes with Nutanix deal

While cloud is popular, many workloads are still kept on premises — either due to their design or compliance issues. Top cloud providers continue to seek partnerships to target the hybrid market and ease the gap between data centers and the cloud.

The Amazon-VMware deal is the most commonly cited example of this. But in June 2017, Google partnered with Nutanix to fuel its own hybrid efforts. Next year, customers will be able to manage and deploy workloads between the Google public cloud and their own hyper-converged infrastructure from a single interface. The partnership also extends Google cloud services, such as BigQuery, to Nutanix customers, and enables them to use Nutanix boxes as edge devices.

Kubernetes on Azure hints at hybrid cloud endgame

One of containers’ main advantages is enhanced portability between cloud platforms — a feature that’s especially attractive to hybrid cloud users. In February 2017, Microsoft unveiled the general availability of Kubernetes on Azure Container Service, making it the first public cloud provider to support all the major container orchestration engines: Kubernetes, Mesosphere’s DC/OS and Docker Swarm.

The move was one that could especially benefit hybrid cloud users because both Docker Swarm and Kubernetes enable teams to manage containers that run on multiple platforms from a single location. In October, Azure rolled out a new managed Kubernetes service, and rebranded ACS as AKS. AWS countered in November with Amazon Elastic Container Service for Kubernetes, a managed service.

Azure migration takes hostile approach to lure VMware apps

To compete with VMware Cloud on AWS, Microsoft released a similar service for Azure in November 2017 — without VMware support.

Azure Migrate enables enterprises to analyze their on-premises environment, discover dependencies and more easily migrate VMware workloads into the Azure public cloud. A bare-metal subset of the service, VMware virtualization on Azure, is expected to be available in 2018 and enables users to run a VMware stack on top of Azure hardware. While the service is based on a partnership with unnamed VMware partners and involves VMware-certified hardware, its development didn’t directly involve VMware itself, and it cuts the vendor out of potential revenues. VMware has since said it will not recommend or support the product.

Cloud pricing models reignite IaaS provider feud

The price war continued in 2017, but the top cloud providers changed their tune: instead of direct cuts, they altered their pricing models. AWS abandoned per-hour billing in favor of per-second billing to counter per-minute billing from Google and Azure. Google soon responded with its own shift to a per-second billing model.
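
The difference matters most for short-lived workloads. A toy calculation, assuming an illustrative $0.10-per-hour rate rather than any vendor’s quoted price:

```python
rate_per_hour = 0.10    # illustrative assumption, not a quoted price
job_seconds = 90        # a short batch task

per_hour_cost = rate_per_hour * 1                     # billed for a full hour
per_second_cost = rate_per_hour / 3600 * job_seconds  # billed for 90 seconds

print(f"per-hour billing:   ${per_hour_cost:.4f}")    # $0.1000
print(f"per-second billing: ${per_second_cost:.4f}")  # $0.0025
```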

Microsoft, for its part, added a Reserved VM Instances option to Azure, which provides discounts to customers that purchase compute capacity in advance for a one- or three-year period. The move was a direct shot at AWS’ Elastic Compute Cloud Reserved Instances, which follow a similar model.

Google Cloud Platform services engage corporate IT

Google continues to pitch its public cloud as a hub for next-generation applications, but in 2017, the company took concrete steps to woo traditional corporations that haven’t made that leap.

Google Cloud Platform services still lag behind Amazon Web Services (AWS) and Microsoft Azure, and Google’s lack of experience with enterprise IT is still seen as GCP’s biggest weakness. But the company made important moves this year to address that market’s needs, with several updates around hybrid cloud, simplified migration and customer support.

The shift to attract more than just the startup crowd has steadily progressed since the hire of Diane Greene in 2015. In 2017, her initiatives bore their first fruit.

Google expanded its Customer Reliability Engineering program to help new customers — mostly large corporations — model their architectures after Google’s. The company also added tiered support services for technical and advisory assistance.

New security features included Google Cloud Key Management Service and the Titan chip, which takes security down to the silicon. Dedicated Interconnect taps directly into Google’s network for consistent and secure performance. Several updates and additions highlighted Google’s networking capabilities, which it sees as an advantage over other platforms, including a slower, cheaper networking tier that Google claims is still on par with the competition’s best.

Google Cloud Platform services also expanded into hybrid cloud through separate partnerships with Cisco and Nutanix, with products from each partnership expected to be available in 2018. The Cisco deal involves a collection of products for cloud-native workloads and will lean heavily on open source projects Kubernetes and Istio. The Nutanix deal is closer to the VMware on AWS offering as a lift-and-shift bridge between the two environments.

And for those companies that want to move large amounts of data from their private data centers to the cloud, Google added its own version of AWS’ popular Snowball device. Transfer Appliance is a shippable server that can be used to transfer up to 1 PB of compressed data to Google cloud data centers.

In many ways, GCP is where Microsoft Azure was around mid-2014, as it tried to frame its cloud approach and put together a cohesive strategy, said Deepak Mohan, an analyst with IDC.

“They don’t have the existing [enterprise] strength that Microsoft did, and they don’t have that accumulated size that AWS does,” he said. “The price point is fantastic and the product offering is fantastic, but they need to invest in finding how they can approach the enterprise at scale.”

To help strengthen its enterprise IT story, Google bolstered its relatively small partner ecosystem — a critical piece to help customers navigate the myriad low- and high-level services — with partnerships forged with companies such as SAP, Pivotal and Rackspace. Though still not in the league of AWS or Azure, Google also has stockpiled some enterprise customers of its own, such as Home Depot, Coca-Cola and HSBC, to help sell its platform to that market. And it hired former Intel data center executive Diane Bryant as COO in November.

GCP also more than doubled its global footprint, with new regions in Northern Virginia, Singapore, Sydney, London, Germany, Brazil and India.

Price and features still matter for Google

Price is no longer the first selling point for Google Cloud Platform services, but it remained a big part of the company’s cloud story in 2017. Google continued to drop prices across various services, and it added a Committed Use Discount for customers that purchase a certain monthly capacity for one to three years. Those discounts were particularly targeted at large corporations, which prefer to plan ahead with spending when possible.

There were plenty of technological innovations in 2017, as well. Google Cloud Platform was the first to use Intel’s next-gen Skylake processors, and several more instance types were built with GPUs. The company also added features to BigQuery, one of its most popular services, and improved its interoperability with other Google Cloud Platform services.

Cloud Spanner, which sprang from an internal Google tool, addresses challenges with database applications on a global scale that require high availability. It provides the consistency of transactional relational databases with the distributed, horizontal scaling associated with NoSQL databases. Cloud Spanner may be too advanced for most companies, but it made enough waves that Microsoft soon followed with its Cosmos DB offering, and AWS upgraded its Aurora and DynamoDB services.

That illustrates another hallmark of 2017 for Google’s cloud platform: On several fronts, the company’s cloud provider competitors came around to Google’s way of thinking. Kubernetes, the open source tool spun out of Google in 2014, became the de facto standard in container orchestration. Microsoft came out with its own managed Kubernetes service this year, and AWS did the same in late November — much to the delight of its users.

Machine learning, another area into which Google has pushed headlong for the past several years, also came to the forefront, as Microsoft and Amazon launched — and heavily emphasized — their own new products that require varying levels of technical knowhow.

Coming into this year, conversations about the leaders in the public cloud centered on AWS and Microsoft, but by the end of 2017, Google managed to overtake Microsoft in that role, said Erik Peterson, co-founder and CEO of CloudZero, a Boston startup focused on cloud security and DevOps.

“They really did a good job this year of distinguishing the platform and trying to build next-generation architectures,” he said.

Azure may be the default choice for Windows, but Google’s push into cloud-native systems, AI and containers has planted a flag as the place to do something special for companies that don’t already have a relationship with AWS, Peterson said.

Descartes Labs, a geospatial analytics company in Los Alamos, N.M., jumped on Google Cloud Platform early on, partly because of Google’s activity with containers. Today, about 90% of its infrastructure is on GCP, said Tim Kelton, the company’s co-founder and cloud architect. He is pleased not only with how Google Container Engine manages its workloads and responds to new features in Kubernetes, but with how other providers have followed Google’s lead.

“If I need workloads on all three clouds, there’s a way to federate that across those clouds in a fairly uniform way, and that’s something we never had with VMs,” Kelton said.

Kelton is also excited about Istio, an open source project led by Google, IBM and Lyft that sits on top of Kubernetes and creates a service mesh to connect, manage and secure microservices. The project looks to address issues around governance and telemetry, as well as things like rate limits, control flow and security between microservices.

“For us, that has been a huge part of the infrastructure that was missing that is now getting filled in,” he said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

Azure feature updates in 2017 play catch up to AWS

Microsoft Azure has already solidified its position as the second most popular public cloud, and critical additions in 2017 brought the Azure feature set closer to parity with AWS.

In some cases, Azure leapfrogged its competition. But a bevy of similar products bolstered the platform as a viable alternative to Amazon Web Services (AWS). Some Microsoft initiatives broadened the company’s database portfolio. Others lowered the barrier to entry for Azure, and pushed further into IoT and AI. And the long-awaited on-premises machine, Azure Stack, seeks to tap surging hybrid cloud interest from companies not yet ready to render their private data centers obsolete.

Like all the major public cloud providers, Microsoft Azure doubled down on next-generation applications that rely on serverless computing and machine learning. Among the new products are Machine Learning Workbench, intended to improve productivity in developing and deploying AI applications, and Azure Event Grid, which helps route and filter events built in serverless architectures. Some important upgrades to Azure IoT Suite included managed services for analytics on data collected through connected devices, and Azure IoT Edge, which extends Azure functionality to connected devices.

Many of those Azure features are too advanced for most corporations that lack a team of data scientists. However, companies have begun to explore other services that rely on these underlying technologies in areas such as vision, language and speech recognition.

AvePoint, an independent software vendor in Jersey City, N.J., took note of the continued investment by Microsoft this past year in its Azure Cognitive Services, a turnkey set of tools to get better results from its applications.

“If you talk about business value that’s going to drive people to use the platform, it’s hard to find a more business-related need than helping people do things smartly,” said John Peluso, Microsoft regional director at AvePoint.

Microsoft also joined forces with AWS on Gluon, an open source, deep learning interface intended to simplify the use of machine learning models for developers. And the company added new machine types that incorporate GPUs for AI modeling.

Azure compute and storage get some love, too

Microsoft’s focus wasn’t solely on higher-level Azure services. In fact, the areas in which it caught up the most with AWS were in its core compute and storage capabilities.

The burstable B-Series are the cheapest machines available on Azure, designed for workloads that don’t always need full CPU performance, such as test and development or web servers. But more importantly, they provide an on-ramp to the platform for those who want to sample Azure services.

Other Azure feature additions included the M-Series machines, which support SAP workloads with up to 20 TB of memory, a new bare-metal VM and the incorporation of Kubernetes into Azure’s container service.

“I don’t think anybody believes they are on par [with AWS] today, but they have momentum at scale and that’s important,” said Deepak Mohan, an analyst at IDC.

In storage, Managed Disks is a new Azure feature that handles storage resource provisioning as applications scale. Archive Storage provides a cheap option to house data as an alternative to Amazon Glacier, as well as a standard access model to manage data across all the storage tiers.

Reserved VM Instances emulate AWS’ popular Reserved Instances to provide significant cost savings for advance purchases and deeper discounts for customers that link the machines to their Windows Server licenses. Azure also added low-priority VMs — the equivalent of AWS Spot Instances — that can provide even further savings but should be limited to batch-type projects because they can be preempted.

The addition of Azure Availability Zones was a crucial update for mission-critical workloads that need high availability. It brings greater fault tolerance to the platform through the ability to spread workloads across separate zones within a region and achieve a guaranteed 99.99% uptime.
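
The arithmetic behind zone spreading is straightforward. With illustrative numbers only (real SLAs depend on many factors), if a deployment in a single zone is up 99.9% of the time and zone failures are independent, replicas in two zones are both down far more rarely:

```python
single_zone = 0.999                     # assumed availability, not an Azure figure
two_zones = 1 - (1 - single_zone) ** 2  # 1 minus the chance both zones fail at once
print(f"{two_zones:.6f}")               # 0.999999
```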

“It looks to me like Azure is very much openly and shamelessly following the roadmap of AWS,” said Jason McKay, senior vice president and CTO at Logicworks, a cloud managed service provider in New York.

And that’s not a bad thing, because Microsoft has always been good at being a fast follower, McKay said. There’s a fair amount of parity in the service catalogs for Azure and AWS, though Azure’s design philosophy is a bit more tightly coupled between its services. That means potentially slightly less creativity, but more functionality out of the box compared to AWS, McKay said.

Databases and private data centers

Azure Database Migration Service has helped customers transition from their private data centers to Azure. Microsoft also added full compatibility between SQL Server and the fully managed Azure SQL database service.

Azure Cosmos DB, a fully managed NoSQL cloud database, may not see a wave of adoption any time soon, but has the potential to be an exciting new technology to manage databases on a global scale. And in Microsoft’s continued evolution to embrace open source technologies, the company added MySQL and PostgreSQL support to the Azure database lineup as well.

The company also improved management and monitoring, incorporating tools from Microsoft’s acquisition of Cloudyn, and added security features. Azure confidential computing encrypts data while in use, complementing existing encryption options for data at rest and in transit, while Azure Policy added new governance capabilities to enforce corporate rules at scale.

Other important security upgrades include Azure App Service Isolated, which made it easier to install dedicated virtual networks in the platform-as-a-service layer. The Azure DDoS Protection service aims to protect against distributed denial-of-service attacks, new capabilities put firewalls around data in Azure Storage, and virtual network service endpoints limit the exposure of data to the public internet when accessing multi-tenant Azure services.

Azure Stack’s late arrival

Perhaps Microsoft’s biggest cloud product isn’t part of its public cloud at all. After two years of fanfare, Azure Stack finally went on sale in late 2017. It brings many of the tools found in the Azure public cloud into private facilities, for customers that have higher regulatory demands or simply aren’t ready to vacate their data centers.

“That’s a huge area of differentiation for Microsoft,” Mohan said. “Everybody wants true compatibility between services on premises and services in the cloud.”

Rather than build products that live on premises, AWS joined with VMware to build a bridge for customers that want their full VMware stack on AWS either for disaster recovery or extension of their data centers. Which approach will succeed depends on how protracted the shift to public cloud becomes — and a longer delay in that shift favors Azure Stack, Mohan said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

2017 cloud computing headlines show upside, hurdles for CIOs

CIOs are moving to the cloud in force. They’re also running into some thorny problems and want to learn how others are dealing with them. Just take a look at SearchCIO’s 2017 cloud computing headlines: One was about untangling the mess of cloud services organizations are now relying on; another was on pitching the economics of cloud to C-level executives. One focused on the common problems IT organizations face when deploying cloud, and another highlighted the undoing of cloud migration: moving from cloud back to solid ground.

These headlines called attention to red-hot trends such as serverless computing, an unexpected adjustment in market research outfit Gartner’s ranking of cloud providers and why cloud remains atop IT execs’ investment priorities.

They’re the 10 most-viewed cloud stories on the site this year, and they’re listed below. So without further ado — SearchCIO’s top 2017 cloud computing headlines.

10. ‘Cloud computing challenges today: Planning, process and people’

This video, filmed at Cloud Expo in New York in June, gets takes from three cloud experts on what trips people up in the cloud today. Ed Featherston, vice president and principal architect at Cloud Technology Partners, said companies don’t do enough planning. Sumit Sarkar, chief data evangelist at data integration vendor Progress Software, expressed concern that data is getting harder to access as companies plug more technologies into their cloud architectures. And Accenture managing consultant Greg Bledsoe said companies moving to the cloud need to work in wholly different ways: “Companies are still managing their cloud infrastructure as if it were physical infrastructure.”

9. ‘Function as a service, or serverless computing: Cloud’s next big act?’

Serverless computing takes away the laborious tasks of provisioning and managing servers, putting a new emphasis on code. “All you’re doing is writing your software code and then you’re packaging it and you’re letting someone else worry about whether the environment is ready for you,” said Kuldeep Chowhan, a principal engineer at Expedia, which is among a growing number of companies using the cloud service. Unlike other forms of cloud, users don’t have to spin up virtual machines. There are cautions, though. For example, it won’t work in every computing environment.

8. ‘Unclouding: How one company reversed the cloud migration process’

Go ahead: Move to the public cloud. If you don’t like it, come back. According to a study cited in this feature story on “unclouding,” or shifting from cloud infrastructure back to physical servers, 40% of organizations that have used the public cloud have moved some or all of their cloud deployments back in-house because of security, cost or manageability concerns. The story follows Nightingale Informatix Corp., a Canadian cloud medical records provider, which built and was testing a public cloud system. Then, a telecom acquired the company and decided to discontinue the public-cloud-based system. That’s when the unclouding began.

7. ‘Multi-cloud environments are everywhere, but managing them is just beginning’

Companies today have so many cloud services tied to so many departments — many without IT’s blessing — they don’t know what belongs to whom. Identifying and then connecting them so they work as though they were one computing system is the right idea, IT consultant Judith Hurwitz told SearchCIO, but “They really don’t know where to start.” In this feature, AstraZeneca’s CIO, Dave Smoley, and Matt Cadieux, CIO at Formula One racecar team Red Bull Racing, share their stories about the multi-cloud fast lane.

6. ‘Selling the value of cloud computing to the C-suite’

When corporate IT goes cloud, its economics will invariably and drastically change, Mark Tonsetic, formerly of consulting outfit CEB, wrote in this tip. IT leaders need to hash that out with CFOs and other C-level executives. Tonsetic listed five “imperatives for getting this conversation right,” including the long-term savings executives expect from a cloud move and what will be done with those savings; tradeoffs the organization may have to make between short-term migration costs and long-term efficiencies; and how public cloud use will affect how IT serves the business.

5. ‘IT Priorities 2017: Tech leaders remain invested in cloud options’

Each year, TechTarget conducts its IT Priorities survey to determine what is topmost in the minds of senior IT leaders. In 2017, it was cloud computing. In this report, SearchCIO relays results of the survey of 971 IT professionals on the technology endeavors their companies would undertake this year. Sixty-four percent said they would increase their budgets for cloud services. The reason, said Quality Consulting’s vice president of operations, Jim Hope, whom SearchCIO interviewed for this story, is economics. “They are looking for cheap,” he said.

4. ‘AWS cloud platform will share cloud computing heights, CEO Jassy says’

Amazon Web Services, Amazon’s cloud business, may be the world’s top cloud platform provider, but it won’t be the only one, chief executive Andy Jassy said. In this news story, written at Gartner’s annual Symposium/ITxpo, Jassy told the gathered audience of IT leaders that companies will continue to invest in multiple cloud providers, choosing one main vendor for most of their data and applications and putting a small amount in other providers’ clouds. What they won’t do is divide their workloads evenly among vendors, he said. CIOs often start out planning to, but it’s hugely difficult, so “very few end up going that route.”

3. ‘OpenStack in the enterprise: Are you up for the challenge?’

OpenStack is a hit at big corporations. AT&T, Disney, Volkswagen, Walmart and PayPal all use the open source cloud system. And it’s no wonder. PayPal’s Jigar Desai, vice president of cloud and platforms, said OpenStack has given the company the public-cloud-like dexterity it needed to move fast and at scale. “OpenStack has been a fantastic journey for us,” Desai told SearchCIO in this feature story. But the technology requires a lot of upfront investment and skills to make it work, so smaller companies may not have the wherewithal. “Is it what you want?” asked Forrester Research analyst Lauren Nelson. “Is it the right choice for you?”

2. ‘Amazon cloud outage: A CIO survivor’s guide’

In SearchCIO’s Searchlight news analysis column, industry observers offered advice for keeping the lights on even when cloud infrastructure has failed, as AWS did in February, taking down wide swaths of the internet. First, IT leaders need to keep calm, analysts said, and then evaluate their architecture and incident response. They should determine their tolerance for risk — and then decide whether to build applications that could withstand events such as outages. That’s a hard, costly thing to do. “Every additional nine of availability, so to speak, gets exponentially more expensive,” said Gartner analyst Lydia Leong.

1. ‘AWS, Azure tie for top spot in 2017 Gartner ranking’

Gartner releases rankings for major cloud providers every year, assessing them on the technical capabilities of their offerings, as well as things like management and support. AWS grabbing the top spot has been practically a given, with the real activity among the catchers-up. This year, there were two No. 1s: AWS and Microsoft Azure, declared analyst Elias Khnaser at Symposium/ITxpo. With companies now relying on AWS along with Azure, the next provider on the list, Google, or others — making multi-cloud a reality for most CIOs — “We’re moving beyond which provider is best,” he said.

Free phone service could boost Dialpad’s UCaaS status

Unified-communications-as-a-service provider Dialpad has released a free version of its cloud business phone system for small organizations with up to five employees.

Subscribers to the service, Dialpad Free, will receive one free office phone number, and up to five employees can be dialed by name or by extension. The free phone service includes the most basic telephony features, except for E911.

The features of the Dialpad Free phone service include 100 outbound calling minutes per month, unlimited inbound calling minutes, 100 inbound and outbound text messages per month, call recording, voicemail and video calling between Dialpad users. The system also integrates with LinkedIn and Google G Suite.

“A lot of small tech startups are using Google’s G Suite for email, calendar and documents,” Nemertes Research analyst Irwin Lazar said. “Being able to use Dialpad for free, which tightly integrates into G Suite, should be attractive.”

While the Dialpad Free phone service won’t generate significant revenue for the provider, Lazar said the service could help boost Dialpad’s recognition in the competitive UCaaS market.

Organizations can download the Dialpad app onto a desktop, laptop, tablet or smartphone. There are free apps for Mac, Windows, iOS and Android. For a limited time, there will be no charge for transferring an existing phone line to the Dialpad Free service. However, there is a $3 fee for porting a number away from Dialpad Free.

Facebook partners push Workplace adoption

Talk Social to Me, a tech consulting firm, and ServiceRocket, a provider of Workplace by Facebook apps, have partnered to create an adoption program for Workplace by Facebook.

The partnership, called Elevate, will offer Workplace by Facebook support for enterprise customers with a regulated and deskless workforce. Elevate offers customers access to ServiceRocket’s Moderation and Insights apps, which provide data on how Workplace by Facebook is used in the enterprise, alongside Talk Social to Me’s consulting services.

The majority of the 30,000 organizations that have adopted Workplace by Facebook are in industries that employ deskless workers, such as healthcare, retail and manufacturing. These organizations tend to have complex working environments composed of hourly and part-time workers.

“We know that business value is best achieved when companies concerned with HIPAA, employee unions and large hourly populations can discover and respond immediately to business and social conversations,” Talk Social to Me CEO Carrie Basham Young said.

CPaaS gains ground for embedded video

Communications platform as a service (CPaaS) is becoming the deployment model of choice for embedded video, according to a report from video conferencing vendor Vidyo, based in Hackensack, N.J. The report surveyed 166 developers in 48 countries and found more than half of developers have implemented some form of video chat.

Developers looking to embed video into enterprise apps have four deployment models to consider:

  • a full internal development where the majority of the video technology is developed in-house;
  • commercially available software that is integrated as part of the deployment;
  • open source software that is used as is or customized by the developer; and
  • CPaaS, which embeds video via an API platform (a sketch of this model follows the list).
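
In the CPaaS model, the application server asks the platform’s REST API for a session token, which clients then use to join the video session. A hedged sketch follows; the endpoint, fields and response shape are invented for illustration, since every CPaaS vendor defines its own API:

```python
import requests

# Hypothetical CPaaS endpoint and payload, for illustration only.
resp = requests.post(
    "https://api.example-cpaas.com/v1/video/sessions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"room": "support-call-42", "max_participants": 5},
)
resp.raise_for_status()
token = resp.json()["token"]  # handed to the browser or mobile client
```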

According to the report, early adopters of embedded video prefer the open source and CPaaS deployment models. CPaaS is growing in popularity: 78% of respondents plan to use CPaaS for embedded video, with nearly half planning to do so in the next 12 months.

The top considerations for deploying CPaaS include the support for various devices and operating systems, WebRTC support, high availability and the ability to sustain calls over unreliable networks.

Misconfigured Amazon S3 buckets expose sensitive data

The cloud has simplified accessing compute and storage resources, making life a lot easier for application developers, IT administrators and company employees. However, when end users fail to properly secure the cloud, it can put data at greater risk.

In the past year, cybersecurity firms have reported a rash of misconfigured Amazon S3 buckets that have left terabytes of corporate and top-secret military data exposed on the internet. The misconfiguration allows anyone with an Amazon account to access the data simply by guessing the name of the Simple Storage Service (S3) bucket instance.

Storage and cybersecurity experts point to IT administrators and end users as the culprits. Users have the option of protecting each storage bucket with an access control list (ACL) to keep data private, share it for reading or share it for reading and writing. Experts claim data was left exposed because the ACLs were configured to allow any user with an Amazon Web Services (AWS) account to access the data, and the Amazon S3 buckets were not reconfigured to restrict access.
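
That kind of audit can be automated. A minimal sketch with boto3 (AWS credentials are assumed to be configured) that flags any bucket whose ACL grants access to all users or to any authenticated AWS account, the misconfiguration described above:

```python
import boto3

# Canned S3 group URIs that make a bucket effectively public.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def bucket_is_public(bucket_name: str) -> bool:
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    return any(
        grant.get("Grantee", {}).get("Type") == "Group"
        and grant.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
        for grant in acl["Grants"]
    )

print(bucket_is_public("example-corp-data"))  # hypothetical bucket name
```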

“Maybe that is too much power for the end users,” said Chris Vickery, director of cyber-risk research at cybersecurity firm UpGuard, based in Mountain View, Calif. “You really can’t put the blame on Amazon. The buckets are secured by default. It’s madness by the end user.”

In November, UpGuard reported two incidents of sensitive data left exposed in Amazon S3 buckets belonging to the United States Army Intelligence and Security Command (INSCOM), as well as the U.S. Central Command (CENTCOM) and Pacific Command.

Nearly 100 GB of critical data belonging to INSCOM was found in unsecured cloud storage repositories, including information labeled “top secret” and “NOFORN,” which means no foreign nationals should be able to view the data. The largest unsecured file found was an Oracle Virtual Appliance that contained a virtual hard drive and Linux-based operating system likely used for receiving Defense Department data from a remote location. UpGuard found top-secret data was tied to the defunct defense contractor Invertix.

“Also exposed within [the S3 storage] are private keys used for accessing distributed intelligence systems belonging to Invertix,” according to an UpGuard report. “Plainly put, the digital tools needed to potentially access the networks relied upon by multiple Pentagon intelligence agencies to disseminate information should not be something available to anybody entering a URL into a web browser.

“Although the UpGuard cyber-risk team has found and helped to secure multiple data exposures involving sensitive defense intelligence data, this is the first time classified information has been among the exposed data,” the report stated.

The CENTCOM data exposure involved a Pentagon contractor who did intelligence work and left an archive of 1.8 million publicly accessible social media posts exposed in Amazon S3 buckets. The military characterized that data breach as “benign,” because it was data scraped from around the world identifying persons of interest by the military.

These incidents are part of a series in which high-profile companies left data in Amazon S3 buckets exposed because the ACLs were configured to allow any user with an Amazon account to gain access to the data. The companies caught up in the problem include telco giant Verizon, U.S. government contractor Booz Allen Hamilton, consulting firm Accenture, World Wrestling Entertainment and Dow Jones.

Storage and cybersecurity experts agree this is not Amazon’s fault. The AWS S3 buckets are designed with top-level security by default when the storage instances are created. The user has control over what level of access to assign each bucket.

“Have we given too much power to the end user? Yeah, but we also gave them keyboards,” said George Crump, founder of Storage Switzerland. “People have to learn. I guess it’s like the seat belt law. Enough people have to go through a windshield before they do something about it. Organizations have to monitor cloud assets the same way they monitor their data center assets.

“There are more than a few tools out there that monitor open buckets,” Crump added. “I hate to have Amazon blameless in this, but they are. It would be like blaming the car manufacturer because people are not using their seat belts.”

Earlier this month, Amazon added new S3 encryption and security features to help address the data breaches. These features include default encryption that mandates all objects in a bucket must be stored in an encrypted form.

Amazon also added permission checks that display a prominent indicator next to each Amazon S3 bucket that is publicly accessible. Cross-region replication now works with the Key Management Service, so objects encrypted with KMS keys can be replicated, and a detailed inventory report includes the encryption status of each object.
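
A minimal sketch of turning on that default encryption with boto3, so every new object in the bucket is stored encrypted with SSE-S3 (AES-256); the bucket name is hypothetical and credentials are assumed to be configured:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="example-corp-data",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```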

David Monahan, managing director for security and risk management at Enterprise Management Associates in Boulder, Colo., said consumers who are using cloud services need to ask more questions about where their data is being stored and get more details on how it is being protected.

“This is a data-owner issue,” he said. “Some owners are relying on the names of the bucket being private. That is insufficient. Others are creating permissions and then not following the rule of least privilege and making the data too open. To them, I say, ‘Stop being lazy.’ Others may not understand how the system access controls work. They have to learn before putting real data out there.”