Analysts reported this month that the global PC market did something in 2019 it had not accomplished in seven years: It grew.
The figures differ as to how much — IDC reported a 2.7% year-over-year growth in global shipments, while Gartner cited a 0.6% increase — but experts agree that the Windows 7 sunset helped to prompt a hardware refresh for the enterprise. Per Gartner, Lenovo, HP and Dell shipped the most PCs in 2019, seeing growth of 8%, 3% and 5%, respectively.
Whether the boost in growth will be a one-year blip is debatable, but there is consensus that, for the enterprise at least, the PC is here to stay.
Windows 7 sunset gives PCs a boost
Linn Huang, research vice president at IDC, attributed the increase to a confluence of factors. Companies found themselves in a unique position of having to migrate to a new OS amid the growing tensions of a trade war with China, where PC components are commonly manufactured.
Huang said shortages and tariff issues may have affected the market as well. Intel faced CPU supply issues that eased during the course of 2019 and, in December, President Trump tweeted that “penalty tariffs” would “not be charged,” thanks to a new agreement with China.
Mikako Kitagawa, senior principal analyst at Gartner, said the shipment boost was not because of any renewed interest in using the PC, but almost solely because of the Windows 7 sunset, which occurred Jan. 14.
Forrester Research analyst Andrew Hewitt acknowledged the effect of the Windows 7 sunset, but said it was only part of the story.
“I also believe that the PC is becoming more important as organizations try to improve employee experience,” he said. “We know from research that if people can’t make progress every day at work, they’re vulnerable to burnout, which can contribute to higher attrition. The PC sits at the heart of productivity, so organizations see it as an important driver of [employee experience].”
Yev Pusin, director of strategy at data storage firm Backblaze, said the business’ clients — especially on the enterprise side — indeed had a need for something that could contribute more to productivity than a smartphone or tablet.
“I think a lot more folks … realized that, for the multi-tasking and flexibility they want, they need an actual computer — a Mac or PC,” he said.
Will PC market growth continue?
Kitagawa expects to see shipments dip in 2020 and 2021 due to a weak consumer market, as the smartphone has largely subsumed the PC’s role in daily life. Smartphones have made inroads in the enterprise as well, especially among younger workers.
“People used to carry a laptop or tablet to do work. Now, smartphone screens are bigger, so they are able to handle some tasks as well,” she said. “On the mentality side, many young people feel their smartphone is their primary work device.”
This is not to say that the PC will be disappearing from the workspace anytime soon.
“It’s not the case that the PC is going away,” Kitagawa said. “The PC is a very important business tool.”
Huang likewise expected a decline of PC sales in the next couple of years but said a shift in the market might accompany that trend.
“Consumers and commercial users alike are demanding better and better with each generation,” he said. “Consequently, we expect to ship fewer PCs [in] 2020 and beyond, but the market will continue to churn toward more premium ends.”
Pusin said he did see a continued appetite for PCs in the future but agreed that customers interested in buying computers might focus on the higher end of performance.
According to Hewitt, the PC will retain its central place in the business world, although the form factor may differ.
“Our research actually shows that 30% of the most important factors for improving employee experience are technology-related, and the PC is a big part of that,” he said.
Amazon Web Services has a stranglehold on the public cloud market, but the company’s dominance in cloud security is facing new challenges.
The world’s largest cloud provider earned a reputation over the last 10 years as an influential leader in IaaS security, thanks to products introduced in the earlier part of the decade, such as AWS Identity & Access Management and Key Management Service, as well as more recent developments in event-driven security. AWS security features helped the cloud service provider establish its powerful market position; according to Gartner, AWS in 2018 earned an estimated $15.5 billion in revenue for nearly 48% of the worldwide public IaaS market.
But at the re:Invent 2019 conference last month, many of the new security tools and features announced were designed to fix existing issues, such as misconfigurations and data exposures, rather than push AWS security to new heights. “There wasn’t much at re:Invent that I’d call security,” said Colin Percival, founder of open source backup service Tarsnap and an AWS Community Hero, via email. “Most of what people are talking about as security improvements address what I’d call misconfiguration risk.”
Meanwhile, Microsoft has not only increased its cloud market share but also invested heavily in new Azure security features that some believe rival AWS’ offerings. Rich Mogull, president and analyst at Securosis, said there are two sides to AWS security — the inherent security of the platform’s architecture, and the additional tools and products AWS provides to customers.
“In terms of the inherent security of the platform, I still think Amazon is very far ahead,” he said, citing AWS’ strengths such as availability zones, segregation, and granular identity and access management. “Microsoft has done a lot with Azure, but Amazon still has a multi-year lead. But when it comes to security products, it’s more of a mixed bag.”
Microsoft has been able to close the gap in recent years with the introduction of its own set of products and tools that compete with AWS security offerings, he said. “Azure Security Center and AWS Security Hub are pretty comparable, and both have strengths and weaknesses,” Mogull said. “Azure Sentinel is quite interesting and seems more complete than AWS Detective.”
New tools, old problems
Arguably the biggest AWS security development at re:Invent was a new tool designed to fix a persistent problem for the cloud provider: accidental S3 bucket exposures. The IAM Access Analyzer, which is part of AWS’ Identity and Access Management (IAM) console, alerts users when an S3 bucket is possibly misconfigured to allow public access via the internet and lets them block such access with one click.
AWS had previously made smaller moves, including changes to S3 security settings and interfaces, to curb the spate of high-profile and embarrassing S3 exposures in recent years. IAM Access Analyzer is arguably the strongest move yet to resolve the ongoing problem.
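Teams that don’t want to wait for a warning can enforce the same protection programmatically with S3’s Block Public Access settings. A minimal sketch using boto3 (the bucket name and client wiring are illustrative, not from the article):

```python
def block_public_access_config():
    """S3 Block Public Access settings that shut off every route to
    public exposure of a bucket."""
    return {
        "BlockPublicAcls": True,        # reject requests that add public ACLs
        "IgnorePublicAcls": True,       # treat existing public ACLs as private
        "BlockPublicPolicy": True,      # reject bucket policies granting public access
        "RestrictPublicBuckets": True,  # limit public-policy buckets to the account
    }


def lock_down_bucket(bucket_name, s3_client):
    """Apply the settings via a boto3 S3 client, e.g.
    lock_down_bucket("example-bucket", boto3.client("s3"))."""
    config = block_public_access_config()
    s3_client.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration=config,
    )
    return config
```

Applying all four flags account-wide, rather than per bucket, is the more conservative default for organizations that never intend to host public content from S3.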
“They created the S3 exposure issue, but they also fixed it,” said Jerry Gamblin, principal security engineer at vulnerability management vendor Kenna Security, which is an AWS customer. “I think they’ve really stepped up in that regard.”
Still, some AWS experts feel the tool doesn’t fully resolve the problem. “Tools like IAM Access Analyzer will definitely help some people,” Percival said, “but there’s a big difference between warning people that they screwed up and allowing people to make systems more secure than they could previously.”
Scott Piper, an AWS security consultant and founder of Summit Route in Salt Lake City, said, “It’s yet another tool in the toolbelt and it’s free, but it’s not enabled by default.”
There are other issues with IAM Access Analyzer. “With this additional information, you have to get that to the customer in some way,” Piper said. “And doing that can be awkward and difficult with this service and others in AWS like GuardDuty, because it doesn’t make cross-region communication very easy.”
For example, EC2 regions are isolated to ensure the highest possible fault tolerance and stability for customers. But Piper said the isolation presents challenges for customers using multiple regions because it’s difficult to aggregate GuardDuty alerts to a single source, which requires security teams to analyze “multiple panes of glass instead of one.”
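Until aggregation is handled natively, security teams typically script the merge themselves. A rough sketch of the consolidation step Piper describes, assuming the per-region findings have already been fetched (for example, via GuardDuty's list_findings API in each region):

```python
def aggregate_findings(per_region_findings):
    """Merge GuardDuty findings fetched from several regions into one
    list, tagging each finding with its source region, so analysts get
    a single pane of glass instead of one console per region."""
    merged = []
    for region, findings in sorted(per_region_findings.items()):
        for finding in findings:
            merged.append({"region": region, **finding})
    # Highest severity first, matching how most SOC queues are triaged.
    merged.sort(key=lambda f: f.get("severity", 0), reverse=True)
    return merged
```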
AWS recently addressed another security issue that became a high-profile concern for enterprises following the Capital One breach last summer. The attacker in that incident exploited a server-side request forgery (SSRF) vulnerability to access the AWS metadata service for the company’s EC2 instances, which allowed them to obtain the credentials it contained.
The Capital One breach led to criticism from security experts as well as lawmakers such as Sen. Ron Wyden (D-Ore.), who questioned why AWS hadn’t addressed SSRF vulnerabilities for its metadata service. The lack of security around the metadata service has concerned some AWS experts for years; in 2016, Percival penned a blog post titled “EC2’s most dangerous feature.”
“I think the biggest problem Amazon has had in recent years — judging by the customers affected — is the lack of security around their instance metadata service,” Percival told SearchSecurity.
In November, AWS made several updates to the metadata service to prevent unauthorized access, including the option to turn off access to the service altogether. Mogull said the metadata service update was crucial because it improved security around AWS account credentials.
But like other AWS security features, the metadata service changes are not enabled by default. Percival said enabling the update by default would’ve caused issues for enterprise applications and services that rely on the existing version of the service. “Amazon was absolutely right in making their changes opt-in since if they had done otherwise, they would have broken all of the existing code that uses the service,” he said. “I imagine that once more or less everyone’s code has been updated, they’ll switch this from opt-in to opt-out — but it will take years before we get to that point.”
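In practice, opting in means requiring the token-based IMDSv2 on each instance. A sketch of the options involved (the instance ID is hypothetical; the real call is boto3's modify_instance_metadata_options on an EC2 client):

```python
def metadata_hardening_options(require_tokens=True, keep_endpoint=True):
    """Build the EC2 instance-metadata options introduced in the
    November 2019 update: require session tokens (IMDSv2) and,
    optionally, turn the metadata endpoint off entirely."""
    return {
        "HttpTokens": "required" if require_tokens else "optional",
        "HttpEndpoint": "enabled" if keep_endpoint else "disabled",
    }


def harden_instance(instance_id, ec2_client):
    """e.g. harden_instance("i-0123456789abcdef0", boto3.client("ec2"))"""
    opts = metadata_hardening_options()
    ec2_client.modify_instance_metadata_options(InstanceId=instance_id, **opts)
    return opts
```

Setting HttpTokens to "required" is the opt-in Percival describes: code that still makes tokenless IMDSv1 requests breaks until it is updated.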
Percival also said the update is “incomplete” because it addresses common misconfigurations but not software bugs. (Percival is working on an open source tool that he says will provide “a far more comprehensive fix to this problem,” which he hopes to release later this month.)
Still, Piper said the metadata service update is an important step for AWS security because it showed the cloud provider was willing to acknowledge there was a problem with the existing service. That willingness and responsiveness hasn’t always been there in the past, he said.
“AWS has historically had the philosophy of providing tools to customers, and it’s kind of up to customers to use them and if they shoot themselves in the foot, then it’s the customers’ fault,” Piper said. “I think AWS is starting to improve and change that philosophy to help customers more.”
AWS security’s road ahead
While the metadata service update and IAM Access Analyzer addressed lingering security issues, experts highlighted other new developments that could strengthen AWS’ position in cloud security.
AWS Nitro Enclaves, for example, is a new EC2 capability introduced at re:Invent 2019 that allows customers to create isolated instances for sensitive data. The Nitro Enclaves, which will be available in preview this year, are virtual machines attached to EC2 instances but have CPU and memory isolation from the instances and can be accessed only through secure local connections.
“Nitro Enclaves will have a big impact for customers because of [their] isolation and compartmentalization capabilities,” Mogull said, as they will give enterprises’ sensitive data an additional layer of protection against potential breaches.
Percival agreed that Nitro Enclaves could “raise the ceiling” for AWS security, though he was cautious in his predictions. “Enclaves are famously difficult for people to use correctly, so it’s hard to predict whether they will make a big difference or end up being another of the many ‘Amazon also has this feature, which nobody ever uses’ footnotes.”
Experts also said AWS’ move to strengthen its ARM-based processor business could have major security implications. The cloud provider announced at re:Invent 2019 that it will be launching EC2 instances that run on its new, customized ARM chips, dubbed Graviton2.
Gamblin said the Graviton2 processors are a security play in part because of recent microprocessor vulnerabilities and side channel attacks like Meltdown and Spectre. While some ARM chips were affected by both Meltdown and Spectre, subsequent side channel attacks and Spectre variants have largely affected x86 processors.
“Amazon doesn’t want to rely on other chips that may be vulnerable to side channel attacks and may have to be taken offline and rebooted or suffer performance issues because of mitigations,” Gamblin said.
Percival said he was excited by the possibility of the cloud provider participating in ARM’s work on the “Digital Security by Design” initiative, a private-sector partnership with the UK that is focused in part on fundamentally restructuring — and improving — processor security. The results of that project will be years down the road, Percival said, but it would show a commitment from AWS to once again raising the bar for security.
“If it works out — and it’s a decade-long project, which is inherently experimental in nature — it could be the biggest step forward for computer security in a generation.”
Although the market has shifted and more vendors are providing cloud-based monitoring, there are still a wide range of feature-rich server monitoring tools for organizations that must keep their workloads on site for security and compliance reasons.
Here we examine open source and commercial on-premises server monitoring tools from eight vendors. Although these products broadly achieve the same IT goals, they differ in their approach, complexity of setup — including the ongoing aspects of maintenance and licensing — and cost.
Cacti
Cacti is an open source network monitoring and graphing front-end application for RRDtool, an industry-standard open source data logging tool. RRDtool is the data collection portion of the product, while Cacti handles network graphing for the data that’s collected. Since both Cacti and RRDtool are open source, they may be practical options for organizations that are on a budget. Cacti support is community-driven.
Cacti can be ideal for organizations that already have RRDtool in place and want to expand on what it can display graphically. For organizations that don’t have RRDtool installed, or aren’t familiar with Linux commands or tools, both Cacti and RRDtool could be a bit of a challenge to install, as they don’t include a simple wizard or agents. This should be familiar territory for Linux administrators, but may require additional effort for Windows admins. Note that Cacti is a graphing product and isn’t really an alerting or remediation product.
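For admins new to RRDtool, the round-robin databases that Cacti graphs from are defined with the `rrdtool create` command. A minimal sketch, assembled in Python for clarity (the data-source name and limits are illustrative):

```python
def rrd_create_command(path, step=300):
    """Assemble an `rrdtool create` invocation that defines one GAUGE
    data source (CPU usage, 0-100) and stores a day of 5-minute
    averages, the kind of round-robin database Cacti graphs from."""
    heartbeat = step * 2  # tolerate one missed update before logging UNKNOWN
    return [
        "rrdtool", "create", path,
        "--step", str(step),                  # expect an update every 5 minutes
        f"DS:cpu:GAUGE:{heartbeat}:0:100",    # data source: name, type, heartbeat, min, max
        "RRA:AVERAGE:0.5:1:288",              # 288 x 5 min = 24 hours of raw averages
    ]
```

The resulting list can be passed to `subprocess.run` on a host with RRDtool installed; values are then fed in with `rrdtool update`.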
ManageEngine Applications Manager
The ManageEngine system is part of an extensive line of server monitoring tools that include application-specific tools as well as cloud and mobile device management. The application monitoring framework enables organizations to purchase agents from various vendors, such as Oracle and SAP, as well as custom application-specific tools. These server monitoring tools enable admins to perform cradle-to-grave monitoring, which can help them troubleshoot and resolve application server issues before they impact end-user performance. ManageEngine platform strengths include its licensing model and the large number of agents available. Although the monitoring license is all-inclusive for the interfaces or sensors needed per device, the agents are sold individually.
Thirty-day trials are available for many of the more than 100 agents. Licensing costs range from less than $1,000 for 25 monitors and one user to more than $7,000 for 250 monitors with one user and an additional $245 per user. Support costs are often rolled into the cost of the monitors. This can be ideal for organizations that want to make a smaller initial investment and grow over time.
Microsoft System Center Operations Manager
The product monitors servers, enterprise infrastructure and applications, such as Exchange and SQL, and works with both Windows and Linux clients. Microsoft System Center features include configuration management, orchestration, VM management and data protection. System Center isn’t as expansive on third-party applications as it is with native Microsoft applications. System Center is based on core licensing to match Server 2016 and later licensing models.
The base price for Microsoft System Center Operations Manager starts at $3,600, assuming two CPUs and 16 cores total and can be expanded with core pack licenses. With Microsoft licensing, the larger the environment in terms of CPU cores, the more a customer site can expect to pay. While Microsoft offers a 180-day trial of System Center, this version is designed for the larger Hyper-V environments. Support is dependent on the contract the organization selects.
Nagios Core
Nagios Core is free open source software that provides metrics to monitor server and network performance. Nagios can help organizations provide increased server, services, process and application availability. While Nagios Core comes with a graphical front end, the scope of what it can monitor is somewhat limited. But admins can deploy additional community-provided front ends that offer more views and additional functionality. Nagios Core natively installs and operates on Linux systems and Unix variants.
For additional features and functionality, the commercial Nagios XI product offers true dashboards, reporting, GUI configuration and enhanced notifications. Pricing for this commercial version starts at less than $7,000 for 500 nodes, with an additional $1,500 per enterprise for reporting and capacity planning tools. In addition to agents for OSes, users can also add network monitoring for a single point of service. Free 60-day trials and community support are available for the products that work with the free Nagios Core download.
Opsview
Opsview system monitoring software includes on-premises agents as well as agents from all the major cloud vendors. While the free version provides 25 hosts to monitor, the product’s main benefit is that it can support both SMBs and the enterprise. Pricing for a comprehensive offering that includes 300 hosts, reporting, multiple collectors and network analyzer is less than $20,000 a year, depending on the agents selected.
Enterprise packages are available via custom quote. The vendor offers both on-premises and cloud variations. The list of agents Opsview can monitor is one of the most expansive of any of the products, bridging cloud, application, web and infrastructure. Opsview also offers a dedicated mobile application. Support for most packages is 24/7 and includes customer portals and a knowledgebase.
Paessler PRTG Network Monitor
PRTG can monitor from the infrastructure to the application stack. The licensing model for PRTG Network Monitor follows a sensor model format rather than a node, core or host model. This means a traditional host might have more than 20 sensors monitoring anything from CPU to bandwidth. Services range from networking and bandwidth monitoring to more application-specific checks, such as monitoring for low Microsoft OneDrive or Dropbox drive space. A fully functional 30-day demo is available and pricing ranges from less than $6,000 for 2,500 sensors to less than $15,000 for an unlimited number of sensors. Support is email-based.
SolarWinds Server and Application Monitor
SolarWinds offers more than 1,000 monitoring templates for various applications and systems, such as Active Directory, as well as several virtualization platforms and cloud-based applications. It also provides dedicated virtualization, networking, databases and security monitoring products. In addition to standard performance metrics, SolarWinds provides application response templates to help admins with troubleshooting. A free 30-day trial is available. Pricing for 500 nodes is $73,995 and includes a year of maintenance.
This free, open source, enterprise-scale monitoring product includes an impressive number of agents that an admin can download. Although most features aren’t point and click, the dashboards are similar to other open source platforms and are more than adequate. Given the free cost of entry and the sheer number of agents, this could be an ideal product for organizations that have the time and Linux experience to bring it online. Support is community-based and additional support can be purchased from a reseller.
The bottom line on server monitoring tools
The products examined here differ slightly in size, scope and licensing model. Outside of the open source products, many commercial server monitoring tools are licensed by node or agent type. It’s important that IT buyers understand all the possible options when getting quotes, as they can be difficult to understand.
Pricing varies widely, as do the features of the dashboards of the various server monitoring tools. Ensure the staff is comfortable with the dashboard and alerting functionality of each system, as well as its mobile capabilities and notifications. If an organization chooses an open source platform, keep in mind that the installation could require more effort if the staff isn’t Linux savvy.
The dashboards for the open source monitors typically aren’t as graphical as the paid products, but that’s part of the tradeoff with open source. Many of the commercial products are cloud-ready or have that ability, so even if an organization doesn’t plan to monitor its servers in the cloud today, they can take advantage of this technology in the future.
AWS is the undisputed leader in the cloud market. As for AI, the cloud division of tech giant Amazon is also in a dominant position.
“Machine learning is at a place now where it is accessible enough that you don’t need Ph.Ds,” said Joel Minnick, head of product marketing for AI, machine learning and deep learning at AWS.
Partly, that’s due to a natural evolution of the technology, but vendors such as Google, AWS, IBM, DataRobot and others have made strides in making the process of creating and deploying machine learning and deep learning easier.
Over the last few years, AWS has invested heavily in making it easier for developers and engineers to create and deploy AI models, Minnick said, speaking with TechTarget at the AWS re:Invent 2019 user conference in Las Vegas in December 2019.
AWS’ efforts to simplify the machine learning lifecycle were on full display at re:Invent. During the opening keynote, led by AWS CEO Andy Jassy, AWS revealed new products and updates for Amazon SageMaker, AWS’ full-service suite of machine learning development, deployment and governance products.
Those products and updates included new and enhanced tools for creating and managing notebooks, automatically building machine learning models, debugging models and monitoring models.
SageMaker Autopilot, a new AutoML product, in particular, presents an accessible way for users who are new to machine learning to create and deploy models, according to Minnick.
In general, SageMaker is one of AWS’ most important products, according to a blog-post-styled report on re:Invent from Nick McQuire, vice president of enterprise research at CCS Insight. The report noted that AWS, due largely to SageMaker, its machine learning-focused cloud services, and a range of edge and robotics products, is a clear leader in the AI space.
“Few companies (if any) are outpacing AWS in machine learning in 2019,” McQuire wrote, noting that SageMaker alone received 150 updates since the start of 2018.
Developers for AWS AI
In addition to the SageMaker updates, AWS in December unveiled another new product in its Deep series: DeepComposer.
The product series, which also includes DeepLens and DeepRacer, is aimed at giving machine learning and deep learning newcomers a simplified and visual means to create specialized models.
Introduced in late 2017, DeepLens is a camera that enables users to run deep learning models on it locally. The camera, which is fully programmable with AWS Lambda, comes with tutorials and sample projects to help new users. It integrates with a range of AWS products and services, including SageMaker and its Amazon Rekognition image analysis service.
“[DeepLens] was a big hit,” said Mike Miller, director of AWS AI Devices at AWS.
DeepRacer, revealed the following year, enables users to apply machine learning models to radio controlled (RC) model cars and make them autonomously race along tracks. Users can build models in SageMaker and bring them into a simulated racetrack, where they can train the models before bringing them into a 1/18th scale race car.
An AWS racing league makes DeepRacer competitive, with AWS holding yearlong tournaments comprising multiple races. DeepRacer, Miller declared, has been exceedingly successful.
“Tons of customers around the world have been using DeepRacer to engage and upskill their employees,” Miller said.
Dave Anderson, director of technology at Liberty Information Technology, the IT arm of Liberty Mutual, said many people on his team take part in the DeepRacer tournaments.
“It’s a really fun way to learn machine learning,” Anderson said in an interview. “It’s good fun.”
Composing with AI
Meanwhile, DeepComposer, as the name suggests, helps train users on machine learning and deep learning through music. The product comes with a small keyboard that can plug into a PC, along with a set of pretrained music genre models. The keyboard itself isn’t unusual, but by using the models and accompanying software, users can automatically create and tweak fairly basic pieces of music within a few genres.
With DeepComposer, along with DeepLens and DeepRacer, “developers of any skill level can find a perch,” Miller said.
“For the last 20 years, Amazon has been investing in machine learning,” Miller said. “Our goal is to bring those same AI and machine learning techniques to developers of all types.”
The Deep products are just “the tip of the spear for aspiring machine learning developers,” Miller said. Amazon’s other products, such as SageMaker, extend that machine learning technology development strategy.
“We’re super excited to get more machine learning into the hands of more developers,” Miller said.
With the vast number of security products on the market and the growing amount of security data generated, enterprises face an uphill battle.
Siemplify, a startup based in New York, is aiming to make that hill easier to climb with its security operations platform, which the company hopes will be a Salesforce-like hub for security professionals. Siemplify’s platform is designed to tie various third-party products together and streamline the data for enterprises.
Nimmy Reichenberg, chief strategy officer at Siemplify, explained the company’s mission to provide an all-in-one spot for SOC teams to get their work done, as well as the relationship between SOAR and SIEM and why security product integration is becoming harder to accomplish.
Editor’s note: This interview has been edited for length and clarity.
Tell me the story of how Siemplify was founded.
Nimmy Reichenberg: Siemplify was started by three people: Amos Stern, Alon Cohen and Garry Fatakhov. Basically, all of them have security operations experience from the Israeli Defense Force. All three of them went to work for a government defense contractor, and what they did is train SOCs all over the world, so they trained dozens and dozens of both civilian and security operations teams on how to better deal with cyberthreats. Through this work, it became very clear to them that the way that security operations teams work is highly flawed. There are so many things that can be improved about how these teams work, and they had this idea: why don’t we build this product and start a company that will solve what we’re seeing from training security operations teams around the world? And they founded Siemplify.
What does Siemplify do?
Reichenberg: What we essentially provide is a security operations platform. The easiest way to describe our vision is that just like how Salesforce is a platform that sales professionals work on or Workday is what human resources professionals use to get their work done, Siemplify is the platform where security operations teams log on in the morning and get their work done. A big component of what we provide goes by SOAR (security orchestration, automation and response), and that functionality basically has to do with building repeatable processes and integrating the various tools security teams use to investigate and remediate threats using as much automation as possible. We know that there’s a huge shortage of security professionals these days, so obviously there’s a lot of appetite for automating anything that can be done.
Do you think SOAR is making SIEM tech obsolete, or is SIEM tech being integrated into SOAR?
Reichenberg: SOAR is definitely a complementary solution to SIEM. SIEMs definitely have a place when it comes to storing all your logs, doing that initial analysis and correlation and firing off an alert to an analyst. That’s kind of what SIEMs do and that’s not going away. We could talk about next-gen SIEMs or there’s all these newer technologies but essentially that is what they do. SOAR tools take that alert and apply a process to it — encase it into case management, decide a playbook that walks the analyst through the steps of what actually needs to be done once that alert is fired, automate that, and provide machine learning.
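The alert-to-playbook handoff Reichenberg describes can be sketched in a few lines. This is a generic illustration of the SOAR pattern; the case structure and step names are hypothetical, not Siemplify's actual API:

```python
def run_playbook(alert, steps):
    """Encase an incoming SIEM alert in a case record and walk it
    through the playbook steps, recording each step's result so the
    analyst sees what was done and why."""
    case = {"alert": alert, "status": "open", "history": []}
    for step in steps:
        case["history"].append((step.__name__, step(case)))
    case["status"] = "resolved"
    return case


# Illustrative playbook steps for a phishing alert.
def enrich(case):
    return f"reputation lookup for {case['alert']['sender']}"


def contain(case):
    return "quarantined message for all recipients"
```

In a real SOAR product, each step would call out to an integrated third-party tool (threat intel, email gateway, EDR) instead of returning a string, and some steps would pause for analyst approval.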
Do you think it’s easier to integrate with other vendors’ security products today than it was five years ago?
Reichenberg: I would say the answer to that is no. One of the things that SOAR solutions do is act as a security fabric that connects all your tools, but the reason why it’s harder to integrate tools is that there’s just so many of them out there. The number of security tools out there is only growing. Nothing is going away, and everyone is still using the antivirus tools from 50 years ago only now there’s 50 products on top of that. Ten years ago, the average company maybe used a dozen or two dozen security tools. Now it’s pretty common to find companies that use 50, 60 or 90 different security tools throughout the company. So integrating tools is harder [today], and the reason is if I’m a new company and I built this new security tool and it’s great, do I really now want to invest the time and effort to make it agree with 500 other security tools? And the answer is I’m probably not going to do that. Our approach is we don’t detect anything bad; that’s a type of tool we integrate into our platform. Our job is to be that connecting tissue between all the different tools. We have over 200 integrations of tools already built into our platform, so we have well-connecting tissue, if you will, and apply a process of how all these tools actually work and apply a playbook that addresses each specific scenario in cybersecurity.
What do the next 12 months look like for the company?
Reichenberg: The category is exploding rapidly. The key thing for the next 12 months is scale. We have to scale everything about the company. Scale our processes, scale our go-to-market, et cetera. From a product perspective, what we’re working on is making the product easier to use in the market, and that’s kind of our differentiator — make it easy to address a wide variety of use cases.
Reichenberg: We’re going to do a pretty horizontal use of the money because we need to scale everything. Maybe a little more towards go-to-market — sales, marketing, customer success — because we’re adding a lot of customers, and the rest to R&D so it’s pretty horizontal.
For Google, the unified communications market is a means to an end: keeping G Suite competitive with Microsoft’s Office 365. In 2020, Google plans to close in on the Microsoft suite’s core communication features by migrating businesses to Hangouts Chat, the messaging complement to G Suite’s calling and video conferencing apps.
In mid-2020, Hangouts Chat will replace an older, more basic chat app called Hangouts. While the new app is an improvement, Google will have to add features and build a much larger partner ecosystem to reach par with Office 365.
What’s more, Google’s strategy of maintaining separate products for core communications services is at odds with the direction of the market. Vendors like Microsoft have consolidated calling, messaging and meetings services into a single user interface. But Google is keeping Hangouts Chat distinct from the video conferencing app Hangouts Meet.
“Their challenges are more related to fundamentally who they are,” TJ Keitt, an analyst at Forrester Research, said. “They’re a company that, for a while, had struggled to indicate they understand all the things that large enterprises require.”
G Suite has trailed Office 365 for years. In particular, Google has struggled to appeal to organizations with thousands and tens of thousands of employees. Those customers often require complex feature sets, but Google likes to keep things simple.
“It’s really important for us to provide just really simple, delightful experiences that work,” Smita Hashim, manager of G Suite’s communications apps, said in December. “It’s not like we need every bell and whistle and every feature.”
In 2019, Google tackled low-hanging fruit that had been standing in the way of selling G Suite to customers with thousands of employees. Giving customers some control over where their data is stored was a significant change. Also, adding numerous IT controls and security backstops was critical to enterprises.
But Google does not appear interested in matching Office 365 feature-for-feature. Instead, analysts expect the company will seek to grow G Suite in 2020 and beyond by focusing on specific industries and kinds of companies.
“If Google plays the long game, they don’t need to really worry about whether or not they are beating Microsoft in a lot of the companies that are here right now,” Keitt said. Instead, Google can target new and adolescent companies that haven’t bought into Office 365.
Google’s targets will likely include the verticals of education and technology, as well as fast-growing businesses with a young workforce. The company has already won some big names. In 2019, G Suite added tech company Iron Mountain, with 26,000 employees, and Whirlpool, with 92,000 employees.
In 2020, Google needs to decide whether to get serious about building a communications portfolio on par with Microsoft’s. That would entail expanding the business calling service it launched this year, Google Voice for G Suite.
So far, the vendor has signaled it will keep the calling service simple. Whereas traditional telephony systems offer upwards of 200 features, Google opted for fewer than 20. The new year will likely bring only incremental changes, such as the certification of more desk phones.
“I think, incrementally, they are continuing to improve. They are trying to close the gap,” said Irwin Lazar, an analyst at Nemertes Research. “What I haven’t seen Google really try to do is leapfrog the market.”
Nevertheless, the cloud productivity market is likely still a lucrative one for Google. As of February, 5 million organizations subscribed to G Suite, some paying as much as $25 per user, per month.
Google Cloud, a division that includes G Suite as well as the vendor’s infrastructure-as-a-service platform, was on track to generate $8 billion in annual revenue as of July.
“Being number two in a multi-billion-dollar [office productivity] market is fine,” said Jeffrey Mann, an analyst at Gartner.
If the dawn of cloud computing can be pegged to AWS’ 2006 launch of EC2, then the market has entered its gangly teenage years as the new decade looms.
While the metaphor isn’t perfect, some direct parallels can be seen in the past year’s cloud trends.
For one, there’s the question of identity. In 2019, public cloud providers extended services back into customers’ on-premises environments and developed services meant to accommodate legacy workloads, rather than emphasize transformation.
Maturity remains a hurdle for the cloud computing market, particularly in the area of cost management and optimization. Some progress occurred on this front in 2019, but there’s much more work to be done by both vendors and enterprises.
Experimentation was another hallmark of 2019 cloud computing trends, with the continued move toward containerized workloads and serverless computing. Here’s a look back at some of these cloud trends, as well as a peek ahead at what’s to come in 2020.
Hybrid cloud evolves
Hybrid cloud has been one of the more prominent cloud trends for a few years, but 2019 saw key changes in how it is marketed and sold.
Companies such as Dell EMC, Hewlett Packard Enterprise and, to a lesser extent, IBM have scuttled or scaled back their public cloud efforts and shifted to cloud services and hardware sales. This trend has roots prior to 2019, but the changes took greater hold this year.
Today, “there’s a battle between the cloud-haves and cloud have-nots,” said Holger Mueller, an analyst with Constellation Research in Cupertino, Calif.
Google, as the third-place competitor in public cloud, needs to attract more workloads. Its Anthos platform for hybrid and multi-cloud container orchestration projects openness but still ties customers into a proprietary system.
In November, Microsoft introduced Azure Arc, which extends Azure management tools to on-premises and cloud platforms beyond Azure, although the latter functionality is limited for now.
Earlier this month, AWS announced the long-expected general availability of Outposts, a managed service that places AWS-built server racks loaded with AWS software inside customer data centers to address requirements such as low latency and data residency.
It’s similar in some ways to Azure Stack, which Microsoft launched in 2017, but one key difference is that partners supply Azure Stack hardware. In contrast, Outposts has made AWS a hardware vendor and thus a threat to Dell EMC, HPE and others that are after customers’ remaining on-premises IT budgets, Mueller said.
But AWS needs to prove itself capable of managing infrastructure inside customer data centers, with which those rivals have plenty of experience.
Looking ahead to 2020, one big question is whether AWS will join its smaller rivals by embracing multi-cloud. Based on the paucity of mentions of that term at re:Invent this year, and the walled-garden approach embodied by Outposts, the odds don’t look favorable.
Bare-metal options grow
Thirteen years ago, AWS launched its Elastic Compute Cloud (EC2) service with a straightforward proposition: Customers could buy VM-based compute capacity on demand. That remains a core offering of EC2 and its rivals, although the number of instance types has grown exponentially.
More recently, bare-metal instances have come into vogue. Bare metal strips out the virtualization layer, giving customers direct access to the underlying hardware. It’s a useful option for workloads that can’t tolerate the performance hit VMs carry, and it avoids the “noisy neighbor” problem that crops up with shared infrastructure.
Google rolled out managed bare-metal instances in November, following AWS, Microsoft, IBM and Oracle. Smaller providers such as CenturyLink and Packet also offer bare-metal instances. The segment overall is poised for significant growth, reaching more than $26 billion by 2025, according to one estimate.
Multiple factors will drive this growth, according to Deepak Mohan, an analyst with IDC.
Two of the biggest influences in IaaS today are enterprise workload movement into public cloud environments and cloud expansions into customers’ on-premises data centers, evidenced by Outposts, Azure Arc and the like, Mohan said.
The first trend has compelled cloud providers to support more traditional enterprise workloads, such as applications that don’t take well to virtualization or that are difficult to refactor for the cloud. Bare metal gets around this issue.
“As enterprise adoption expands, we expect bare metal to play an increasingly critical role as the primary landing zone for enterprise workloads as they transition into cloud,” Mohan said.
Cloud cost management gains focus
The year saw a wealth of activity around controlling cloud costs, whether through native tools or third-party applications. Among the more notable moves was Microsoft’s extension of Azure Cost Management to AWS, with support for Google Cloud expected next year.
But the standout development was AWS’ November launch of Savings Plans, which was seen as a vast improvement over its longstanding Reserved Instances offering.
Reserved Instances give big discounts to companies that are willing to make upfront spending commitments but have been criticized for inflexibility and a complex set of options.
“Savings Plans have massively reduced the complexity in gaining such discounts, by allowing companies to make commitments to AWS without having to be too prescriptive on the application’s specific requirements,” said Owen Rogers, who heads the digital economics unit at 451 Research. “We think this will appeal to enterprises and will eventually replace reserved instances as AWS’ de facto committed pricing model.”
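The flexibility Rogers describes comes from committing to an hourly dollar spend rather than to specific instance types. The sketch below models that idea with a made-up 30% discount; actual AWS discounts vary by plan type, term and payment option, and the function is an illustration, not AWS’ billing logic.

```python
# Hypothetical rate: real savings-plan discounts vary by plan, term and region.
DISCOUNT = 0.30  # assumed discount off on-demand pricing

def hourly_cost(on_demand_usage, commitment):
    """Bill for one hour under a spend commitment.

    on_demand_usage: dollar value of compute used, at on-demand rates.
    commitment: dollars per hour committed under the plan.
    """
    # The commitment buys usage at the discounted rate, so it covers
    # more on-demand value than its face amount.
    covered = commitment / (1 - DISCOUNT)
    # Usage beyond the covered amount falls back to on-demand pricing.
    overage = max(0.0, on_demand_usage - covered)
    # The commitment is owed even if usage falls short of it.
    return commitment + overage
```

Under these assumed numbers, a $7/hour commitment fully covers $10 of on-demand usage for $7; $14 of usage costs $11 ($7 plus $4 of overage); and $5 of usage still costs the committed $7, which is the risk side of the trade-off.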
The new year will see enterprises increasingly seek to optimize their costs, not just manage and report on them, and Savings Plans fit into this expectation, Rogers added.
If your enterprise hasn’t gotten serious about cloud cost management, doing so would be a good New Year’s resolution. There’s a general prescription for success, according to Corey Quinn, cloud economist at the Duckbill Group.
“Understand the goals you’re going after,” Quinn said. “What are the drivers behind your business?” Break down cloud bills into what they mean on a division, department and team-level basis. It’s also wise to start with the big numbers, Quinn said. “You need to understand that line item that makes up 40% of your bill.”
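Quinn’s advice, break the bill down by organizational unit and chase the biggest line items first, amounts to a simple grouping exercise. The line items and the 40% threshold below are invented for illustration; a real cloud bill export carries far more dimensions.

```python
from collections import defaultdict

# Invented bill line items for the example.
LINE_ITEMS = [
    {"team": "data-eng", "service": "EC2",       "cost": 42000.0},
    {"team": "data-eng", "service": "S3",        "cost": 9000.0},
    {"team": "web",      "service": "EC2",       "cost": 21000.0},
    {"team": "ml",       "service": "SageMaker", "cost": 28000.0},
]

def cost_by(key):
    """Total spend grouped by a dimension such as 'team' or 'service'."""
    totals = defaultdict(float)
    for item in LINE_ITEMS:
        totals[item[key]] += item["cost"]
    return dict(totals)

def dominant_line_items(threshold=0.40):
    """Flag services whose share of the total bill exceeds the threshold,
    i.e. the big line items Quinn says to understand first."""
    total = sum(item["cost"] for item in LINE_ITEMS)
    return {service: cost / total
            for service, cost in cost_by("service").items()
            if cost / total > threshold}
```

Here `cost_by("team")` gives the per-team breakdown Quinn recommends, and `dominant_line_items()` flags EC2 at 63% of this sample bill.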
While some companies try to make cloud cost savings the job of many people across finance and IT, in most cases the responsibility shouldn’t fall on engineers, Quinn added. “You want engineers to focus on whether they can build a thing, and then cost-optimize it,” he said.
Serverless vs. containers debate mounts
One topic that could come up more frequently in 2020 is the debate over the relative merits of serverless computing versus containers.
Serverless advocates such as Tim Wagner, inventor of AWS Lambda, contend that a movement is afoot.
At re:Invent, the serverless features AWS launched were not “coolness for the already-drank-the-Kool-Aid crowd,” Wagner said in a recent Medium post. “This time, AWS is trying hard to win container users over to serverless. It’s the dawn of a new ‘hybrid’ era.”
Another serverless expert hailed Wagner’s stance.
“I think the container trend, at its most mature state, will resemble the serverless world in all but execution duration,” said Ryan Marsh, a DevOps trainer with TheStack.io in Houston.
The containers vs. serverless debate has raged for at least a couple of years, and the notion that neither approach can effectively answer every problem persists. But observers such as Wagner and Marsh believe that advances in serverless tooling will shift the discussion.
AWS Fargate for EKS (Elastic Kubernetes Service) became available at re:Invent. The offering provides a serverless framework that launches, scales and manages Kubernetes container clusters on AWS. Earlier this year, Google released a similar service called Cloud Run.
The services will likely gain popularity as customers deeply invested in containers see the light, Marsh said.
“I turned down too many clients last year that had container orchestration problems. That’s frankly a self-inflicted and uninteresting problem to solve in the era of serverless,” he said.
Containers’ allure is understandable. “As a logical and deployable construct, the simplicity is sexy,” Marsh said. “In practice, it is much more complicated.”
“Anything that allows companies to maintain the feeling of isolated and independent deployable components — mimicking our warm soft familiar blankie of a VM — with containers, but removes the headache, is going to see adoption,” he added.
In the backup market, 2019 started with a financial bang.
On back-to-back days in January, Rubrik announced a $261 million funding round, while Veeam disclosed that Insight Venture Partners invested an additional $500 million in the data backup and management vendor.
That data backup news set the tone for a busy year of more funding rounds, acquisitions, CEO changes, new products and key trends in a market that is constantly evolving.
Backup business busy with acquisitions, funding, leadership changes
Much like recent years, backup was big business in 2019.
Carbonite had one of the busiest years of all. In March, the data protection vendor acquired cybersecurity firm Webroot for $618.5 million, with a focus on fighting ransomware. In July, CEO Mohamad Ali left to take the same job at tech media company International Data Group, with board chairman Steve Munford filling the role at Carbonite on an interim basis. Then in November, following months of rumors of a possible sale, content management provider OpenText acquired Carbonite for $1.42 billion, to help expand its cloud offerings.
Commvault also transitioned to a new leader, as longtime CEO Bob Hammer stepped down and former Puppet CEO Sanjay Mirchandani stepped in. The company made its first acquisition in September, buying software-defined storage vendor Hedvig for $225 million to help converge primary and secondary storage for better data management.
Acronis became the latest unicorn, closing a $147 million funding round in September at a valuation of more than $1 billion. The company has shifted from a backup-focused product portfolio to a more comprehensive cyber protection platform. Like Carbonite, Acronis now has a major emphasis on cybersecurity.
Druva raised $130 million to expand its cloud-focused backup and recovery product set. Just a month later, the vendor acquired CloudLanes and its cloud migration technology.
Veeam Software, which is on the lookout for acquisitions, actually did the reverse this year. The vendor sold back AWS data protection provider N2WS, a company it acquired two years ago, to the original founders. Veeam is launching its own products focused on AWS and Azure backup.
In other data backup news:
Veritas Technologies acquired Aptare to improve its storage analytics and monitoring.
Spencer Kupferman took over as CEO of AWS data protection provider Cloud Daddy, a recent entrant into the market.
OwnBackup secured $23.25 million, its largest funding round, for expansion of its Salesforce data protection.
David Bennett, previously the chief revenue officer at Webroot, became the new CEO of backup and disaster recovery vendor Axcient.
SaaS backup continues its ascent
The software-as-a-service backup market remains one of the hottest in tech. The word is out that SaaS applications such as Salesforce, Google’s G Suite and Microsoft Office 365 need backup because these vendors typically protect their own infrastructure but not individual customer files.
Clumio came out of stealth in August with its cloud-based backup as a service. Noting that “SaaS is taking over,” Clumio CEO Poojan Kumar described his company’s founding vision as “building a data management platform on top of the public cloud.” The vendor originally offered protection for VMware on premises, VMware Cloud on AWS and native AWS services. While closing a $135 million funding round in November, Clumio pledged support for more public clouds, SaaS applications and containers, starting with Amazon Elastic Block Store protection.
Commvault launched a SaaS backup subsidiary, Metallic, with an emphasis on protecting servers and VMs, Office 365 and endpoints. The data protection vendor is aiming Metallic at smaller businesses than its usual enterprise customers.
In other notable data backup news on the SaaS front:
Druva enhanced its SaaS backup capabilities, adding restore options to its Office 365 protection and introducing backup for Slack and Microsoft Teams conversations.
Odaseva, a data protection vendor focused on Salesforce, unveiled a high-availability option for the customer relationship management provider.
The newly launched Actifio Go SaaS platform offers direct-to-cloud backup to AWS, Azure, Google, IBM and Wasabi public clouds.
Arcserve updated its Unified Data Protection product to provide granular, file-level backup and recovery for Office 365.
Veeam enhanced its Backup for Microsoft Office 365, the fastest-growing product in the company’s history, to back up directly to the cloud, either Azure or AWS.
Container backup takes the spotlight
One area that emerged in 2019 is backup of containers. As Kubernetes workloads in particular increase in popularity, organizations will need specifically targeted protection. Newer vendors, including Kasten, Robin and Portworx, focus on Kubernetes protection and management. Products from other vendors, including IBM Spectrum Protect, tackle Kubernetes protection in addition to other capabilities.
Container and SaaS backup will likely increase in 2020. Organizations should continue to keep an eye on data backup news, as products and businesses are evolving at a dramatic pace.