
Hybrid, cost management cloud trends to continue in 2020

If the dawn of cloud computing can be pegged to AWS’ 2006 launch of EC2, then the market has entered its gangly teenage years as the new decade looms.

While the metaphor isn’t perfect, some direct parallels can be seen in the past year’s cloud trends.

For one, there’s the question of identity. In 2019, public cloud providers extended services back into customers’ on-premises environments and developed services meant to accommodate legacy workloads, rather than emphasize transformation. 

Maturity remains a hurdle for the cloud computing market, particularly in the area of cost management and optimization. Some progress occurred on this front in 2019, but there’s much more work to be done by both vendors and enterprises.

Experimentation was another hallmark of 2019 cloud computing trends, with the continued move toward containerized workloads and serverless computing. Here’s a look back at some of these cloud trends, as well as a peek ahead at what’s to come in 2020.

Hybrid cloud evolves

Hybrid cloud has been one of the more prominent cloud trends for a few years, but 2019 saw key changes in how it is marketed and sold.

Companies such as Dell EMC, Hewlett Packard Enterprise and, to a lesser extent, IBM have scuttled or scaled back their public cloud efforts and shifted to cloud services and hardware sales. This trend has roots prior to 2019, but the changes took greater hold this year.


Today, “there’s a battle between the cloud-haves and cloud have-nots,” said Holger Mueller, an analyst with Constellation Research in Cupertino, Calif.

Google, as the third-place competitor in public cloud, needs to attract more workloads. Its Anthos platform for hybrid and multi-cloud container orchestration projects openness but still ties customers into a proprietary system.

In November, Microsoft introduced Azure Arc, which extends Azure management tools to on-premises and cloud platforms beyond Azure, although the latter functionality is limited for now.

Earlier this month, AWS announced the long-expected general availability of Outposts, a managed service that puts AWS-built server racks loaded with AWS software inside customer data centers to address low-latency and data residency requirements.

It’s similar in ways to Azure Stack, which Microsoft launched in 2017, but one key difference is that partners supply Azure Stack hardware. In contrast, Outposts has made AWS a hardware vendor and thus a threat to Dell EMC, HPE and others who are after customers’ remaining on-premises IT budgets, Mueller said.

But AWS needs to prove itself capable of managing infrastructure inside customer data centers, with which those rivals have plenty of experience.

Looking ahead to 2020, one big question is whether AWS will join its smaller rivals by embracing multi-cloud. Based on the paucity of mentions of that term at re:Invent this year, and the walled-garden approach embodied by Outposts, the odds don’t look favorable.

Bare-metal options grow

Thirteen years ago, AWS launched its Elastic Compute Cloud (EC2) service with a straightforward proposition: Customers could buy VM-based compute capacity on demand. That remains a core offering of EC2 and its rivals, although the number of instance types has grown exponentially.

More recently, bare-metal instances have come into vogue. Bare metal strips out the virtualization layer, giving customers direct access to the underlying hardware. It’s a useful option for workloads that can’t tolerate the performance overhead of VMs, and it avoids the “noisy neighbor” problem that crops up on shared infrastructure.
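On AWS, bare-metal capacity is exposed as ordinary EC2 instance types with a “.metal” suffix, so provisioning one looks much like launching any other instance. Here is a minimal sketch using boto3; the AMI ID, key pair and subnet are placeholders rather than values from any real account.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a bare-metal instance type (e.g., m5.metal). The AMI ID,
# key pair and subnet below are placeholders for illustration only.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # hypothetical AMI
    InstanceType="m5.metal",
    KeyName="my-key-pair",                # hypothetical key pair
    SubnetId="subnet-0123456789abcdef0",  # hypothetical subnet
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])
```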

Google rolled out managed bare-metal instances in November, following AWS, Microsoft, IBM and Oracle. Smaller providers such as CenturyLink and Packet also offer bare-metal instances. The segment overall is poised for significant growth, reaching more than $26 billion by 2025, according to one estimate.

Multiple factors will drive this growth, according to Deepak Mohan, an analyst with IDC.

Two of the biggest influences in IaaS today are enterprise workload movement into public cloud environments and cloud expansions into customers’ on-premises data centers, evidenced by Outposts, Azure Arc and the like, Mohan said.

The first trend has compelled cloud providers to support more traditional enterprise workloads, such as applications that don’t take well to virtualization or that are difficult to refactor for the cloud. Bare metal gets around this issue.

“As enterprise adoption expands, we expect bare metal to play an increasingly critical role as the primary landing zone for enterprise workloads as they transition into cloud,” Mohan said.

Cloud cost management gains focus

The year saw a wealth of activity around controlling cloud costs, whether through native tools or third-party applications. Among the more notable moves was Microsoft’s extension of Azure Cost Management to AWS, with support for Google Cloud expected next year.

But the standout development was AWS’ November launch of Savings Plans, which was seen as a vast improvement over its longstanding Reserved Instances offering.

Reserved Instances give big discounts to companies that are willing to make upfront spending commitments but have been criticized for inflexibility and a complex set of options.


“Savings Plans have massively reduced the complexity in gaining such discounts, by allowing companies to make commitments to AWS without having to be too prescriptive on the application’s specific requirements,” said Owen Rogers, who heads the digital economics unit at 451 Research. “We think this will appeal to enterprises and will eventually replace reserved instances as AWS’ de facto committed pricing model.”

The new year will see enterprises increasingly seek to optimize their costs, not just manage and report on them, and Savings Plans fit into this expectation, Rogers added.
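For teams weighing that switch, AWS’ Cost Explorer API can return Savings Plans purchase recommendations programmatically. A rough sketch with boto3 follows; the parameter values are illustrative, and the response field names used in the loop should be treated as assumptions rather than a definitive schema.

```python
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer API

# Ask AWS what Compute Savings Plans commitment it would recommend,
# based on the last 30 days of usage. Parameter values are illustrative.
resp = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)

# Field names below are assumptions about the response shape.
recommendation = resp["SavingsPlansPurchaseRecommendation"]
for detail in recommendation.get("SavingsPlansPurchaseRecommendationDetails", []):
    print(detail["HourlyCommitmentToPurchase"],
          detail["EstimatedMonthlySavingsAmount"])
```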

If your enterprise hasn’t gotten serious about cloud cost management, doing so would be a good New Year’s resolution. There’s a general prescription for success, according to Corey Quinn, cloud economist at the Duckbill Group.

“Understand the goals you’re going after,” Quinn said. “What are the drivers behind your business?” Break down cloud bills into what they mean on a division, department and team-level basis. It’s also wise to start with the big numbers, Quinn said. “You need to understand that line item that makes up 40% of your bill.”
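One common way to get that breakdown, assuming workloads carry cost-allocation tags (the “team” tag key below is a hypothetical example), is to group spend by tag through the Cost Explorer API. A minimal boto3 sketch:

```python
import boto3

ce = boto3.client("ce", region_name="us-east-1")

# Break one month of spend down by a cost-allocation tag. The "team"
# tag key and the date range are hypothetical examples.
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-11-01", "End": "2019-12-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```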

While some companies try to make cloud cost savings the job of many people across finance and IT, in most cases the responsibility shouldn’t fall on engineers, Quinn added. “You want engineers to focus on whether they can build a thing, and then cost-optimize it,” he said.

Serverless vs. containers debate mounts

One topic that could come up more frequently in 2020 is the debate over the relative merits of serverless computing versus containers.

Serverless advocates such as Tim Wagner, inventor of AWS Lambda, contend that a movement is afoot.

At re:Invent, the serverless features AWS launched were not “coolness for the already-drank-the-Kool-Aid crowd,” Wagner said in a recent Medium post. “This time, AWS is trying hard to win container users over to serverless. It’s the dawn of a new ‘hybrid’ era.”

Another serverless expert hailed Wagner’s stance.

“I think the container trend, at its most mature state, will resemble the serverless world in all but execution duration,” said Ryan Marsh, a DevOps trainer with TheStack.io in Houston.


The containers vs. serverless debate has raged for at least a couple of years, and the notion that neither approach can effectively answer every problem persists. But observers such as Wagner and Marsh believe that advances in serverless tooling will shift the discussion.

AWS Fargate for EKS (Elastic Kubernetes Service) became available at re:Invent. The offering provides a serverless framework that launches, scales and manages Kubernetes container clusters on AWS. Earlier this year, Google released a similar service called Cloud Run.
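With Fargate for EKS, the serverless behavior hinges on a Fargate profile that tells the cluster which pods should run on Fargate rather than on EC2 worker nodes. A minimal sketch with boto3, assuming an existing EKS cluster; the cluster name, role ARN and subnet ID are placeholders.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Create a Fargate profile so that pods in the "serverless" namespace
# are scheduled onto Fargate instead of EC2 worker nodes. The cluster
# name, role ARN and subnet ID below are placeholders.
eks.create_fargate_profile(
    fargateProfileName="serverless-profile",
    clusterName="my-cluster",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/eks-fargate-pod-role",
    subnets=["subnet-0123456789abcdef0"],
    selectors=[{"namespace": "serverless"}],
)
```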

The services will likely gain popularity as customers deeply invested in containers see the light, Marsh said.

“I turned down too many clients last year that had container orchestration problems. That’s frankly a self-inflicted and uninteresting problem to solve in the era of serverless,” he said.

Containers’ allure is understandable. “As a logical and deployable construct, the simplicity is sexy,” Marsh said. “In practice, it is much more complicated.”

“Anything that allows companies to maintain the feeling of isolated and independent deployable components — mimicking our warm soft familiar blankie of a VM — with containers, but removes the headache, is going to see adoption,” he added.


AWS, NFL machine learning partnership looks at player safety

The NFL will use AWS’ AI and machine learning products and services to better simulate and predict player injuries, with the goal of ultimately improving player health and safety.

The new NFL machine learning and AWS partnership, announced during a press event Thursday with AWS CEO Andy Jassy and NFL Commissioner Roger Goodell at AWS re:Invent 2019, will change the game of football, Goodell said.

“It will be changing the way it’s played, it will [change] the way it’s coached, the way we prepare athletes for the game,” he said.

The NFL machine learning journey

The partnership builds off Next Gen Stats, an existing NFL and AWS agreement that has helped the NFL capture and process data on its players. That partnership, revealed back in 2017, introduced new sensors on player equipment and the football to capture real-time location, speed and acceleration data.

That data is then fed into AWS data analytics and machine learning tools to provide fans, broadcasters and NFL Clubs with live and on-screen stats and predictions, including expected catch rates and pass completion probabilities.

Taking data from that, as well as from other sources, including video feeds, equipment choice, playing surfaces, player injury information, play type, impact type and environmental factors, the new NFL machine learning and AWS partnership will create a digital twin of players.

AWS CEO Andy Jassy, left, and NFL Commissioner Roger Goodell announced a new AI and machine learning partnership at AWS re:Invent 2019.

The NFL began the project with a collection of different data sets from which to gather information, said Jeff Crandall, chairman of the NFL Engineering Committee, during the press event.

It wasn’t just passing data, but also “the equipment that players were wearing, the frequency of those impacts, the speeds the players were traveling, the angles that they hit one another,” he continued.

Typically used in manufacturing to predict machine outputs and potential breakdowns, a digital twin is essentially a complex virtual replica of a machine or person formed out of a host of real-time and historical data. Using machine learning and predictive analytics, a digital twin can be fed into countless virtual scenarios, enabling engineers and data scientists to see how its real-life counterpart would react.

The new AWS and NFL partnership will create digital athletes, or digital twins of a scalable sampling of players, that can be fed into infinite scenarios without risking the health and safety of real players. Data collected from these scenarios is expected to provide insights into changes to game rules, player equipment and other factors that could make football a safer game.

“For us, what we see the power here is to be able to take the data that we’ve created over the last decade or so” and use it, Goodell said. “I think the possibilities are enormous.”

Partnership’s latest move to enhance safety


New research in recent years has highlighted the extreme health risks of playing football. In 2017, researchers from the VA Boston Healthcare System and the Boston University School of Medicine published a study in the Journal of the American Medical Association that indicated football players are at a high risk for developing long-term neurological conditions.

The study, which did not include a control group, looked at the brains of high school, college and professional-level football players. Of the 111 NFL-level players examined, 110 had some form of degenerative brain disease.

The new partnership is just one of the changes the NFL has made over the last few years in an attempt to make football safer for its players. Other recent efforts include new helmet rules, and a recent $3 million challenge to create safer helmets.

The AWS and NFL partnership “really has a chance to transform player health and safety,” Jassy said.

AWS re:Invent, the annual flagship conference of AWS, was held this week in Las Vegas.


SageMaker Studio makes model building, monitoring easier

LAS VEGAS — AWS launched a host of new tools and capabilities for Amazon SageMaker, AWS’ cloud platform for creating and deploying machine learning models; drawing the most notice was Amazon SageMaker Studio, a web-based integrated development environment (IDE).

In addition to SageMaker Studio, the IDE for building, using and monitoring machine learning models, the other new AWS products aim to make it easier for non-expert developers to create models and to make those models more explainable.

During a keynote presentation at the AWS re:Invent 2019 conference here Tuesday, AWS CEO Andy Jassy described five other new SageMaker tools: Experiments, Model Monitor, Autopilot, Notebooks and Debugger.

“SageMaker Studio along with SageMaker Experiments, SageMaker Model Monitor, SageMaker Autopilot and SageMaker Debugger collectively add lots more lifecycle capabilities for the full ML [machine learning] lifecycle and to support teams,” said Mike Gualtieri, an analyst at Forrester.

New tools

SageMaker Studio, Jassy claimed, is a “fully-integrated development environment for machine learning.” The new platform pulls together all of SageMaker’s capabilities, along with code, notebooks and datasets, into one environment. AWS intends the platform to simplify SageMaker, enabling users to create, deploy, monitor, debug and manage models in one environment.

Google and Microsoft have similar machine learning IDEs, Gualtieri noted, adding that Google plans for its IDE to be based on DataFusion, its cloud-native data integration service, and to be connected to other Google services.

SageMaker Notebooks aims to make it easier to create and manage open source Jupyter notebooks. With elastic compute, users can create one-click notebooks, Jassy said. The new tool also enables users to more easily adjust compute power for their notebooks and transfer the content of a notebook.

Meanwhile, SageMaker Experiments automatically captures input parameters, configuration and results of developers’ machine learning models to make it simpler for developers to track different iterations of models, according to AWS. Experiments keeps all that information in one place and introduces a search function to comb through current and past model iterations.
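At the API level, Experiments organizes work into experiments and trials that individual training runs can be attached to. A rough boto3 sketch, with illustrative names:

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# An experiment groups related model iterations; each run is tracked
# as a trial. Names below are illustrative.
sm.create_experiment(
    ExperimentName="churn-model",
    Description="Iterations of a churn prediction model",
)
sm.create_trial(
    TrialName="churn-model-run-1",
    ExperimentName="churn-model",
)
# Training or processing jobs can then be associated with the trial so
# their parameters, configuration and results are captured together.
```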

AWS CEO Andy Jassy talks about new Amazon SageMaker capabilities at re:Invent 2019

“It is a much, much easier way to find, search for and collect your experiments when building a model,” Jassy said.

As the name suggests, SageMaker Debugger enables users to debug and profile their models more effectively. The tool collects and monitors key metrics from popular frameworks, and provides real-time metrics about accuracy and performance, potentially giving developers deeper insights into their own models. It is designed to make models more explainable for non-data scientists.

SageMaker Model Monitor also tries to make models more explainable by helping developers detect and fix concept drift, which refers to the evolution of data and data relationships over time. Unless models are updated in near real time, concept drift can drastically skew the accuracy of their outputs. Model Monitor constantly scans the data and model outputs to detect concept drift, alerting developers when it detects it and helping them identify the cause.

Automating model building

With Amazon SageMaker Autopilot, developers can automatically build models without, according to Jassy, sacrificing explainability.

Autopilot is “AutoML with full control and visibility,” he asserted. AutoML essentially refers to automating the process of building and developing machine learning models.

The new Autopilot module automatically selects the correct algorithm based on the available data and use case and then trains 50 unique models. Those models are then ranked by accuracy.
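A minimal sketch of starting an Autopilot job with boto3, assuming training data already sits in S3; the bucket, role ARN and target column are placeholders.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Start an Autopilot job against a CSV prefix in S3. Autopilot selects
# the algorithm and trains candidate models ranked by accuracy. The
# bucket, role ARN and target column below are placeholders.
sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot-1",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/churn/train/",
        }},
        "TargetAttributeName": "churned",   # column to predict
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/churn/output/"},
    RoleArn="arn:aws:iam::123456789012:role/sagemaker-execution-role",
)
```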

“AutoML is the future of ML development. I predict that within two years, 90 percent of all ML models will be created using AutoML by data scientists, developers and business analysts,” Gualtieri said.


“SageMaker Autopilot is a must-have for AWS, but it probably will help” other vendors also, including such AWS competitors as DataRobot because the AWS move further legitimizes the automated machine learning approach, he continued.

Other AWS rivals, including Google Cloud Platform, Microsoft Azure, IBM, SAS, RapidMiner, Aible and H2O.ai, also have automated machine learning capabilities, Gualtieri noted.

However, according to Nick McQuire, vice president at advisory firm CCS Insight, some of the new AWS capabilities are innovative.

“Studio is a great complement to the other products as the single pane of glass developers and data scientists need and its incorporation of the new features, especially Model Monitor and Debugger, are among the first in the market,” he said.

“Although AWS may appear late to the game with Studio, what they are showing is pretty unique, especially the positioning of the IDE as similar to traditional software development with … Experiments, Debugger and Model Monitor being integrated into Studio,” McQuire said. “These are big jumps in the SageMaker capability on what’s out there in the market.”

Google also recently released several new tools aimed at delivering explainable AI, plus a new product suite, Google Cloud Explainable AI.


AWS expands its cloud cost optimization portfolio

AWS’ latest tool aims to help customers save money and optimize their workloads on the cloud platform, and it also extends AWS’ cost management capabilities to a broader base of customers.

As an opt-in feature, Amazon EC2 now scans customer usage over the previous two weeks and creates “resource optimization recommendations” for actions to address idle and underutilized instances. AWS defines idle instances as those whose maximum CPU utilization is below 1%, and underutilized instances as those whose maximum CPU utilization falls between 1% and 40%, according to a blog post.

The system recommends that customers shut off idle instances entirely. For underutilized ones, AWS simulates the same level of usage on a smaller instance in the same service tier and shows customers the cost savings of consolidating multiple instances into one. Customers get a summary of potential resource optimizations, including estimates of monthly savings, and can also download lists of recommendations.

At present, the recommendations cover major EC2 instance families but not GPU-based ones, according to the blog.
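Recommendation data of this kind is also available programmatically through the Cost Explorer rightsizing API. A rough boto3 sketch follows; the response field names used here should be treated as assumptions.

```python
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer API

# Pull EC2 rightsizing recommendations. Each record indicates whether
# an instance should be terminated (idle) or modified (underutilized).
resp = ce.get_rightsizing_recommendation(Service="AmazonEC2")

# Field names below are assumptions about the response shape.
for rec in resp.get("RightsizingRecommendations", []):
    resource_id = rec["CurrentInstance"]["ResourceId"]
    action = rec["RightsizingType"]
    print(resource_id, action)
```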

AWS advances cloud cost optimization

The new feature bears similarity at a glance to the likes of AWS Cost Explorer and AWS Trusted Advisor, but there are differences, and it should be welcomed by customers, analysts said.


“This aligns with one of the top pain points customers highlight as they start scaling up their cloud usage, which is that optimal service selection and configuration are not easy, and suboptimal configuration results in high costs as usage increases,” said Deepak Mohan, an analyst with IDC.


With resource optimization recommendations, AWS also presents cost management features to a broader set of customers, Mohan said.

Cost Explorer gives customers report-generation tools to examine their usage over time. It also includes forecasting capabilities, but Cost Explorer is more a means to examine the past.

Trusted Advisor has a broader remit, as it looks at not just cost issues but also security and governance, fault tolerance and performance improvements. The full feature set of Trusted Advisor is only available to customers with business and enterprise-level support plans, while the new capabilities are available to all customers at no charge, Mohan noted.

Moreover, Trusted Advisor alerts admins that an instance has a poor level of utilization, which might prompt them to investigate which instance might be better, said Owen Rogers, vice president of cloud transformation and digital economics at 451 Research. By comparison, these resource optimization recommendations tell admins which instance would be a better fit to keep the application performing well but also at a lower price point.


“This is a nice free feature that I think many customers will take advantage of,” he said. “After all, if you can save money without impacting deliverables, why wouldn’t you?”

AWS has not achieved anything revolutionary here. Microsoft and Google offer similar tools for cloud cost management, as do third parties such as ParkMyCloud, VMware CloudHealth and OpsRamp, Rogers added.

But AWS’ complexity with regard to prices and SKUs has long been a sore spot for customers. Its latest move ties generally into remarks Amazon CTO Werner Vogels made in a recent interview with TechTarget.

“I think there’s a big role for automation,” Vogels said. “I think helping customers make better choices there through automation and tools is definitely a path we are looking for.”


AWS SSO puts Amazon at the center of IT access

AWS’ latest service is another step in the company’s goal to be the hub for corporations’ IT activity.

AWS Single Sign-On (AWS SSO), added with little fanfare after AWS re:Invent 2017, is a welcome addition for many users. The service centralizes the management of multiple AWS accounts, as well as additional third-party applications tethered to those accounts.

AWS SSO uses AWS Organizations and can be extended with a configuration wizard to Security Assertion Markup Language (SAML) applications. It also comes with built-in integrations with popular services such as Box, Office 365, Salesforce and Slack.

Users of the service access AWS and outside applications through a single portal, within individually assigned access policies. Sign-in activities and administrative changes are tracked by AWS CloudTrail, and companies can audit employee use of those services themselves or use an outside service such as Splunk or Sumo Logic to analyze those logs.
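As a simple example of that kind of audit, recent events for a given user can be pulled from CloudTrail with a short script. A boto3 sketch, with “jdoe” as a placeholder username:

```python
import boto3
from datetime import datetime, timedelta

ct = boto3.client("cloudtrail", region_name="us-east-1")

# Look up the past day of events recorded for a single user; "jdoe"
# is a placeholder username.
resp = ct.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "jdoe"}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)

for event in resp["Events"]:
    print(event["EventTime"], event["EventName"])
```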

Permissions to various Amazon cloud services and outside apps can be configured in AWS SSO for common IT job categories or by creating custom groupings. The service also connects to on-premises Microsoft Active Directory to identify credentials and manage which employees or groups of employees can access which AWS accounts.

The service has limitations. It’s currently confined to the U.S. East region in Virginia, and can’t be accessed through the AWS Command Line Interface or via an API. Also, any changes to permissions can only be made by a master account.

AWS has a reputation for going after segments of IT that it sees as vulnerable, and this could be a direct shot at some of the prominent SSO providers on the market. Okta in particular is popular among the enterprise market, so this free alternative from AWS could be attractive, said Adam Book, principal cloud engineer at Relus Technologies, an AWS consulting partner in Peachtree Corners, Ga.


“You can manage all your apps in one place and not pay for a third party,” he said. “Amazon then becomes your one trusted source for everything.”

AWS solved some of the complexity around managing accounts when it enabled administrators to establish roles for users, but this simplifies things further with a single point to track work across development, QA and production accounts, Book said. It also helps to manage onboarding and removal of employees’ credentials based on their employment status.

“For large organizations single sign-on is important,” he said. “I don’t think it’s as much for the Amazon accounts, but once you get into third-party apps your users don’t want to remember 50 different passwords.”


Others see AWS SSO not just as a way to unseat Okta, but as a challenge to Active Directory as well. AWS SSO can be used with or without the Microsoft directory service, which isn’t ideal for cloud environments despite an updated version in Microsoft Azure, said Joe Emison, founder and CTO of BuildFax, an AWS customer in Austin, Texas.

“Active Directory, at its core, is really based around the idea that everyone is going to be connected to a local network to start up their computer and connect to a master server and get rules and policies from there,” he said. “That’s nice if everyone goes into the office, but this is not the world we live in.”

Compared to AWS Identity and Access Management (IAM), Active Directory lacks fine-grained access control to assign permissions and can be difficult to integrate with SAML-based applications, Emison said. By incorporating IAM tools within SSO and extending that level of control to outside applications, AWS could eventually supplant Active Directory as organizations’ preferred means to manage employee access.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at [email protected].