Cisco Webex Edge for Devices links on-prem endpoints to cloud

Businesses using on-premises video gear from Cisco can now get access to cloud services, while keeping their video infrastructure in place.

A new service, called Cisco Webex Edge for Devices, lets businesses connect on-premises video devices to cloud services like Webex Control Hub and the Webex Assistant. Customers get access to some cloud features but continue to host video traffic on their networks.

Many businesses aren’t ready to move their communications to the cloud. Vendors have responded by developing ways to mix on-premises and cloud technologies. Cisco Webex Edge for Devices is the latest offering of that kind.

“It gives users that cloudlike experience without the businesses having to fully migrate everything to the cloud,” said Zeus Kerravala, principal analyst at ZK Research.

Cisco wants to get as many businesses as possible to go all-in on the cloud. Webex Edge for Devices, introduced this month, tees up customers to make that switch. Companies will have the option of migrating their media services to the cloud after connecting devices to the service.

Webex Edge for Devices is available for no additional charge to businesses with an enterprise-wide Collaboration Flex Plan, a monthly per-user subscription. Alternatively, companies can purchase cloud licenses for the devices they want to register with the service for roughly $30 per device, per month. The service won’t work with gear that’s so old Cisco no longer supports it.

Video hardware linked to the cloud through the service will show up in the Webex Control Hub, a console for managing cloud devices. For on-premises devices, the control hub will provide diagnostic reports, usage data, and insight into whether the systems are online or offline.

Many businesses are already using a mix of on-premises and cloud video endpoints. Webex Edge for Devices will let those customers manage those devices from a single console. In the future, Cisco plans to add support for on-premises phones.

Businesses will also be able to sync on-premises video devices with cloud-based calendars from Microsoft and Google. That configuration will let the devices display a one-click join button for meetings scheduled on those calendars.

Another cloud feature unlocked by Webex Edge for Devices is the Webex Assistant, an AI voice system that lets users join meetings, place calls and query devices by voice.

In the future, Cisco plans to bring more cloud features to on-premises devices. Future services include People Insights, a tool that provides background information on meeting participants with information gleaned from the public internet.

Cisco first released a suite of services branded as Webex Edge in September 2018. The suite included Webex Edge Audio, Webex Edge Connect and Webex Video Mesh. The applications provide ways to use on-premises and cloud technologies in combination to improve the quality of audio and video calls.

Cisco’s release of Webex Edge for Devices underscores its strategy of supporting on-premises customers without forcing them to the cloud, said Irwin Lazar, analyst at Nemertes Research.


Public cloud vendors launch faulty services as race heats up

The public cloud services arena has turned a corner, introducing new challenges for customers, according to the latest edition of “Technology Radar,” a biannual report by global software consultancy ThoughtWorks. Competition has heated up, so top public cloud vendors are creating new cloud services at a fast clip. But in their rush to market, those vendors can roll out flawed services, which opens the door for resellers to help clients evaluate cloud options.

Public cloud has become a widely deployed technology, overcoming much of the resistance it had seen in the past. “Fears about items like security and sovereignty have been calmed,” noted Scott Shaw, director of technology for Asia Pacific region at ThoughtWorks. “Regulators have become more comfortable with the technology, so cloud interest has been turning into adoption.”

The cloud market shifts

With the sales of public cloud services rising, competition has intensified. Initially, Amazon Web Services dominated the market, but recently Microsoft Azure and Google Cloud Platform have been gaining traction among enterprise customers.

Corporations adopting public cloud have not had as much success as they had hoped for.
Scott Shaw, director of technology for Asia Pacific region, ThoughtWorks

One ripple effect is that the major public cloud providers have been trying to rapidly roll out differentiating new services. However, in their haste to keep pace, they can deliver services with rough edges and incomplete feature sets, according to ThoughtWorks.

Customers can get caught in this quicksand. “Corporations adopting public cloud have not had as much success as they had hoped for,” Shaw said.

Businesses try to deploy public cloud services based on the promised functionality but frequently hit roadblocks during implementations. “The emphasis on speed and product proliferation, through either acquisition or hastily created services, often results not merely in bugs but also in poor documentation, difficult automation and incomplete integration with vendors’ own parts,” the report noted.

[Chart: The global public cloud market share in 2019.]

Testing is required

ThoughtWorks recommended that organizations not assume all public cloud vendors’ services are of equal quality. They need to test out key capabilities and be open to alternatives, such as open source options and multi-cloud strategies.

Resellers can act as advisors to help customers make the right decisions as they consider new public cloud services, pointing out the strengths and flaws in individual cloud options, Shaw said.

To serve as advisors, however, resellers need in-depth, hands-on experience with the cloud services. “Channel partners cannot simply rely on a feature checklist,” Shaw explained. “To be successful, they need to have worked with the service and understand how it operates in practice and not just in theory.”


AWS security faces challenges after a decade of dominance

Amazon Web Services has a stranglehold on the public cloud market, but the company’s dominance in cloud security is facing new challenges.

The world’s largest cloud provider earned a reputation over the last 10 years as an influential leader in IaaS security, thanks to products ranging from AWS Identity & Access Management and Key Management Service in the earlier part of the decade to more recent developments in event-driven security. AWS security features helped the cloud service provider establish its powerful market position; according to Gartner, AWS earned an estimated $15.5 billion in revenue in 2018, for nearly 48% of the worldwide public IaaS market.

But at the re:Invent 2019 conference last month, many of the new security tools and features announced were designed to fix existing issues, such as misconfigurations and data exposures, rather than push AWS security to new heights. “There wasn’t much at re:Invent that I’d call security,” said Colin Percival, founder of open source backup service Tarsnap and an AWS Community Hero, via email. “Most of what people are talking about as security improvements address what I’d call misconfiguration risk.”

Meanwhile, Microsoft has not only increased its cloud market share but also invested heavily in new Azure security features that some believe rival AWS’ offerings. Rich Mogull, president and analyst at Securosis, said there are two sides to AWS security — the inherent security of the platform’s architecture, and the additional tools and products AWS provides to customers.

“In terms of the inherent security of the platform, I still think Amazon is very far ahead,” he said, citing AWS’ strengths such as availability zones, segregation, and granular identity and access management. “Microsoft has done a lot with Azure, but Amazon still has a multi-year lead. But when it comes to security products, it’s more of a mixed bag.”

Most of what people are talking about as [AWS] security improvements address what I’d call misconfiguration risk.
Colin Percival, founder, Tarsnap

Microsoft has been able to close the gap in recent years with the introduction of its own set of products and tools that compete with AWS security offerings, he said. “Azure Security Center and AWS Security Hub are pretty comparable, and both have strengths and weaknesses,” Mogull said. “Azure Sentinel is quite interesting and seems more complete than AWS Detective.”

New tools, old problems

Arguably the biggest AWS security development at re:Invent was a new tool designed to fix a persistent problem for the cloud provider: accidental S3 bucket exposures. The IAM Access Analyzer, which is part of AWS’ Identity and Access Management (IAM) console, alerts users when an S3 bucket is possibly misconfigured to allow public access via the internet and lets them block such access with one click.

AWS had previously made smaller moves, including changes to S3 security settings and interfaces, to curb the spate of high-profile and embarrassing S3 exposures in recent years. IAM Access Analyzer is arguably the strongest move yet to resolve the ongoing problem.
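
For teams that manage this from code rather than the console, a minimal sketch of the same idea is below, using boto3 calls from the AWS SDK; the analyzer ARN and account ID are hypothetical placeholders. It lists Access Analyzer findings for publicly reachable S3 buckets, then applies the account-wide public access block as the remediation.

```python
import boto3

# Hypothetical identifiers for illustration only.
ANALYZER_ARN = "arn:aws:access-analyzer:us-east-1:111122223333:analyzer/example-analyzer"
ACCOUNT_ID = "111122223333"

access_analyzer = boto3.client("accessanalyzer")
s3control = boto3.client("s3control")

# List active Access Analyzer findings for S3 buckets reachable from outside the account.
findings = access_analyzer.list_findings(
    analyzerArn=ANALYZER_ARN,
    filter={"resourceType": {"eq": ["AWS::S3::Bucket"]}},
)
for finding in findings["findings"]:
    if finding["status"] == "ACTIVE" and finding.get("isPublic"):
        print("Publicly accessible bucket:", finding["resource"])

# The "one click" remediation, done via API: block public access account-wide.
s3control.put_public_access_block(
    AccountId=ACCOUNT_ID,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```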

“They created the S3 exposure issue, but they also fixed it,” said Jerry Gamblin, principal security engineer at vulnerability management vendor Kenna Security, which is an AWS customer. “I think they’ve really stepped up in that regard.”

Still, some AWS experts feel the tool doesn’t fully resolve the problem. “Tools like IAM Access Analyzer will definitely help some people,” Percival said, “but there’s a big difference between warning people that they screwed up and allowing people to make systems more secure than they could previously.”

Scott Piper, an AWS security consultant and founder of Summit Route in Salt Lake City, said, “It’s yet another tool in the toolbelt and it’s free, but it’s not enabled by default.”

There are other issues with IAM Access Analyzer. “With this additional information, you have to get that to the customer in some way,” Piper said. “And doing that can be awkward and difficult with this service and others in AWS like GuardDuty, because it doesn’t make cross-region communication very easy.”

For example, EC2 regions are isolated to ensure the highest possible fault tolerance and stability for customers. But Piper said the isolation presents challenges for customers using multiple regions because it’s difficult to aggregate GuardDuty alerts to a single source, which requires security teams to analyze “multiple panes of glass instead of one.”
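
The “multiple panes of glass” problem becomes concrete in code: because each region’s GuardDuty detector must be queried separately, aggregating findings means looping over regions. The sketch below does that with standard boto3 GuardDuty calls; it assumes GuardDuty is enabled in each region and ignores pagination and per-call limits to stay short.

```python
import boto3

# Collect GuardDuty findings from every region into one list.
all_findings = []
regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]

for region in regions:
    guardduty = boto3.client("guardduty", region_name=region)
    for detector_id in guardduty.list_detectors()["DetectorIds"]:
        finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
        if finding_ids:
            details = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)
            all_findings.extend(details["Findings"])

print(f"Collected {len(all_findings)} findings from {len(regions)} regions")
```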

Metadata headaches

AWS recently addressed another security issue that became a high-profile concern for enterprises following the Capital One breach last summer. The attacker in that breach exploited an SSRF vulnerability to access the AWS metadata service for the company's EC2 instances, which allowed them to obtain credentials stored in the service.

The Capital One breach led to criticism from security experts as well as lawmakers such as Sen. Ron Wyden (D-Ore.), who questioned why AWS hadn’t addressed SSRF vulnerabilities for its metadata service. The lack of security around the metadata service has concerned some AWS experts for years; in 2016, Percival penned a blog post titled “EC2’s most dangerous feature.”

“I think the biggest problem Amazon has had in recent years — judging by the customers affected — is the lack of security around their instance metadata service,” Percival told SearchSecurity.

In November, AWS made several updates to the metadata service to prevent unauthorized access, including the option to turn off access to the service altogether. Mogull said the metadata service update was crucial because it improved security around AWS account credentials.

But like other AWS security features, the metadata service changes are not enabled by default. Percival said enabling the update by default would’ve caused issues for enterprise applications and services that rely on the existing version of the service. “Amazon was absolutely right in making their changes opt-in since if they had done otherwise, they would have broken all of the existing code that uses the service,” he said. “I imagine that once more or less everyone’s code has been updated, they’ll switch this from opt-in to opt-out — but it will take years before we get to that point.”
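
For teams ready to opt in, the change is exposed through the EC2 API. The sketch below, using a hypothetical instance ID, shows the two opt-in postures: requiring IMDSv2 session tokens, or disabling the metadata endpoint entirely for instances that never need it.

```python
import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance

# Opt in to the hardened metadata service: only token-based (IMDSv2) requests allowed.
ec2.modify_instance_metadata_options(
    InstanceId=INSTANCE_ID,
    HttpTokens="required",
    HttpPutResponseHopLimit=1,
    HttpEndpoint="enabled",
)

# Or, for instances that never need it, turn the metadata endpoint off entirely.
ec2.modify_instance_metadata_options(
    InstanceId=INSTANCE_ID,
    HttpEndpoint="disabled",
)
```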

Percival also said the update is “incomplete” because it addresses common misconfigurations but not software bugs. (Percival is working on an open source tool that he says will provide “a far more comprehensive fix to this problem,” which he hopes to release later this month.)

Still, Piper said the metadata service update is an important step for AWS security because it showed the cloud provider was willing to acknowledge there was a problem with the existing service. That willingness and responsiveness hasn’t always been there in the past, he said.

“AWS has historically had the philosophy of providing tools to customers, and it’s kind of up to customers to use them and if they shoot themselves in the foot, then it’s the customers’ fault,” Piper said. “I think AWS is starting to improve and change that philosophy to help customers more.”

AWS security’s road ahead

While the metadata service update and IAM Access Analyzer addressed lingering security issues, experts highlighted other new developments that could strengthen AWS’ position in cloud security.

AWS Nitro Enclaves, for example, is a new EC2 capability introduced at re:Invent 2019 that allows customers to create isolated instances for sensitive data. The Nitro Enclaves, which will be available in preview this year, are virtual machines attached to EC2 instances but have CPU and memory isolation from the instances and can be accessed only through secure local connections.

“Nitro Enclaves will have a big impact for customers because of its isolation and compartmentalization capabilities,” which will give enterprises’ sensitive data an additional layer of protection against potential breaches, Mogull said.

Percival agreed that Nitro Enclaves could possibly “raise the ceiling” for AWS security, though he was cautious about how widely they will be used. “Enclaves are famously difficult for people to use correctly, so it’s hard to predict whether they will make a big difference or end up being another of the many ‘Amazon also has this feature, which nobody ever uses’ footnotes.”

Experts also said AWS’ move to strengthen its ARM-based processor business could have major security implications. The cloud provider announced at re:Invent 2019 that it will be launching EC2 instances that run on its new, customized ARM chips, dubbed Graviton2.

Gamblin said the Graviton2 processors are a security play in part because of recent microprocessor vulnerabilities and side channel attacks like Meltdown and Spectre. While some ARM chips were affected by both Meltdown and Spectre, subsequent side channel attacks and Spectre variants have largely affected x86 processors.

“Amazon doesn’t want to rely on other chips that may be vulnerable to side channel attacks and may have to be taken offline and rebooted or suffer performance issues because of mitigations,” Gamblin said.

Percival said he was excited by the possibility of the cloud provider participating in ARM’s work on the “Digital Security by Design” initiative, a private-sector partnership with the UK that is focused in part on fundamentally restructuring — and improving — processor security. The results of that project will be years down the road, Percival said, but it would show a commitment from AWS to once again raising the bar for security.

“If it works out — and it’s a decade-long project, which is inherently experimental in nature — it could be the biggest step forward for computer security in a generation.”


IT services acquisitions get early 2020 start

IT services acquisitions got off to a fast start in 2020 with at least three transactions surfacing in the first week of the new year.

The early activity suggests the brisk pace of deals among services providers in 2019 may persist a bit longer. Consider the three deals detailed below.

IT services acquisitions: Deal drivers

The recent IT services acquisitions emphasize the value buyers place on vertical market expertise, technology skill sets, software development and geographic reach.

Perficient’s purchase of MedTouch, based in Somerville, Mass., has boosted Perficient’s healthcare business revenue by nearly 10%, according to vice president Ed Hoffman. The acquisition adds 50 employees to the company’s roster.

Perficient’s healthcare business, the company’s largest vertical market, delivers business optimization and customer experience services to payer, provider and pharmaceutical organizations. MedTouch adds digital marketing “solutions and services focused on patient acquisition, customer experience, patient engagement and loyalty, and physician marketing,” Hoffman said.

MedTouch also contributes to Perficient’s customer experience (CX) work. Perficient’s Sitecore practice designs, builds and delivers websites based on the Sitecore Experience Platform. MedTouch brings experience with Sitecore, Acquia and other CX platforms, Hoffman noted.

Quisitive, meanwhile, has expanded its Microsoft technology resources and its geographic reach through its acquisition of Los Altos, Calif.-based Menlo Technologies. Menlo provides capabilities in Azure, Microsoft Office 365 and Microsoft Dynamics, along with software development services.

The Menlo Technologies deal furthers the company’s goal of becoming a Microsoft solutions provider spanning North America. Quisitive launched that strategy in 2018  and has since expanded to eight North American offices from its original base of operations in Dallas and Denver.


Quisitive CEO Mike Reinhart said the Menlo Technologies acquisition and its 2019 purchase of Corporate Renaissance Group “give us the fuel necessary to drive organic growth” as it builds its Microsoft business.

“These two acquisitions were critical in the sense that it added deep expertise in Microsoft Dynamics 365 capabilities, first-party SaaS IP, offshore development capability and full U.S. geographic coverage,” Reinhart said.

Ahead’s acquisition of Platform Consulting Group, based in Denver, marks the company’s fourth transaction. In October 2019, Ahead merged with Data Blue and acquired Sovereign Systems. The company also purchased Link Solutions Group last year.

Platform Consulting Group focuses on cloud-native application development and modern development frameworks, targeting enterprise clients. Customers’ efforts to rationalize and refactor existing applications, or to build new applications with cloud-native patterns, influenced Ahead’s purchase of Platform.

“This was part of the core thesis for our acquisition,” said Eric Kaplan, CTO at Ahead. “In addition, the lines are blurring between infrastructure and applications and our clients are looking for help from partners who can provide an end-to-end solution.”

More deals on tap?

Industry executives suggested the current run of IT services acquisitions looks set to continue in 2020.

We expect continued consolidation in the space in 2020 and remain in active dialogue with many firms.
Ed Hoffman, vice president, Perficient

“We expect continued consolidation in the space in 2020 and remain in active dialogue with many firms,” Perficient’s Hoffman said.

He said Perficient is “highly selective when it comes to M&A,” emphasizing industry expertise, partner ecosystems, corporate cultures and values, and “exceptionally talented workforces.”

Reinhart also pointed to more consolidation. “Microsoft is exploding with growth in their cloud offerings and the partner ecosystem continues to be fragmented,” he said. Fragmentation, he added, opens an opportunity for Quisitive to look at regional and point-solution partners. Microsoft has said it has more than 64,000 cloud partners.

Quisitive, Reinhart said, “will be very surgical” in its acquisition approach, making sure targeted companies offer differentiated market strength or add to Quisitive’s geographic expansion and Microsoft specialization.

“I believe M&A activity will be strong, but highly correlated to overall markets,” Kaplan noted.

In Kaplan’s view, valuations in particular markets are inflated. “Segments of the market where skills are at a premium and where repeatable solutions exist will obviously command a premium,” he added.


AWS Outposts vs. Azure Stack vs. HCI

Giants Amazon and Microsoft offer cloud products and services that compete in areas usually reserved for the strengths that traditional hyper-converged infrastructure platforms bring to the enterprise IT table. These include hybrid cloud offerings AWS Outposts, which Amazon made generally available late last year, and Azure Stack from Microsoft.

An integrated hardware and software offering, Azure Stack is designed to deliver Microsoft Azure public cloud services to enable enterprises to construct hybrid clouds in a local data center. It delivers IaaS and PaaS for organizations developing web apps. By sharing its code, APIs and management portal with Microsoft Azure, Azure Stack provides a common platform to address hybrid cloud issues, such as maintaining consistency between cloud and on-premises environments. Stack is for those who want the benefits of a cloud-like platform but must keep certain data private due to regulations or some other constraint.

AWS Outposts is Amazon’s on-premises version of its IaaS offering. Amazon targets AWS Outposts at those who want to run workloads on Amazon Web Services, but instead of in the cloud, do so inside their own data centers to better meet regulatory requirements and, for example, to reduce latency.

Let’s delve deeper into AWS Outposts vs. Azure Stack to better see how they compete with each other and your typical hyper-converged infrastructure (HCI) deployment.

[Chart: Hybrid cloud storage use cases]

What is AWS Outposts?

AWS Outposts is Amazon’s acknowledgment that most enterprise class organizations prefer hybrid cloud to a public cloud-only model. Amazon generally has acted solely as a hyperscale public cloud provider, leaving its customers’ data center hardware needs for other vendors to handle. With AWS Outposts, however, Amazon is — for the first time — making its own appliances available for on-premises use.

AWS Outposts customers can run AWS on premises. They can also extend their AWS virtual private clouds into their on-premises environments, so a single virtual private cloud can contain both cloud and data center resources. That way, workloads with low-latency or geographical requirements can remain on premises while other workloads run in the Amazon cloud. Because Outposts is essentially an on-premises extension of the Amazon cloud, it also aims to ease the migration of workloads between the data center and the cloud.

What is Microsoft Azure Stack?

Although initially marketed as simply a way to host Azure services on premises, Azure Stack has evolved into a portfolio of products. The three products that make up the Azure Stack portfolio include Azure Stack Edge, Azure Stack Hub and Azure Stack HCI.

Azure Stack Edge is a cloud-managed appliance that enables you to run managed virtual machine (VM) and container workloads on premises. While this can also be done with Windows Server, the benefit to using Azure Stack Edge is workloads can be managed with a common tool set, whether they’re running on premises or in the cloud.

Azure Stack Hub is used for running cloud applications on premises. It’s mostly for situations in which data sovereignty is required or where connectivity isn’t available.

As its name implies, Azure Stack HCI is a version of Azure Stack that runs on HCI hardware.

AWS Outposts vs. Azure Stack vs. HCI

To appreciate how AWS Outposts competes with traditional HCI, consider common HCI use cases. HCI is often used as a virtualization platform. While AWS Outposts will presumably be able to host Elastic Compute Cloud virtual machine instances, the bigger news is that Amazon is preparing to release a VMware-specific version of Outposts in 2020. The VMware Cloud on AWS Outposts will allow a managed VMware software-defined data center to run on the Outposts infrastructure.

Organizations are also increasingly using HCI as a disaster recovery platform. While Amazon isn’t marketing Outposts as a DR tool, the fact that Outposts acts as a gateway between on-premises services and services running in the Amazon cloud means the platform will likely be well positioned as a DR enabler.

Many organizations have adopted hyper-converged systems as a platform for running VMs and containers. Azure Stack Edge may end up displacing some of those HCIs if an organization is already hosting VMs and containers in the Azure cloud. As for Azure Stack Hub, it seems unlikely that it will directly compete with HCI, except possibly in some specific branch office scenarios.

The member of the Azure Stack portfolio that’s most likely to compete with traditional hyper-convergence is Azure Stack HCI. It’s designed to run scalable VMs and provide those VMs with connectivity to Azure cloud services. These systems are being marketed for use in branch offices and with high-performance workloads.

Unlike first-generation HCI systems, Azure Stack HCI will provide scalability for both compute and storage. This could make it a viable replacement for traditional HCI platforms.

In summary, when it comes to AWS Outposts vs. Azure Stack or standard hyper-convergence, all three platforms have their merits, without any one being clearly superior to the others. If an organization is trying to choose between the three, then my advice would be to choose the platform that does the best job of meshing with the existing infrastructure and the organization’s operational requirements. If the organization already has a significant AWS or Azure footprint, then Outposts or Azure Stack would probably be a better fit, respectively. Otherwise, traditional HCI is probably going to entail less of a learning curve and may also end up being less expensive.


US Army HR expects migration to Azure cloud will cut IT costs

The U.S. Army is moving its civilian HR services from on-premises data centers to Microsoft’s cloud. The migration to Azure has the makings of a big change. Along with shifting Army HR services to the cloud, it plans to move off some of its legacy applications.

It’s a move that the Army said will give it more flexibility and reduce its costs.

The Army Civilian Human Resources Agency (CHRA) is responsible for supporting approximately 300,000 Army civilian employees and 33,000 Department of Defense employees. It provides a full range of HR services.

The migration to Azure was noted in a contract announcement by Accelera Solutions Inc., a systems integrator based in Fairfax, Va. The $40.4 million Army contract is for three years. The firm is a Microsoft federal cloud partner.

The federal government, including the Department of Defense, is broadly consolidating data centers and shifting some systems to the cloud.

Shift to cloud will improve HR capabilities

The Army said it is moving its civilian HR services to the cloud for three reasons. The Army “has determined that the cloud is the most effective way to host CHRA operated programs,” said Matthew Leonard, an Army spokesperson, in an email. It also needs “a more agile operating environment,” he said.

Third, migrating to Azure “will allow for improved overall capabilities at lower cost,” Leonard said. “We will not need to expend resources to maintain data centers and expensive hardware,” he said.

Some of the Army’s savings will come by turning off resources outside of business hours, such as those used for development.
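
That kind of saving is typically automated with a scheduled job. A minimal sketch, assuming development instances carry an Environment=dev tag (a hypothetical convention), would stop any such instances still running at the end of the business day:

```python
import boto3

ec2 = boto3.client("ec2")

# Find running instances tagged as development resources (tag scheme is hypothetical).
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

# Stop them for the night; a morning job would call start_instances the same way.
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print("Stopped:", instance_ids)
```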

The Army didn’t provide an estimate of cost savings. But the Defense Department, in budget documents, has estimated cumulative data center consolidation savings of $751 million from 2017 to 2024.

Our goal is to significantly reduce the number of applications through the use of modern, out-of-the-box platforms.
Matthew Leonard, spokesperson, Army

Some existing Army HR applications will undergo a migration to Azure, but new cloud-based HR applications will also be adopted as part of this shift.

“Our goal is to significantly reduce the number of applications through the use of modern, out-of-the-box platforms,” Leonard said. Over time, the Army plans to move other applications to the cloud.

Accelera declined to comment on the award, but in its announcement said its work includes migrating the Army’s HR applications from on-premises data centers to the Azure cloud. It will also operate the cloud environment.

Microsoft recently was awarded a broader Defense Department contract to host military services in its Azure cloud. The Joint Enterprise Defense Infrastructure contract, or JEDI, is estimated at $10 billion over 10 years. AWS, a JEDI contract finalist, is challenging the award in court.

“The CHRA cloud initiative does seem to be driven more by the data center consolidation initiative that’s been around since the Obama administration, and much less by the current flap over JEDI,” said Ray Bjorklund, president of government IT market research firm BirchGrove Consulting LLC in Maryland. Migration to the cloud has been a “recurring method” of IT consolidation, he said. 


AWS, NFL machine learning partnership looks at player safety

The NFL will use AWS’ AI and machine learning products and services to better simulate and predict player injuries, with the goal of ultimately improving player health and safety.

The new NFL machine learning and AWS partnership, announced during a press event Thursday with AWS CEO Andy Jassy and NFL Commissioner Roger Goodell at AWS re:Invent 2019, will change the game of football, Goodell said.

“It will be changing the way it’s played, it will [change] the way it’s coached, the way we prepare athletes for the game,” he said.

The NFL machine learning journey

The partnership builds off Next Gen Stats, an existing NFL and AWS agreement that has helped the NFL capture and process data on its players. That partnership, revealed back in 2017, introduced new sensors on player equipment and the football to capture real-time location, speed and acceleration data.

That data is then fed into AWS data analytics and machine learning tools to provide fans, broadcasters and NFL Clubs with live and on-screen stats and predictions, including expected catch rates and pass completion probabilities.

Taking data from that, as well as from other sources, including video feeds, equipment choice, playing surfaces, player injury information, play type, impact type and environmental factors, the new NFL machine learning and AWS partnership will create a digital twin of players.

[Photo: AWS CEO Andy Jassy, left, and NFL Commissioner Roger Goodell announced a new AI and machine learning partnership at AWS re:Invent 2019.]

The NFL began the project with a collection of different data sets from which to gather information, said Jeff Crandall, chairman of the NFL Engineering Committee, during the press event.

It wasn’t just passing data, but also “the equipment that players were wearing, the frequency of those impacts, the speeds the players were traveling, the angles that they hit one another,” he continued.

Typically used in manufacturing to predict machine outputs and potential breakdowns, a digital twin is essentially a complex virtual replica of a machine or person formed out of a host of real-time and historical data. Using machine learning and predictive analytics, a digital twin can be fed into countless virtual scenarios, enabling engineers and data scientists to see how its real-life counterpart would react.

The new AWS and NFL partnership will create digital athletes, or digital twins of a scalable sampling of players, that can be fed into infinite scenarios without risking the health and safety of real players. Data collected from these scenarios is expected to provide insights into changes to game rules, player equipment and other factors that could make football a safer game.

“For us, what we see the power here is to be able to take the data that we’ve created over the last decade or so” and use it, Goodell said. “I think the possibilities are enormous.”

Partnership’s latest move to enhance safety

It will be changing the way it’s played, it will [change] the way it’s coached, the way we prepare athletes for the game.
Roger Goodell, commissioner, NFL

New research in recent years has highlighted the extreme health risks of playing football. In 2017, researchers from the VA Boston Healthcare System and the Boston University School of Medicine published a study in the Journal of the American Medical Association that indicated football players are at a high risk for developing long-term neurological conditions.

The study, which did not include a control group, looked at the brains of high school, college and professional-level football players. Of the 111 NFL-level football players the researchers looked at, 110 of them had some form of degenerative brain disease.

The new partnership is just one of the changes the NFL has made over the last few years in an attempt to make football safer for its players. Other recent efforts include new helmet rules, and a recent $3 million challenge to create safer helmets.

The AWS and NFL partnership “really has a chance to transform player health and safety,” Jassy said.

AWS re:Invent, the annual flagship conference of AWS, was held this week in Las Vegas.


4 SD-WAN vendors integrate with AWS Transit Gateway

Several software-defined WAN vendors have announced integration with Amazon Web Services’ Transit Gateway. For SD-WAN users, the integrations promise simplified management of policies governing connectivity among private data centers, branch offices and AWS virtual networks.

Stitching together workloads across cloud and corporate networks is complex and challenging. AWS tackles the problem by making AWS Transit Gateway the central router of all traffic emanating from connected networks.

Cisco, Citrix Systems, Silver Peak and Aruba, a Hewlett Packard Enterprise Company, launched integrations with the gateway this week. The announcements came after AWS unveiled the AWS Transit Gateway at its re:Invent conference in Las Vegas.

SD-WAN vendors lining up quickly to support the latest AWS integration tool didn’t surprise analysts. “The ease and speed of integration with leading IaaS platforms are key competitive issues for SD-WAN for 2020,” said Lee Doyle, the principal analyst for Doyle Research.

By acting as the network hub, Transit Gateway reduces operational costs by simplifying network management, according to AWS. Before the new service, companies had to make individual connections between networks outside of AWS and those serving applications inside the cloud provider.
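
In practice, the hub-and-spoke model means creating one transit gateway and attaching each network to it, rather than peering every pair of networks. A minimal boto3 sketch is below; the VPC and subnet IDs are hypothetical, and a real workflow would wait for the gateway to become available before attaching anything to it.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the hub once.
tgw = ec2.create_transit_gateway(Description="central hub for branch and VPC traffic")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a VPC to the hub (repeat per VPC); IDs here are hypothetical.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0abc1234def567890",
    SubnetIds=["subnet-0123456789abcdef0"],
)
```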

The potential benefits of Transit Gateway made connecting to it a must-have for SD-WAN suppliers. However, tech buyers should pay close attention to how each vendor configures its integration.

“SD-WAN vendors have different ways of doing things, and that leads to some solutions being better than others,” Doyle said.

What the 4 vendors are offering

Cisco said its integration would let IT teams use the company’s vManage SD-WAN controller to administer connectivity from branch offices to AWS. As a result, engineers will be able to apply network segmentation and data security policies universally through the Transit Gateway.

Aruba will let customers monitor and manage connectivity either through the Transit Gateway or Aruba Central. The latter is a cloud-based console used to control an Aruba-powered wireless LAN.

Silver Peak is providing integration between the Unity EdgeConnect SD-WAN platform and Transit Gateway. The link will make the latter the central control point for connectivity.

Finally, Citrix’s Transit Gateway integration would let its SD-WAN orchestration service connect branch offices and data centers to AWS. The connections will be particularly helpful to organizations running Citrix’s virtual desktops and associated apps on AWS.


Amazon cloud database and data analytics expand

Amazon Web Services is quite clear about it: it wants organizations of all sizes, with nearly any use case, to run databases in the cloud.

At the AWS re:Invent 2019 conference in Las Vegas, the cloud giant outlined the Amazon cloud database strategy, which hinges on wielding multiple purpose-built offerings for different use cases. 

AWS also revealed new services on Dec. 3, the first day of the conference, including the Amazon Managed Apache Cassandra Service, a supported cloud version of the popular Cassandra NoSQL database. The vendor also unveiled several new features for the Amazon Redshift data warehouse, providing enhanced data management and analytics capabilities.

“Quite simply, Amazon is looking to provide one-stop shopping for all data management and analytics needs on AWS,” said Carl Olofson, an analyst at IDC. “For those who are all in for AWS, this is all good. For their competitors, such as Snowflake competing with Redshift and DataStax competing with the new Cassandra service, this will motivate a stronger competitive effort.”

Amazon cloud database strategy

AWS CEO Andy Jassy, in his keynote, detailed the rationale behind Amazon’s cloud database strategy and why one database isn’t enough.

“A lot of companies primarily use relational databases for every one of their workloads, and the day of customers doing that has come and gone,” Jassy said.

There is too much data, cost and complexity involved in using a relational database for all workloads. That has sparked demand for purpose-built databases, according to Jassy.

[Photo: AWS CEO Andy Jassy gives the keynote at AWS re:Invent 2019.]

For example, Jassy noted that ride sharing company Lyft has millions of drivers and geolocation coordinates, which isn’t a good fit for a relational database.

For the Lyft use case and others like it, there is a need for a fast, low-latency key-value store, which is why AWS has the DynamoDB database. For workloads that require sub-microsecond latency, an in-memory database is best, and that is where ElastiCache fits in. For those who need to traverse relationships across large, highly connected datasets, a graph database is a good option, which is what the Amazon Neptune service delivers. DocumentDB, on the other hand, is a document database intended for those who work with documents and JSON.
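
To make the key-value case concrete, the sketch below shows the sort of single-digit-millisecond lookup a geolocation workload relies on, using DynamoDB through boto3; the table name and attributes are hypothetical.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("driver_locations")  # hypothetical table with partition key driver_id

# Write the latest position for a driver.
table.put_item(Item={
    "driver_id": "driver-42",
    "lat": "37.7749",
    "lon": "-122.4194",
    "updated_at": "2019-12-03T10:15:00Z",
})

# Read it back by key: a single-digit-millisecond lookup at scale.
location = table.get_item(Key={"driver_id": "driver-42"})["Item"]
print(location["lat"], location["lon"])
```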

If you want the right tool for the right job that gives you differentiated performance productivity and customer experience, you want the right purpose-built database for that job.
Andy Jassy, CEO, AWS

“Swiss Army knives are hardly ever the best solution for anything other than the most simple tasks,” Jassy said, referring to the classic multi-purpose tool. “If you want the right tool for the right job that gives you differentiated performance productivity and customer experience, you want the right purpose-built database for that job.”

Amazon Managed Apache Cassandra Service

While AWS offers many different databases as part of the Amazon cloud database strategy, one variety it did not possess was Apache Cassandra, a popular open source NoSQL database.

It’s challenging to manage and scale Cassandra, which is why Jassy said he sees a need for a managed version running as an AWS service. The Amazon Managed Apache Cassandra Service launched as a preview on Dec. 3, with general availability set for sometime in 2020.

With the managed service there are no clusters for users to manage, and the platform provides single-digit millisecond latency, Jassy noted. He added that existing Cassandra tools and drivers will all work, making it easier for users to migrate on-premises Cassandra workloads to the cloud.
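
That driver compatibility is the point of the managed service: the same open source Python driver used against a self-hosted cluster should, in principle, point at the managed endpoint instead. The sketch below illustrates the idea; the endpoint, port and credentials shown are assumed values, not guaranteed ones.

```python
import ssl
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Endpoint, port and service credentials are assumed values for illustration.
ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
auth = PlainTextAuthProvider(username="example-user", password="example-password")

cluster = Cluster(
    ["cassandra.us-east-1.amazonaws.com"],
    port=9142,
    ssl_context=ssl_context,
    auth_provider=auth,
)
session = cluster.connect()

# Any standard CQL works once connected.
for row in session.execute("SELECT keyspace_name FROM system_schema.keyspaces"):
    print(row.keyspace_name)
```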

Redshift improvements

AWS also detailed a series of moves at the conference that enhance its Redshift data warehouse platform. Among the new features Jassy talked about was Lake House, which enables data queries not just in local Redshift nodes but also across multiple data lakes and S3 cloud storage buckets.

“Not surprisingly, as people start querying across both Redshift and S3 they also want to be able to query across their operational databases where a lot of important data sets live,” Jassy said. “So today, we just released something called federated query which now enables users to query across Redshift, S3 and our relational database services.”
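
Conceptually, federated query lets a single SQL statement join warehouse tables with tables that live in an operational database. A minimal sketch of what that looks like from Python is below; the cluster endpoint, credentials, schema and table names are all hypothetical, and the external schema is assumed to have been created beforehand.

```python
import psycopg2

# Connection details are hypothetical.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="example_user",
    password="example_password",
)

# Join a local Redshift table with a table exposed through an external schema
# that maps to an operational Postgres database (created beforehand with
# CREATE EXTERNAL SCHEMA). Schema and table names are hypothetical.
sql = """
    SELECT o.order_id, o.total, c.email
    FROM sales.orders AS o
    JOIN postgres_live.customers AS c
      ON o.customer_id = c.customer_id
    LIMIT 10;
"""

with conn, conn.cursor() as cur:
    cur.execute(sql)
    for row in cur.fetchall():
        print(row)
```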

Storage and compute for a data warehouse are closely related, but there is often a need to scale them independently. To that end, AWS announced as part of the Amazon cloud database strategy its new Redshift RA3 instances with managed storage. Jassy explained that as users exhaust the amount of storage available in a local Redshift instance, the RA3 service will move the less frequently accessed data over to S3.

Redshift AQUA

As data is spread across different resources, it generates a need to accelerate query performance. Jassy introduced the new Advanced Query Accelerator (AQUA) for Redshift to help meet that challenge.

Jassy said that AQUA provides an innovative way to do hardware-accelerated caching to improve query performance. With AQUA, AWS has built a high-speed cache architecture on top of S3 that scales out in parallel to many different nodes. Each node hosts custom-designed AWS processors to speed up operations.

“This makes your processing so much faster that you can actually do the compute on the raw data without having to move it,” Jassy said.


AWS moves into quantum computing services with Braket

Amazon debuted a preview version of its quantum computing services this week, along with a new quantum computing research center and lab where AWS cloud users can work with quantum experts to identify practical, short-term applications.

The new AWS quantum computing managed service, called Amazon Braket, is aimed initially at scientists, researchers and developers, giving them access to quantum systems provided by IonQ, D-Wave and Rigetti.

Amazon’s quantum computing services news comes less than a month after Microsoft disclosed it is developing a chip capable of running quantum software. Microsoft also previewed a version of its Azure Quantum Service and struck partnerships with IonQ and Honeywell to help deliver the Azure Quantum Service.

In November, IBM said its Qiskit QC development framework supports IonQ’s ion trap technology, used by IonQ and Alpine Quantum Technologies.

Google recently claimed it was the first quantum vendor to achieve quantum supremacy — the ability to solve complex problems that classical systems either can’t solve or would take them an extremely long time to solve. Company officials said it represented an important milestone.

In that particular instance, Google’s Sycamore processor solved a difficult problem in just 200 seconds — a problem that would take a classical computer 10,000 years to solve. The claim was met with a healthy amount of skepticism by some competitors and other more objective sources as well. Most said they would reserve judgement on the results until they could take a closer look at the methodology involved.

Cloud services move quantum computing forward

Peter Chapman, CEO and president of IonQ, doesn’t foresee any conflicts between IonQ’s agreements with rivals Microsoft and AWS. AWS jumping into the fray with Microsoft and IBM will help push quantum computing closer to the limelight and make users more aware of the technology’s possibilities, he said.

“There’s no question AWS’s announcements give greater visibility to what’s going on with quantum computing,” Chapman said. “Over the near term they are looking at hybrid solutions, meaning they will mix quantum and classical algorithms making [quantum development software] easier to work with,” he said.

There’s no question AWS’s announcements will give greater visibility to what’s going on with quantum computing.
Peter Chapman, CEO and president, IonQ

Microsoft and AWS are at different stages of development, making it difficult to gauge which company has advantages over the other. But what Chapman does like about AWS right now is the set of APIs that allows a developer’s application to run across the different quantum architectures of IonQ (ion trap), D-Wave (quantum annealing) and Rigetti (superconducting chips).

“At the end of the day it’s not how many qubits your system has,” Chapman said. “If your application doesn’t run on everyone’s hardware, users will be disappointed. That’s what is most important.”

Another analyst agreed that the sooner quantum algorithms can be melded with classical algorithms to produce something useful in an existing corporate IT environment, the faster quantum computing will be accepted.

“If you have to be a quantum expert to produce anything meaningful, then whatever you do produce stays in the labs,” said Frank Dzubeck, president of Communications Network Architects, Inc. “Once you integrate it with the classical world and can use it as an adjunct for what you are doing right now, that’s when [quantum technology] grows like crazy.”

Microsoft’s Quantum Development Kit, which the company open sourced earlier this year, also allows developers to create applications that operate across a range of different quantum architectures. Like AWS, Microsoft plans to combine quantum and classical algorithms to produce applications and services aimed at the scientific markets and ones that work on existing servers.

One advantage AWS and Microsoft provide for smaller quantum computing companies like IonQ, according to Chapman, is not just access to their mammoth user bases, but support for things like billing.

“If customers want to run something on our computers, they can just go to their dashboard and charge it to their AWS account,” Chapman said. “They don’t need to set up an account with us. We also don’t have to spend tons of time on the sales side convincing Fortune 1000 users to make us an approved vendor. Between the two of them [Microsoft and AWS], they have the whole world signed up as approved vendors,” he said.

The mission of the AWS Center for Quantum Computing will be to solve longer-term technical problems using quantum computers. Company officials said they have users ready to begin experimenting with the newly minted Amazon Braket but did not identify any users by name.

The closest they came was a prepared statement by Charles Toups, vice president and general manager of Boeing’s Disruptive Computing and Networks group. The company is investigating how quantum computing, sensing and networking technologies can enhance Boeing products and services for its customers, according to the statement.

“Quantum engineering is starting to make more meaningful progress and users are now asking for ways to experiment and explore the technology’s potential,” said Charlie Bell, senior vice president with AWS’s Utility Computing Services group.

AWS’s assumption going forward is that quantum computing will be a cloud-first technology, and the cloud is how AWS will give its users their first quantum experience, via Amazon Braket and the Quantum Solutions Lab.

Corporate and third-party developers can create their own customized algorithms with Braket, which gives them the option of executing either low-level quantum circuits or fully managed hybrid algorithms. This makes it easier to choose between software simulators and whatever quantum hardware they select.
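
A minimal sketch of that workflow with the Amazon Braket Python SDK is below: build a small circuit, then submit it to a device. The S3 bucket name is hypothetical and the simulator ARN is assumed here; swapping in a hardware device's ARN targets real qubits instead.

```python
from braket.circuits import Circuit
from braket.aws import AwsDevice

# A two-qubit Bell-pair circuit: Hadamard on qubit 0, then CNOT to qubit 1.
bell = Circuit().h(0).cnot(0, 1)

# Assumed simulator ARN; a QPU ARN (e.g. an IonQ or Rigetti device) works the same way.
device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")

# Results land in an S3 location you own; the bucket name here is hypothetical.
task = device.run(bell, ("example-braket-results-bucket", "bell-test"), shots=1000)
print(task.result().measurement_counts)
```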

The AWS Center for Quantum Computing is based at Caltech, which has long invested in both experimental and theoretical quantum science and technology.
