
AWS, NFL machine learning partnership looks at player safety

The NFL will use AWS’ AI and machine learning products and services to better simulate and predict player injuries, with the goal of ultimately improving player health and safety.

The new NFL machine learning and AWS partnership, announced during a press event Thursday with AWS CEO Andy Jassy and NFL Commissioner Roger Goodell at AWS re:Invent 2019, will change the game of football, Goodell said.

“It will be changing the way it’s played, it will [change] the way it’s coached, the way we prepare athletes for the game,” he said.

The NFL machine learning journey

The partnership builds off Next Gen Stats, an existing NFL and AWS agreement that has helped the NFL capture and process data on its players. That partnership, revealed back in 2017, introduced new sensors on player equipment and the football to capture real-time location, speed and acceleration data.

That data is then fed into AWS data analytics and machine learning tools to provide fans, broadcasters and NFL Clubs with live and on-screen stats and predictions, including expected catch rates and pass completion probabilities.
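The article doesn't disclose how those probabilities are computed, but the general shape of such a model can be sketched as a toy logistic regression over tracking features. The features and weights below are invented for illustration, not Next Gen Stats' real model:

```python
import math

def completion_probability(separation_yds, air_yds, qb_pressure):
    """Toy logistic model for pass-completion probability.

    Features and weights are illustrative assumptions only."""
    # More receiver separation helps; longer throws and pressure hurt.
    score = 0.9 * separation_yds - 0.08 * air_yds - 1.2 * qb_pressure + 0.5
    return 1.0 / (1.0 + math.exp(-score))

# A short, open throw is far more likely to be completed than a
# contested deep ball under pressure.
print(round(completion_probability(3.0, 8.0, 0), 2))
print(round(completion_probability(0.5, 40.0, 1), 2))
```

A production model would learn its weights from the sensor data the partnership collects rather than hand-picking them.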

Taking data from that, as well as from other sources, including video feeds, equipment choice, playing surfaces, player injury information, play type, impact type and environmental factors, the new NFL machine learning and AWS partnership will create a digital twin of players.

AWS CEO Andy Jassy, left, and NFL Commissioner Roger Goodell announced a new AI and machine learning partnership at AWS re:Invent 2019.

The NFL began the project with a collection of different data sets from which to gather information, said Jeff Crandall, chairman of the NFL Engineering Committee, during the press event.

It wasn’t just passing data, but also “the equipment that players were wearing, the frequency of those impacts, the speeds the players were traveling, the angles that they hit one another,” he continued.

Typically used in manufacturing to predict machine outputs and potential breakdowns, a digital twin is essentially a complex virtual replica of a machine or person formed out of a host of real-time and historical data. Using machine learning and predictive analytics, a digital twin can be fed into countless virtual scenarios, enabling engineers and data scientists to see how its real-life counterpart would react.

The new AWS and NFL partnership will create digital athletes, or digital twins of a scalable sampling of players, that can be fed into infinite scenarios without risking the health and safety of real players. Data collected from these scenarios is expected to provide insights into changes to game rules, player equipment and other factors that could make football a safer game.
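A minimal sketch of that idea, using a made-up risk function in place of the models a real digital twin would learn from sensor and injury data:

```python
import random

def injury_risk(speed_mph, impact_angle_deg, helmet_grade):
    """Hypothetical risk function -- a stand-in for a learned model."""
    base = speed_mph / 25.0                  # faster impacts are riskier
    angle = abs(impact_angle_deg) / 90.0     # head-on hits are riskier
    protection = 1.0 - 0.15 * helmet_grade   # better helmets reduce risk
    return min(1.0, base * angle * protection)

def simulate(helmet_grade, trials=10_000, seed=42):
    """Feed the 'digital athlete' many random impact scenarios and
    return the average risk -- no real player is ever put at risk."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        speed = rng.uniform(5, 22)    # mph
        angle = rng.uniform(0, 90)    # degrees
        total += injury_risk(speed, angle, helmet_grade)
    return total / trials

# Compare an equipment change across identical scenario sets.
print(f"standard helmet: {simulate(helmet_grade=1):.3f}")
print(f"improved helmet: {simulate(helmet_grade=3):.3f}")
```

Because the scenarios are seeded identically, any difference in average risk is attributable to the equipment change being tested, which is the kind of counterfactual question a digital twin exists to answer.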

“For us, what we see the power here is to be able to take the data that we’ve created over the last decade or so” and use it, Goodell said. “I think the possibilities are enormous.”

Partnership’s latest move to enhance safety

It will be changing the way it’s played, it will [change] the way it’s coached, the way we prepare athletes for the game.
Roger Goodell, Commissioner, NFL

New research in recent years has highlighted the extreme health risks of playing football. In 2017, researchers from the VA Boston Healthcare System and the Boston University School of Medicine published a study in the Journal of the American Medical Association that indicated football players are at a high risk for developing long-term neurological conditions.

The study, which did not include a control group, looked at the brains of high school, college and professional-level football players. Of the 111 NFL-level football players the researchers looked at, 110 of them had some form of degenerative brain disease.

The new partnership is just one of the changes the NFL has made over the last few years in an attempt to make football safer for its players. Other recent efforts include new helmet rules and a $3 million challenge to create safer helmets.

The AWS and NFL partnership “really has a chance to transform player health and safety,” Jassy said.

AWS re:Invent, the annual flagship conference of AWS, was held this week in Las Vegas.


VMworld pushes vSAN HCI to cloud, edge

VMware executives predict the vSAN hyper-converged software platform will grow rapidly into a key building block for the vendor’s strategy to conquer the cloud and other areas outside the data center.

VMware spent a lot of time discussing the roadmap for its vSAN hyper-converged infrastructure (HCI) software at VMworld 2018 last month. The vSAN news included short-term specifics with the launch of a private beta program for the next version, along with more general overarching plans for the future.

VMware executives made it clear that vSAN HCI will play a big role in its long-term cloud strategy. They painted HCI as a technology spanning from the data center to the cloud to the edge, as it brings storage, compute and other resources together into a single platform.

The vSAN HCI software is built into VMware’s vSphere hypervisor, and is sold as part of integrated appliances such as Dell EMC VxRail and as Ready Node bundles with servers. VMware claims more than 14,000 vSAN customers, and IDC lists it as the revenue leader among HCI software.

VMware opened its private beta program for vSAN 6.7.1 during VMworld, adding file and native cloud storage and data protection features.

vSAN HCI: From DC to cloud to edge

During his opening day keynote at VMworld, VMware CEO Pat Gelsinger called vSAN “the engine that’s just been moving rapidly to take over the entire integration of compute and storage to expand to other areas.”

Where is HCI moving to? Just about everywhere, according to VMware executives. That includes Project Dimension, a planned hardware-as-a-service offering designed to bring VMware SDDC infrastructure on premises.

“The definition of HCI has been expanding,” said Yanbing Li, VMware senior vice president and general manager of storage and availability. “We started with a simple mission of converging compute and storage by putting both on a software-defined platform running on standard servers. This is where a lot of our customer adoption has happened. But the definition of HCI is expanding up through the stack, across to the cloud and it’s supporting a wide variety of applications.”

vSAN beta: Snapshots, native cloud storage

The vSAN 6.7.1 beta includes policy-based native snapshots for data protection, NFS file services and support for persistent storage for containers. VMware also added the ability for vSAN to manage Amazon Elastic Block Storage (EBS) in AWS, a capacity reclamation feature and a Quickstart guided cluster creation wizard.

If it pans out as we hope, it will be data center as a service.
Chris Gregg, CIO, Mercy Ships

Lee Caswell, VMware vice president of products for storage and availability, said vSAN can now take point-in-time snapshots across a cluster. The snapshot capability is managed through VMware’s vCenter. There is no native vSAN replication yet, however. Replication still requires vSphere Replication.

Caswell said the file services include a clustered namespace, allowing users to move files to VMware Cloud on AWS and back without requiring separate mount points for each node.

The ability to manage elastic capacity in AWS allows customers to scale storage and compute independently.

“This is our first foray into storage-only scaling,” Caswell said.

The automatic capacity reclamation feature will reclaim unused capacity on expensive solid-state drive storage.

Caswell said there was no timetable for when the features will make it into a general availability version of vSAN.

Mercy Ships was among the customers at VMworld expanding their vSAN HCI adoption. Mercy Ships uses Dell EMC VxRail appliances running vSAN in its Texas data center and is adding VxRail on two hospital ships that bring volunteer medical teams to underdeveloped areas. They include the current Africa Mercy floating hospital and a second ship under construction.

“The data center for us needs to be simple, straightforward, scalable and supportable,” Mercy Ships CIO Chris Gregg said. “That’s the dream we’re seeing through hyper-converged infrastructure. If it pans out as we hope, it will be data center as a service. Then, as an IT department we can focus on things that are really important to the organization. For us, that means serving more patients.”

Emerging technologies to fuel collaboration industry growth

The future of any industry is difficult to predict. However, some trends in the unified communications and collaboration industry indicate 2018 will be a strong year of growth.

Over the next two years, 80% of companies intend to adopt UCC tools, according to a survey published by market research firm Ovum. More importantly, 78% of the 1,300 global companies surveyed have already set aside budgets to adopt UCC tools — that’s a promising sign.

But what exactly will that growth in the unified communications and collaboration industry look like? What existing trends will continue? And what new trends will emerge?

The continued rise of APIs in the collaboration industry

As more companies emphasize streamlining their workflows, more IT departments will embed communication APIs into their existing applications. Integrating communication APIs is faster, easier and cheaper than a full internal development, which can take months. Additionally, deploying commercial software, which requires companies to run their own global infrastructure, can be burdensome.

In 2017, 25% of companies used APIs to embed UC features, according to a report from Vidyo, a video conferencing provider based in Hackensack, N.J. This trend is expected to continue, as half of companies plan to deploy APIs this year, and another 78% plan to integrate APIs for embedded video in the future.

Embedded communication APIs also provide contextual information for workflows. Information out of context does not exactly help organizations, and it provides users with a fragmented experience — even with a project management interface to organize workflows.

In 2018, look for new features to put more contextualized information at workers’ fingertips. For example, a sidebar during a video conference could offer users information, such as certain content to address during the meeting or tasks associated with the active speaker.
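A hypothetical sketch of that sidebar logic; the speaker names, fields and content here are all invented:

```python
# Context a meeting app might attach to each participant (invented data).
MEETING_CONTEXT = {
    "alice": {"tasks": ["Finalize Q3 budget"], "docs": ["budget_draft.xlsx"]},
    "bob":   {"tasks": ["Ship API v2"],        "docs": ["api_changelog.md"]},
}

def sidebar_for(active_speaker: str) -> dict:
    """Return the content a video-conference sidebar would show for
    the current active speaker."""
    return MEETING_CONTEXT.get(active_speaker.lower(),
                               {"tasks": [], "docs": []})

print(sidebar_for("Alice"))
```

The real work in such a feature is upstream of this lookup: detecting the active speaker and pulling their tasks from project-management systems.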

The AI party arrives in the collaboration industry

As we push into 2018, keep an eye on the emergence of AI in the unified communications and collaboration industry. Virtual assistants and bots, for example, use AI to enrich the meeting experience.

Imagine sitting through a long conference call when the discussion moves to a topic that interests you. You call out, “Start recording conversation,” and a virtual assistant immediately begins recording. Then, you say, “Send me a transcript of this conversation.” At the end of the call, the virtual assistant sends you a transcript of the conversation, complete with an analysis and action items, that you can replay.

Emerging technology in the contact center

Unified communications apps are revolutionizing business in general. But I predict 2018 will be a banner year for the customer-support industry in particular. Some companies have already integrated click-to-call features into their chatbots, but the quality of those features to date has been subpar.

Companies will move from telephony to instant video calls when connecting customers with agents. Thanks to instant translation and transcription services, the video widget will include real-time subtitles translated into French, English, Spanish or whatever language the customer needs to understand the service agent.

The agent experience will also improve. We’ll start to see AI bots on the back end that transcribe conversations and index all the words, so agents can be prompted with special content as the conversation unfolds. Agents could then send information to customers on the spot with a voice command.
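A simplified sketch of that back-end indexing, assuming transcription has already happened (a production bot would use real speech-to-text and smarter matching):

```python
from collections import defaultdict

def index_transcript(utterances):
    """Build a word -> utterance-number index from a running transcript,
    so matching help content can surface as keywords come up."""
    index = defaultdict(set)
    for i, text in enumerate(utterances):
        for word in text.lower().split():
            index[word.strip(".,?!")].add(i)
    return index

# Invented mapping of keywords to canned content the agent can send.
HELP_CONTENT = {"router": "router_setup_guide.pdf",
                "billing": "billing_faq.html"}

transcript = ["My router keeps dropping the connection.",
              "Also I have a billing question."]
index = index_transcript(transcript)
prompts = [doc for word, doc in HELP_CONTENT.items() if word in index]
print(prompts)
```

As the conversation grows, the same index lets the agent jump back to the utterance where a keyword was first mentioned.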

Customers and agents will also be able to illustrate what they’re talking about with augmented reality (AR). Imagine you’re on the phone with a Comcast agent, and you can show the agent your router with your iPhone. The agent could send you diagrams of what to do — superimposed onto your router in AR. This process is now possible, thanks to Google and Apple embracing AR toolkits.

These emerging technologies indicate a bright future for the unified communications and collaboration industry. Whatever the next year holds, good luck in your journey.

Stephane Giraudie is CEO and founder of Voxeet, a provider of voice over IP web conferencing software based in San Francisco. 

AI washing muddies the artificial intelligence products market

Analysts predict that by 2020, artificial intelligence technologies will be in almost every new software and service release. And if they’re not actually in them, technology vendors will probably use smoke and mirrors marketing tactics to make users believe they are.

Many tech vendors already shoehorn the AI label into the marketing of every new piece of software they develop, and it’s causing confusion in the market. To muddle things further, major software vendors accuse their competitors of egregious mislabeling, even when the products in question truly do include artificial intelligence technologies.

AI mischaracterization is one of the three major problems in the AI market, as highlighted by Gartner recently. More than 1,000 vendors with applications and platforms describe themselves as artificial intelligence products vendors, or say they employ AI in their products, according to the research firm. It’s a practice Gartner calls “AI washing” — similar to the cloudwashing and greenwashing that have become prevalent over the years as businesses exaggerate their association with cloud computing and environmentalism.

AI goes beyond machine learning

When a technology is labelled AI, the vendor must provide information that makes it clear how AI is used as a differentiator and what problems it solves that can’t be solved by other technologies, explained Jim Hare, a research VP at Gartner, who focuses on analytics and data science.

You have to go in with the assumption that it isn’t AI, and the vendor has to prove otherwise.
Jim Hare, research VP, Gartner

“You have to go in with the assumption that it isn’t AI, and the vendor has to prove otherwise,” Hare said. “It’s like the big data era — where all the vendors say they have big data — but on steroids.”

“What I’m seeing is that anything typically called machine learning is now being labelled AI, when in reality it is weak or narrow AI, and it solves a specific problem,” he said.

IT buyers must hold the vendor accountable for its claims by asking how it defines AI and requesting information about what’s under the hood, Hare said. Customers need to know what makes the product superior to what is already available, with support from customer case studies. Also, Hare urges IT buyers to demand a demonstration of artificial intelligence products using their own data to see them in action solving a business problem they have.

Beyond that, a vendor must share with customers the AI techniques it uses or plans to use in the product and their strategy for keeping up with the quickly changing AI market, Hare said.

The second problem Gartner highlights is that machine learning can address many of the problems businesses need to solve. More complicated types of AI, such as deep learning, get so much hype that businesses overlook simpler approaches.

“Many companies say to me, ‘I need an AI strategy’ and [after hearing their business problem] I say, ‘No you don’t,'” Hare said.

“Really, what you need to look for is a solution to a problem you have, and if machine learning does it, great,” Hare said. “If you need deep learning because the problem is too gnarly for classic ML, and you need neural networks — that’s what you look for.”

Don’t use AI when BI works fine

When to use AI versus BI tools was the focus of a spring TDWI Accelerate presentation led by Jana Eggers, CEO of Nara Logics, a Cambridge, Mass., company that describes its “synaptic intelligence” approach to AI as the combination of neuroscience and computer science.

BI tools use data to provide insights through reporting, visualization and data analysis, and people use that information to answer their questions. Artificial intelligence differs in that it’s capable of essentially coming up with solutions to problems on its own, using data and calculations.

Companies that want to answer a specific question or problem should use business analytics tools. If you don’t know the question to ask, use AI to explore data openly, and be willing to consider the answers from many different directions, she said. This may involve having outside and inside experts comb through the results, perform A/B testing, or even outsource via platforms such as Amazon’s Mechanical Turk.
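The A/B testing step Eggers mentions can be as plain as a two-proportion z-test comparing how often each variant succeeds; a stdlib-only sketch (the sample numbers are invented):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Standard two-proportion z-test -- the kind of check an A/B test
    of AI-suggested answers might use."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Variant A converts 120/1000 users; variant B converts 90/1000.
z = two_proportion_z(120, 1000, 90, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 5% level
```

This is how "being willing to consider the answers" stays disciplined: an AI-surfaced pattern is kept only if it survives a test like this on fresh data.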

With an AI project, you know your objectives and what you are trying to do, but you are open to finding new ways to get there, Eggers said.

AI isn’t easy

A third issue plaguing AI is that companies don’t have the skills on staff to evaluate, build and deploy it, according to Gartner. Over 50% of respondents to Gartner’s 2017 AI development strategies survey said the lack of necessary staff skills was the top challenge to AI adoption. That statistic appears to coincide with the data scientist supply and demand problem.

Companies surveyed said they are seeking artificial intelligence products that can improve decision-making and process automation, and most prefer to buy one of the many packaged AI tools rather than build one themselves. That brings IT buyers back to the first problem, AI washing: it’s difficult to know which artificial intelligence products truly deliver AI capabilities and which ones are mislabeled.

After determining a prepackaged AI tool provides enough differentiation to be worth the investment, IT buyers must be clear on what is required to manage it, Hare said. What human services are needed to change code and maintain models over the long term? Is it hosted in a cloud service and managed by the vendor, or does the company need knowledgeable staff to keep it running?

“It’s one thing to get it deployed, but who steps in to tweak and train models over time?” he said. “[IBM] Watson, for example, requires a lot of work to stand up and you need to focus the model to solve a specific problem and feed it a lot of data to solve that problem.”

Companies must also understand the data and compute requirements to run the AI tool, he added; GPUs may be required and that could add significant costs to the project. And cutting-edge AI systems require lots and lots of data. Storing that data also adds to the project cost.