Microsoft researchers have created technology that uses artificial intelligence to read a document and answer questions about it about as well as a person. It’s a major milestone in the push to have search engines such as Bing and intelligent assistants such as Cortana interact with people and provide information in more natural ways, much like people communicate with each other.
A team at Microsoft Research Asia reached the human parity milestone using the Stanford Question Answering Dataset, known among researchers as SQuAD. It’s a machine reading comprehension dataset that is made up of questions about a set of Wikipedia articles.
According to the SQuAD leaderboard, on Jan. 3, Microsoft submitted a model that reached a score of 82.650 on the exact match portion, edging past the human performance of 82.304 on the same set of questions and answers. On Jan. 5, researchers with the Chinese e-commerce company Alibaba submitted a score of 82.440, also just above the human benchmark.
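The “exact match” metric behind those scores is easy to state: a prediction counts only if, after light normalization, it is identical to one of the human-written answers. Here is a minimal sketch in Python, simplified relative to the official SQuAD evaluation script (which also reports an F1 score):

```python
import re
import string

def normalize(text):
    # Lowercase, strip punctuation and articles, and collapse whitespace,
    # mirroring the spirit of SQuAD's answer normalization.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold_answers):
    # 1 if the normalized prediction equals any normalized gold answer.
    return int(any(normalize(prediction) == normalize(g) for g in gold_answers))

def em_score(predictions, gold):
    # Percentage of questions answered with an exact match (e.g. 82.650).
    return 100.0 * sum(exact_match(p, g) for p, g in zip(predictions, gold)) / len(gold)
```

A leaderboard score like 82.650 is this percentage computed over the hidden test set.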
The two companies are currently tied for first place on the SQuAD leaderboard, which lists the results of research organizations’ efforts.
Microsoft has made a significant investment in machine reading comprehension as part of its effort to create more technology that people can interact with in simple, intuitive ways. For example, instead of typing in a search query and getting a list of links, Microsoft’s Bing search engine is moving toward efforts to provide people with more plainspoken answers, or with multiple sources of information on a topic that is more complex or controversial.
With machine reading comprehension, researchers say computers also would be able to quickly parse through information found in books and documents and provide people with the information they need most in an easily understandable way.
That would let drivers more easily find the answer they need in a dense car manual, saving time and effort in tense or difficult situations.
These tools also could let doctors, lawyers and other experts more quickly get through the drudgery of combing large documents for specific medical findings or rarefied legal precedent. The technology would augment their work, leaving them more time to apply that knowledge to treating patients or formulating legal opinions.
Microsoft is already applying earlier versions of the models it submitted to the SQuAD leaderboard in its Bing search engine, and the company is working on applying them to more complex problems.
For example, Microsoft is working on ways that a computer can answer not just an original question but also a follow-up. Let’s say you asked a system, “What year was the prime minister of Germany born?” You might want it to understand that you were still talking about the same person when you asked the follow-up question, “What city was she born in?”
It’s also looking at ways that computers can generate natural answers when that requires information from several sentences. For example, if the computer is asked, “Is John Smith a U.S. citizen?,” that information may be based on a paragraph such as, “John Smith was born in Hawaii. That state is in the U.S.”
Ming Zhou, assistant managing director of Microsoft Research Asia, said the SQuAD dataset results are an important milestone, but he noted that, overall, people are still much better than machines at comprehending the complexity and nuance of language.
“Natural language processing is still an area with lots of challenges that we all need to keep investing in and pushing forward,” Zhou said. “This milestone is just a start.”
Allison Linn is a senior writer at Microsoft. Follow her on Twitter.
Newly exposed information showed that the National Security Agency’s Ragtime intelligence gathering program was bigger than was previously thought and may have included processes targeting Americans.
Part of the cache of NSA data left exposed on an unsecured cloud server included files regarding the NSA Ragtime intelligence gathering operations. Before this leak, there were four known variants of Ragtime, the most well-known of which was Ragtime P — revealed by Edward Snowden — authorizing the bulk collection of mobile phone metadata.
The exposed data was found by Chris Vickery, director of cyber risk research at UpGuard, and the new NSA Ragtime information was first reported by ZDNet. According to ZDNet, the leaked data mentioned 11 different variants of the Ragtime program, including Ragtime USP.
This raised concern because “USP” is a term known in the intelligence community to mean “U.S. person.” Targeting U.S. citizens and permanent residents in surveillance programs is illegal, but as in the case of Ragtime P, the NSA has contended it “incidentally” collected information on Americans as part of operations targeting foreign nationals.
As yet, it is unclear what the NSA Ragtime USP program entailed or what the exposed data repository included.
An UpGuard spokesperson said, “Within the repository was data that mentioned the four known Ragtime programs, including Ragtime P, which is known to target Americans, and seven previously unknown programs, including one called USP. We have no evidence beyond this, as far as I know, about Ragtime.”
Rebecca Herold, CEO of Privacy Professor, said it is possible the NSA targeted Americans, but it could be nothing more than the repository of data “incidentally” collected in other operations.
“While the stated purpose is to capture the communications of foreign nationals, the reality is that individuals who engage, or are brought into a conversation by others, are now subject to having their communications also collected, monitored and analyzed,” Herold told SearchSecurity. “So while there are different versions of Ragtime described, and only one or two that describes U.S. citizens’ and residents’ data being involved, the reality is that, based on the descriptions, all of the versions of Ragtime could easily involve U.S. residents’ and citizens’ data. This incidental collection is a result of how the Ragtime versions are publicly described as being engineered.”
The NSA Ragtime P metadata collection was ruled illegal by U.S. courts, but intelligence agencies were allowed to keep the data already acquired. Herold said the recent data exposures by government agencies should lead to revisiting that decision, adding that “another problem that has never been addressed through these surveillance programs is data retention.”
“Only entities that have accountability for implementing strong security controls, and establishing effective privacy controls, should be allowed to hold such gigantic repositories that contain such large amounts of sensitive and privacy-impacting data,” Herold said. “This would likely need to be an objective, validated, and non-partisan entity, with ongoing audit oversight. The NSA has not demonstrated any of these accountabilities or capabilities to date, and the majority of government lawmakers have long enabled the NSA’s lack of security and privacy controls.”
Artificial intelligence is the new electricity, said deep learning pioneer Andrew Ng. Just as electricity transformed every major industry a century ago, AI will give the world a major jolt. Eventually.
For now, 99% of the economic value created by AI comes from supervised learning systems, according to Ng. These algorithms require human teachers and tremendous amounts of data to learn. It’s a laborious but proven process.
AI algorithms, for example, can now recognize images of cats, although they required thousands of labeled images of cats to do so; and they can understand what someone is saying, although leading speech recognition systems needed 50,000 hours of speech — and their transcripts — to do so.
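Supervised learning’s dependence on labeled data is visible even in a toy example. The sketch below, a hypothetical nearest-centroid classifier rather than any production system, learns only from (features, label) pairs; without the human-supplied labels there is nothing to train on:

```python
def train_centroids(examples):
    """Supervised learning in miniature: learn one centroid per label
    from (features, label) pairs. No labels means no model."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    # Average the feature vectors seen for each label.
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, features):
    # Classify a new example by its nearest learned centroid.
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda lbl: sq_dist(centroids[lbl]))
```

Real image or speech models are vastly more complex, but the asymmetry is the same: the trained artifact is small and copyable, while the labeled examples are the scarce input.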
Ng’s point is that data is the competitive differentiator for what AI can do today — not algorithms, which, once trained, can be copied.
“There’s so much open source, word gets out quickly, and it’s not that hard for most organizations to figure out what algorithms organizations are using,” said Ng, an AI thought leader and an adjunct professor of computer science at Stanford University, at the recent EmTech conference in Cambridge, Mass.
His presentation gave attendees a look at the state of the AI era, as well as the four characteristics he believes will be a part of every AI company, which include a revamp of job descriptions.
So data is vital in today’s AI era, but companies don’t need to be a Google or a Facebook to reap the benefits of AI. All they need is enough data upfront to get a project off the ground, Ng said. That starter data will attract customers who, in turn, will create more data for the product.
“This results in a positive feedback loop. So, after a period of time, you might have enough data yourself to have a defensible business,” said Ng.
A couple of his students at Stanford did just that when they launched Blue River Technology, an ag-tech startup that combines computer vision, robotics and machine learning for field management. The co-founders started with lettuce, collecting images and putting together enough data to get lettuce farmers on board, according to Ng. Today, he speculated, they likely have the world’s largest data asset of lettuce imagery.
“And this actually makes their business, in my opinion, pretty defensible because even the global giant tech companies, as far as I know, do not have this particular data asset, which makes their business at least challenging for the very large tech companies to enter,” he said.
Turns out, that data asset is actually worth hundreds of millions: John Deere acquired Blue River for $300 million in September.
“Data accumulation is one example of how I think corporate strategy is changing in the AI era, and in the deep learning era,” he said.
While it’s too soon to tell what successful AI companies will look like, Ng suggested another corporate disruptor might provide some insight: the internet.
One of the lessons Ng learned with the rise of the internet was that companies need more than a website to be an internet company. The same, he argued, holds true for AI companies.
“If you take a traditional tech company and add a bunch of deep learning or machine learning or neural networks to it, that does not make it an AI company,” he said.
Internet companies are architected to take advantage of internet capabilities, such as A/B testing, short cycle times to ship products, and decision-making that’s pushed down to the engineer and product level, according to Ng.
AI companies will need to be architected to do the same in relation to AI. What A/B testing’s equivalent will be for AI companies is still unknown, but Ng shared four thoughts on characteristics he expects AI companies will share.
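For reference, the A/B testing Ng cites is itself a small statistical exercise. A minimal two-proportion z-test using only the Python standard library (the function name and significance threshold are illustrative, not from any particular company's tooling):

```python
from math import sqrt, erf

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert differently from A?
    Returns the z statistic and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    # Standard error under the null hypothesis of equal rates.
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

For example, 100 conversions out of 1,000 visitors versus 150 out of 1,000 yields a p-value well under 0.05, so the difference is unlikely to be noise. What the analogous fast experiment loop looks like for AI products is, as Ng notes, still an open question.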
Artificial intelligence isn’t just for the law-abiding. Machine learning algorithms are as freely available to cybercriminals and state-sponsored actors as they are to financial institutions, retailers and insurance companies.
“When we look especially at terrorist groups who are exploiting social media, [and] when we look at state-sponsored efforts to influence and manipulate, they’re using really powerful algorithms that are at everyone’s disposal,” said Yasmin Green, director of research and development at Jigsaw, a technology incubator launched by Google to try to solve geopolitical problems.
Criminals need not develop new algorithms or new AI, Green said at the recent EmTech conference in Cambridge, Mass. They can and do exploit what is already out there to manipulate public opinion.
The good news about weaponized AI? The tools to combat these nefarious efforts are also advancing. One promising lead, according to Green, is bad actors don’t exhibit the same kinds of online behavior that typical users do. And security experts are hoping to exploit the behavioral “tells” they’re seeing — with the help of machines, of course.
Cybercriminals and internet trolls are adept at using AI to simulate human behavior and trick systems or peddle propaganda. The online test used to tell humans from machines, CAPTCHA, is continuously bombarded by bad guys trying to trick it.
In an effort to stay ahead of cybercriminals, CAPTCHA, which stands for Completely Automated Public Turing Test to Tell Computers and Humans Apart, has had to evolve, creating some unanticipated consequences, according to Shuman Ghosemajumder, CTO at Shape Security in Mountain View, Calif. Recent data from Google shows that humans solve CAPTCHAs just 33% of the time. That’s compared to state-of-the-art machine learning optical character recognition technology that has a solve rate of 99.8%.
“This is doing exactly the opposite of what CAPTCHA was originally intended to do,” Ghosemajumder said. “And that has now been weaponized.”
He said advances in computer vision technology have led to weaponized AI services such as Death By CAPTCHA, an API plug-in that promises to solve 1,000 CAPTCHAs for $1.39. “And there are, of course, discounts for gold members of the service.”
A more aggressive attack is credential stuffing, where cybercriminals use stolen usernames and passwords from third-party sources to gain access to accounts.
Sony was the victim of a credential-stuffing attack in 2011. Cybercriminals culled a list of 15 million credentials stolen from other sites and then tested if they worked on Sony’s login page using a botnet. Today, an outfit by the good-guy-sounding name of Sentry MBA — the MBA stands for Modded By Artists — provides cybercriminals with a user interface and automation technology, making it easy to test the validity of stolen usernames and passwords and even to bypass security features like CAPTCHAs.
“We see these types of attacks responsible for tremendous amounts of traffic on some of the world’s largest websites,” Ghosemajumder said. In the case of one Fortune 100 company, credential-stuffing attacks made up more than 90% of its login activity.
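The statistical signature of credential stuffing (many login attempts, almost all failures, spread across many distinct usernames from one source) can be flagged with even simple heuristics. A sketch with illustrative thresholds, not drawn from Shape Security's actual product:

```python
from collections import defaultdict

def flag_credential_stuffing(login_events, min_attempts=20, max_success_rate=0.05):
    """Flag source IPs whose login pattern looks like credential stuffing:
    a large volume of attempts, a near-zero success rate, and many
    distinct usernames tried. login_events: (ip, username, success) tuples."""
    attempts = defaultdict(lambda: {"total": 0, "ok": 0, "users": set()})
    for ip, username, success in login_events:
        rec = attempts[ip]
        rec["total"] += 1
        rec["ok"] += success
        rec["users"].add(username)
    flagged = []
    for ip, rec in attempts.items():
        success_rate = rec["ok"] / rec["total"]
        if (rec["total"] >= min_attempts
                and success_rate <= max_success_rate
                and len(rec["users"]) >= min_attempts // 2):
            flagged.append(ip)
    return flagged
```

Real defenses add device fingerprinting and distributed-botnet correlation, since attackers spread attempts across many IPs precisely to evade per-IP counters like this one.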
Ghosemajumder’s firm Shape Security is now using AI to detect credential-stuffing efforts. One method is to use machine learning to identify behavioral characteristics that are typical of cybercriminal exploits.
When cybercriminals simulate human interactions, they will, for example, move the mouse from the username field to the password field quickly and efficiently — in an unhumanlike manner. “Human beings are not capable of doing things like moving a mouse in a straight line — no matter how hard they try,” Ghosemajumder said.
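One way to quantify that straight-line tell is path linearity: the ratio of straight-line distance to total distance traveled. A machine-driven cursor that jumps directly between fields scores exactly 1.0; human traces wobble. A sketch (the threshold is an assumption for illustration, not Shape Security's actual detector):

```python
from math import hypot

def path_linearity(points):
    """Ratio of straight-line distance to total path length for a mouse
    trace given as (x, y) points. Exactly 1.0 means a perfectly straight path."""
    total = sum(hypot(x2 - x1, y2 - y1)
                for (x1, y1), (x2, y2) in zip(points, points[1:]))
    direct = hypot(points[-1][0] - points[0][0], points[-1][1] - points[0][1])
    return direct / total if total else 1.0

def looks_automated(points, threshold=0.999):
    # Humans essentially never produce perfectly efficient paths.
    return path_linearity(points) >= threshold
```

Production systems would combine many such behavioral features (timing, acceleration, keystroke cadence) rather than rely on any single one.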
Jigsaw’s Green said her team is also looking for “technical markers” that can distinguish truly organic campaigns from coordinated ones. She described state-sponsored actors who peddle propaganda and attempt to spread misinformation through what she called “seed-and-fertilizer campaigns.”
Yasmin Greendirector of research and development, Jigsaw
“The goal of these state-sponsored campaigns is to plant a seed in social conversations and to have the unwitting masses fertilize that seed for it to actually become an organic conversation,” she said.
“There are a few dimensions that we think are promising to look at. One is the temporal dimension,” she said.
Looking across the internet, Jigsaw began to understand that coordinated attacks tend to move together, last longer than organic campaigns and pause as state-sponsored actors waited for instructions on what to do. “You’ll see a little delay before they act,” she said.
Other dimensions include network shape and semantics. State-sponsored actors tend to be more tightly linked together than communities within organic campaigns, and they tend to use “irregularly similar” language in their messaging.
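The temporal dimension Green describes can be approximated by measuring how often two accounts post within the same short window. A simplified sketch (the window and threshold are illustrative; Jigsaw's actual methods are not public):

```python
from itertools import combinations

def synchrony_score(times_a, times_b, window=60):
    """Fraction of account A's post timestamps (in seconds) that land
    within `window` seconds of one of account B's posts."""
    hits = sum(any(abs(t - u) <= window for u in times_b) for t in times_a)
    return hits / len(times_a)

def coordinated_pairs(accounts, window=60, threshold=0.8):
    """Return pairs of accounts whose posting times move together,
    a crude proxy for centrally instructed behavior."""
    flagged = []
    for (name_a, t_a), (name_b, t_b) in combinations(accounts.items(), 2):
        if synchrony_score(t_a, t_b, window) >= threshold:
            flagged.append((name_a, name_b))
    return flagged
```

The network-shape and semantic dimensions would add graph clustering and text-similarity features on top of a temporal signal like this.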
The big question is whether behavioral tells, identified by machines and combined with automated detection, can reliably flag state-sponsored campaigns. No doubt, time will tell.
A fresh wave of artificial intelligence rolling through Microsoft’s language translation technologies is bringing more accurate speech recognition to more of the world’s languages and higher quality machine-powered translations to all 60 languages supported by Microsoft’s translation technologies.
The advances were announced at Microsoft Tech Summit Sydney in Australia on November 16.
“We’ve got a complex machine, and we’re innovating on all fronts,” said Olivier Fontana, the director of product strategy for Microsoft Translator, a platform for text and speech translation services. As the wave spreads, he added, these machine translation tools are allowing more people to grow businesses, build relationships and experience different cultures.
Microsoft’s research labs around the world are also building on top of these technologies to help people learn how to speak new languages, including a language learning application for non-native speakers of Chinese that also was announced at this week’s tech summit.
The new Microsoft Translator advances build on last year’s switch to deep neural network-powered machine translations, which offer more fluent, human-sounding translations than the predecessor technology known as statistical machine translation.
Both methods involve training algorithms using professionally translated documents, so the system can learn how words and phrases in one language are represented in another language. The statistical method, however, is limited to translating a word within the local context of a few surrounding words, which can lead to clunky and stilted translations.
Neural networks are loosely inspired by theories about how pattern recognition works in the brains of multilingual humans, leading to more natural-sounding translations.
Microsoft recently switched 10 more languages to neural network-based models for machine translation, for a total of 21. The neural network-powered translations show between 6 percent and 43 percent improvement in accuracy depending on language pairs, according to an automated evaluation metric for machine translation known as the bilingual evaluation understudy, or BLEU, score.
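BLEU compares machine output against reference translations by counting overlapping n-grams. A simplified sentence-level version (the scores Microsoft cites are typically computed corpus-level and with smoothing; this sketch omits both):

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    # Multiset of all length-n token windows.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified BLEU: geometric mean of modified n-gram precisions
    (n = 1..max_n) times a brevity penalty for too-short candidates."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())  # clipped counts
        total = sum(cand.values())
        if total == 0 or overlap == 0:
            return 0.0  # unsmoothed: any zero precision zeroes the score
        precisions.append(overlap / total)
    bp = 1.0 if len(candidate) > len(reference) else exp(1 - len(reference) / len(candidate))
    return bp * exp(sum(log(p) for p in precisions) / max_n)
```

Improvements like the 6 to 43 percent figures above are relative gains in this kind of score between the statistical and neural systems, per language pair.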
“Over the last year, we have been rolling out to more languages, we have been making the models more complex and deeper, so we have much better quality,” said Arul Menezes, general manager of the Microsoft AI and Research machine translation team. He added that the neural network-powered translations for Hindi and Chinese, two of the world’s most popular languages, are available by default to all developers using Microsoft’s translation services.
For a machine, the process of translating from one language to the next is broken down into several steps; each step has a stake in the quality of the translation. In the case of translating what a person speaks in one language, the first step is speech recognition, which is the process of converting spoken words into text.
All languages supported by Microsoft speech translation technologies now use a type of AI called long short-term memory for speech recognition, which, together with additional training data, has improved quality by up to 29 percent over deep neural network models for conversational speech.
“When you do speech translation, you first do speech recognition and then you do translation,” explained Menezes. “So, if you have an error in speech recognition, then that effect is going to be amplified at the next step because if you misrecognize a word, then the translation is going to be incomprehensible.”
The second step of machine translation converts the text from one language to the next, which Microsoft does with neural network-based models for 21 languages. The improvement in quality is apparent even when only one of the two languages is supported by a neural network-based model, thanks to an approach that routes the translation through English.
Consider, for example, a person who wants to translate from Dutch to Catalan. Dutch is newly supported by neural networks; engineers are still working on the neural network support infrastructure for Catalan. End users will notice an improvement in the Dutch-to-Catalan translation using this hybrid approach because half of the pipeline is better, noted Menezes.
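The pivot approach is essentially function composition: translate source to English with one system, then English to target with another. A toy illustration, with word-lookup "translators" standing in for real MT models (the word tables are invented for the example):

```python
# Toy word-for-word "translators" standing in for real MT systems.
NL_EN = {"hallo": "hello", "wereld": "world"}
EN_CA = {"hello": "hola", "world": "món"}

def make_lookup(table):
    # Build a trivial translator that maps known words and passes others through.
    return lambda text: " ".join(table.get(w, w) for w in text.split())

def pivot_translate(text, src_to_en, en_to_tgt):
    """Compose a source→English system with an English→target system.
    The two halves can use different engines, e.g. a neural model for
    Dutch→English and a statistical one for English→Catalan."""
    return en_to_tgt(src_to_en(text))
```

The upside is coverage for every language pair without building n² direct systems; the downside is that errors in the first half propagate into the second.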
In the final step of speech translation, the translated text is synthesized into voice via text-to-speech synthesis technology. Here, too, speech and language researchers are making advances that produce more accurate and human-sounding synthetic voices. These improvements translate to higher quality experiences across Microsoft’s existing translation services as well as open the door to new language learning features.
For example, if you really want to learn to speak a foreign language, everyone knows that practice is essential. The challenge is to find someone with the time, patience and skill to help you practice pronunciation, vocabulary and grammar.
For people learning Chinese, Microsoft is aiming to fill that void with a new smartphone app that can act as an always available, artificially intelligent language-learning assistant. The free Learn Chinese app is launching soon on Apple’s iOS platform.
The app aims to solve a problem that is familiar to any language learner who has spent countless hours in crowded classrooms listening to teachers, watching language-learning videos at home or flipping through stacks of flashcards to master vocabulary and grammar — only to feel woefully underprepared for real-world conversations with native speakers.
“You think you know Chinese, but if you meet a Chinese person and you want to speak Chinese, there is no way you can do it if you have not practiced,” explained Yan Xia, a senior development lead at Microsoft Research Asia in Beijing. “Our application addresses this issue by leveraging our speech technology.”
The application is akin to a teacher’s assistant, noted Frank Soong, principal researcher and research manager of the Beijing lab’s speech group, which developed the machine-learning models that power Learn Chinese as well as Xiaoying, a chatbot for learning English that the lab deployed in 2016 on the WeChat platform in China.
“Our application isn’t a replacement for good human teachers,” said Soong. “But it can assist by being available any time an individual has the desire or the time to practice.”
The language learning technology relies on a suite of AI tools, such as deep neural networks, that have been tuned by Soong’s group to recognize what language learners are trying to say and evaluate the speakers’ pronunciation, rhythm and tone. The evaluations are based on a comparison with models trained on data from native speakers, as well as on the lab’s state-of-the-art text-to-speech synthesis technology.
When individuals use the app, they get feedback in the form of scores, along with highlighted words that need improvement and links to sample audio to hear the proper pronunciation. “The app will work with you as a language learning partner,” said Xia. “It will respond to you and give you feedback based on what you are saying.”
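Microsoft has not published how the app computes its scores, but a classic way to compare a learner's audio features against a native speaker's, independent of speaking rate, is dynamic time warping. A sketch on 1-D feature sequences such as pitch contours (the 0-100 scoring scale is invented for illustration):

```python
def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two 1-D feature sequences.
    Lower means a closer match, regardless of how fast each was spoken."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            # Best alignment extends a match, an insertion, or a deletion.
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

def pronunciation_score(learner, native, scale=10.0):
    """Map DTW distance to a 0-100 score (hypothetical scaling)."""
    return max(0.0, 100.0 - scale * dtw_distance(learner, native) / len(native))
```

Because DTW aligns sequences elastically, a learner who says the same contour more slowly still scores a perfect match; only deviations in the contour itself lower the score.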
The Learn Chinese application and Microsoft’s core language translation services are powered by machine intelligence running in the cloud. This allows people the flexibility and convenience to access these services anywhere they have an internet connection, such as a bus stop, restaurant or conference center.
For clients with highly sensitive translation needs or who require translation services where internet connections are unavailable, Microsoft is now offering neural network-powered translations for its on-premises servers. The development, Fontana noted, is one more example of how “the AI wave is advancing and reaching more and more places and more and more languages.”
John Roach writes about Microsoft research and innovation. Follow him on Twitter.
As a signals intelligence analyst in the United States Marine Corps, Solaire Sanderson was deployed twice to Afghanistan and had plans to make a lasting career out of her service.
But injuries to both feet, which required surgeries to realign and remove crushed bones, convinced Sanderson that she probably wasn’t going to be able to maintain the rigorous lifestyle that the Marine Corps demanded.
Originally from Palm Coast, Fla., Sanderson was in the Marines for six years, stationed at Camp Pendleton, Calif., from 2010 to 2016.
“I loved being a Marine, and I love everything the Marine Corps stands for,” said Sanderson, who is our latest Geek of the Week.
“One of my responsibilities was to perform cyber threat analysis, which I quickly became passionate about. While serving on active duty, I attended the American Military University to pursue a Bachelor of Science degree in Cybersecurity.”
During her recovery from surgery, in the last six months of her time in the Marines, Sanderson enrolled in the Microsoft Software & Systems Academy (MSSA), an intensive 18-week course that provides transitioning service members and veterans with critical career skills required for today’s growing technology industry.
“Upon completion of the course, I went through the interview process and accepted a position as a security analyst in Microsoft’s Cyber Defense Operations Center. Though the military and the corporate worlds are completely different, they have, at least, one thing in common: they both have adversaries. I am happy to work with an incredible team of defenders and responders, who are persistent and determined to defend against technology’s dark side.”
Learn more about this week’s Geek of the Week, Solaire Sanderson:
What do you do, and why do you do it? “I am a security analyst in the Cyber Defense Operations Center at Microsoft. I do it because, as corny as it sounds, I truly believe in Microsoft’s mission statement: to empower every person and every organization on the planet to achieve more. Microsoft has such a far-reaching impact on the world — millions of home PCs, government agencies, school systems, etc. all rely and operate on Windows operating systems, the Office suite, and various other Microsoft innovations. As a Marine, I loved the feeling of being a part of something greater than myself. So, it is awesome to have a similar feeling while working at Microsoft.”
Where do you find your inspiration? “Children are constantly told that they can be whatever they want to be when they grow up — and I believe that to be true (within reason, of course). I try to apply the same mentality to my adult life — I can do whatever I want to do or learn whatever I want to learn, if I apply myself. I have always found inspiration in watching the world evolve around me. There are always new technologies, computer languages, cyber threats, etc., and I don’t like the feeling of not knowing. So, when I find a topic that I am not familiar with, I force myself to spend time trying to understand it and become comfortable with it. Through this effort, I have found inspiration from discovering new ways of doing things or by stumbling upon needs I didn’t know existed.”
What’s the one piece of technology you couldn’t live without, and why? “As sad as it is to admit, my smartphone. Over the last decade, smartphones have become the new Swiss army knife — they do a little bit of everything. I can make phone calls, text message, video chat, play games, take photos, make deposits into my bank account, check my emails, browse the internet, make purchases, listen to music, watch television, and the list goes on. It’s incredible.”
What’s your workspace like, and why does it work for you? “My team and I operate in an open workspace, just like many Security Operations Centers (SOCs) at various other companies. Within our SOC, we bring together experts from different security teams across Microsoft to help protect, detect, and respond to threats in real time. I am a huge proponent for an open workspace, as it promotes collaboration, versatility, and brainstorming. I love being able to spin my chair around and bounce ideas off of my co-workers.”
Your best tip or trick for managing everyday work and life. (Help us out, we need it.) “I have found that having a routine is key. Life is full of unexpected surprises, which are much easier to manage when everything else is beating to some sort of rhythm. For example, I make sure to work out at 6:15 every morning, just in case a big security incident breaks out at work and I end up working late. At work, my routine allows me to dedicate time to getting important tasks done while also leaving room for anomalies. I am also a big believer in organization – there should be a place for everything and everything should be in its place. Organization just makes life easier and prettier.”
Mac, Windows or Linux? “I have an affinity for all, but I certainly have my hands in Windows more often … ya know, working for Microsoft and all :)”
Kirk, Picard, or Janeway? “I prefer Aram Mojtabai from ‘The Blacklist.’”
Transporter, Time Machine or Cloak of Invisibility? “Time Machine — definitely!”
If someone gave me $1 million to launch a startup, I would … “Buy a piece of land in a remote area of Washington to set up a suite of cabins that each have amazing lake and mountain views, lightning fast wifi, fireplaces, unlimited coffee, and an acre of land between each. Life is so busy and it’s easy to get caught up in the hustle and bustle of everyday life. I would love to create a serene retreat for writers, tech nerds, or anyone else that needs a place to be productive or simply revive themselves. There would be an application process, in an effort to maintain a barrier to entry so that loud, obnoxious partiers don’t slip through the cracks. I would like to think that this kind of break from the day-to-day would give people the time they need to recharge before going back to work to create the next big thing.”
I once waited in line for … “I am not a big fan of waiting in line — I always do my research ahead of time and find a way to have things delivered to my doorstep.”
Your role models: “This has always been a difficult question for me, simply because I don’t have a particular person in mind, but rather a type of person — anyone who has sought and found work that they are passionate about, who has shed blood, sweat, and tears to be effective at it. OK, that sounds a little dramatic — but I do believe blood, sweat, and tears come naturally to anyone with a true appetite for success, whatever their personal measure of success is.”
Greatest game in history: “Duck Hunt.”
Best gadget ever: “Raspberry Pi.”
First computer: “Wow, great question. I believe we used the Apple IIe in elementary school, but our first home computer was a HP 712. Look how far we’ve come!”
Current phone: “iPhone 7 Plus.”
Favorite app: “*cough* Pokemon Go *cough*”
Favorite cause: “Wounded Warrior Project.”
Most important technology of 2016: “Blockchain. Although Blockchain wasn’t created in 2016, there was a surge of interest surrounding its capabilities in 2016. Since blockchain (the technology behind bitcoin) is a decentralized and dispersed digital ledger, almost 50 top financial institutions began investigating how blockchain can track their assets, cut costs, and accelerate transactions — all while reducing the risk of fraud. While I believe it will still take many years for blockchain to become fully submerged in our economic, social, and/or political infrastructures, it is taking the world by storm. However, it is prudent to be vigilant when dealing with blockchain, as the legality behind blockchain and its applications remains suspicious in countries like China, India, and Russia. This is a particularly interesting topic for me, as the implementation and spread of blockchain technology ensures job security for me, haha.”
Most important technology of 2018: “Intelligent things. Between AI and machine learning, there’s no telling how cohesive and interactive the technology around us will become. We have self-driving vehicles; smart home devices that help us set alarms, schedule appointments, create shopping lists, and provides us with a vast array of information from the internet; Netflix and Pandora, which save our preferences and predict accurate recommendations based on those preferences; and the list goes on. 2018 is set to deliver even more intelligent technology that will use behavioral algorithms to predictively learn from our behaviors and anticipate our needs. I can’t wait — we are about to be the real-life Jetsons. Alexa is my Rosey the Robot.”
As artificial intelligence becomes ubiquitous in the 21st century, IT ops pros have begun to wonder if and when they’ll be automated out of a job.
To some extent, it’s already begun to happen.
IT infrastructures are already automated and self-scaling, and self-healing systems are not far off as container orchestration tools mature. A cascade of IT monitoring tools with artificial intelligence and machine learning capabilities has hit the market, promising to tie IT infrastructure monitoring to automated provisioning, deployment and incident response systems. IT pros can certainly be forgiven for asking if the future of DevOps has room for any humans.
The answer to that question, in these early stages of AIOps, depends on whom you ask. Most IT vendors are jumping wholeheartedly into an AI arms race, to the point where Microsoft and Amazon have declared that AI will be the crux of their future competitive advantage. The tech giants are so serious about the emerging field that they even collaborated on an open source deep learning interface called Gluon. Google is also focused on AI, from its internal IT processes to its TensorFlow AI services.
The future of DevOps products will be AI-driven. Continuous integration and continuous delivery pipeline tools such as Electric Cloud 8.0 now incorporate machine learning and data analytics features to optimize DevOps workflows, and tools such as ServiceNow’s Agent Intelligence bring machine learning to IT service ticket routing.
Bleeding-edge DevOps shops already imagine a world of self-provisioning infrastructure to support application deployments.
“Why should I have to set up the resource requirements for each Kubernetes deployment?” said Cole Calistra, CTO of Kairos AR Inc., a provider of human facial recognition and analytics for developers in Miami. “Let the server figure out those limits based on actual historical data and predict what size the cluster should scale up to, based on what it has learned about its operation over time.”
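Calistra's idea — deriving resource settings from historical data rather than hand-tuning them — can be sketched with a toy heuristic. The function, thresholds and sample readings below are illustrative assumptions, not any orchestrator's or vendor's actual algorithm:

```python
import statistics

def suggest_limits(samples_mb, headroom=1.2):
    """Suggest a memory request/limit pair from historical usage:
    request near typical use, limit comfortably above observed peaks."""
    request = statistics.median(samples_mb)
    limit = headroom * max(samples_mb)
    return round(request), round(limit)

# Six hypothetical memory readings (MB) for one deployment
history = [210, 230, 225, 300, 240, 260]
print(suggest_limits(history))  # (235, 360)
```

A production system would look at far longer histories, percentiles rather than a single peak, and per-workload seasonality, but the principle is the same: let observed behavior, not guesswork, set the bounds.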
Many in the IT industry argue this is the only way IT can maintain staggeringly complex infrastructures of containers and virtual networks that support equally complex microservices architectures.
“By the end of 2018, we should see products closing the loop between data feeds from monitoring systems and orchestrators that take action,” said Arvind Soni, VP of product at Netsil, a startup that just emerged from stealth with a tool that automatically maps and monitors Kubernetes infrastructures. “Container environments are so complex that anything else is unsustainable.”
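The "closed loop" Soni describes already has a simple precedent in container platforms: the proportional scaling rule used by autoscalers such as the Kubernetes Horizontal Pod Autoscaler, which sizes a deployment from the ratio of observed to target utilization. A minimal sketch of that rule:

```python
import math

def desired_replicas(current, observed_util, target_util=0.6,
                     min_replicas=1, max_replicas=10):
    """Proportional scaling rule (as in the Kubernetes Horizontal Pod
    Autoscaler): scale replica count by observed/target utilization."""
    raw = math.ceil(current * observed_util / target_util)
    return max(min_replicas, min(max_replicas, raw))

print(desired_replicas(4, 0.9))  # 6 -- CPU running hot, scale out
print(desired_replicas(4, 0.3))  # 2 -- underutilized, scale in
```

AIOps tools aim to go further, replacing the fixed target with one learned from the workload's history, but the actuation side of the loop looks much like this.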
IT ops pros are right to suspect the AI hype portends massive changes to their careers, regardless of whether AI replaces them entirely. And their worst-case scenario, that robots will completely take over their jobs, has some basis in reality.
Companies already have products they claim can reduce the number of human operators needed to manage massive infrastructures. HCL Technologies Ltd, a multinational company based in India, said its ElasticOps product already applies AIOps to help maintain its managed cloud infrastructure service, a 50,000-instance environment, with just 30 engineers. Another managed cloud hosting services company, Datapipe Inc., based in Jersey City, N.J., has incorporated machine learning algorithms and AI-assisted automation into its Trebuchet tool.
“There’s a strong undercurrent of protectionism around operations today in terms of, ‘Don’t automate my job,'” said Patrick McClory, director of automation and DevOps at Datapipe. But that protectionism won’t shore up IT careers against AIOps for long.
“IT operations [is] a target of this, but applications are the thing that adds value to the business — nobody really cares about infrastructure these days,” he said. “Wouldn’t it be cool if we go further up the stack instead of just instrumenting the behavior of these machines, to actually diving into the behavior of the developers working on it?”
If AIOps can realize that vision, humans will be back in the decision seat for a strategic role within companies, rather than caught up in undifferentiated day-to-day maintenance, McClory said.
As with any new technology, the early days of AI have already yielded unintended consequences that send shivers down the spine of a generation raised on movies such as Terminator and The Matrix, which present worst-case scenarios of machine intelligence run amok.
In the real world, Facebook’s R&D staff was forced to pull the plug on an experiment this year when chatbots trained to negotiate with one another drifted into a shorthand language humans couldn’t understand. The development was far from making Skynet a reality, but that didn’t stop media outlets from pointing out the potential.
Early attempts to harness AI for IT management at Google also had real, if less dramatic, unintended consequences, according to Ben Sigelman, who served as senior staff software engineer for the web giant from 2003 to 2012.
“I saw things that correctly predicted almost every failure at Google and incorrectly predicted five times as many that weren’t [accurate],” said Sigelman, who is now CEO and co-founder of LightStep, a startup that specializes in monitoring cloud-native microservices infrastructures. “It’s incredibly powerful technology and can find signals in really noisy voluminous streams of information, but it needs to be the right signal. You shouldn’t take any sort of action unless you’re almost entirely certain that you’re correct.”
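Sigelman's observation translates directly into precision and recall: a model can catch nearly every failure (high recall) while burying operators in false alarms (low precision). With the ratio he cites — five incorrect predictions per correct one — the arithmetic works out like this (the absolute counts are illustrative):

```python
def precision(true_positives, false_positives):
    """Fraction of raised alerts that were real problems."""
    return true_positives / (true_positives + false_positives)

# 100 real failures, all caught (recall ~ 1.0), plus five times as
# many spurious predictions, per the ratio Sigelman describes.
tp, fp = 100, 500
print(f"precision = {precision(tp, fp):.2f}")  # precision = 0.17
```

At one-in-six precision, acting automatically on every prediction would be wrong five times out of six — exactly why Sigelman argues against taking action without near-certainty.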
Discerning the right signals entirely depends on the data with which an AIOps system makes its decisions, and some experts argue that IT monitoring data collected to date isn’t good enough to reinforce critical production environments.
“Data doesn’t speak for itself — we don’t have enough or good enough data to generate models that we can really rely on,” said Neil Raden, analyst at Wikibon. “Unattended machine learning algorithms just sift through data, and who knows if that’s really effective.”
Raden’s colleague at Wikibon, James Kobielus, a former IBM AI evangelist who worked with the Watson AI platform, disputed the notion that AI lacks enough data to go on, but acknowledged that human operators need to train AI algorithms on whether statistical correlations are valuable to the business.
But does reliance on human operators to train AI bring the field back to step one? The value of AI, after all, is to analyze much larger amounts of data than humans can handle, and potentially identify patterns humans can’t.
For enterprises, early experiments in automatically generated IT monitoring alerts resulted in a wall of noise, which human operators quickly stemmed by paring down the number of alerts they received.
“The real question is whether we’ve overtuned it and now it’s keeping some of the hidden gems hidden,” said Nuno Pereira, CTO of IJet International, a risk management company in Annapolis, Md.
The company experienced a near-revolt among its ops team last year when IT monitoring data was hooked up to an automated paging system that flooded them with alerts. “That’s one thing that keeps me up at night,” Pereira said. “Are those needles in the ever-growing haystack being silenced?” As a result, he’s looking at AIOps tools from AppDynamics, which he already uses for other purposes, as well as competitors such as Moogsoft.
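The overtuning Pereira worries about often starts with simple alert deduplication. The sketch below is a hypothetical debounce, not any specific tool's logic; it shows how a suppression window quiets repeats — and why a persistently firing "needle" can stay silenced:

```python
def dedupe(alerts, window=300):
    """Suppress repeats of the same alert key within `window` seconds.
    Note: last_seen updates on every repeat, so an alert that keeps
    firing faster than the window never resurfaces -- a silenced needle."""
    last_seen = {}
    kept = []
    for ts, key in alerts:
        if key not in last_seen or ts - last_seen[key] >= window:
            kept.append((ts, key))
        last_seen[key] = ts
    return kept

stream = [(0, "disk_full"), (60, "disk_full"),
          (120, "oom_kill"), (400, "disk_full")]
print(dedupe(stream))
# [(0, 'disk_full'), (120, 'oom_kill'), (400, 'disk_full')]
```

The design choice of refreshing `last_seen` on every repeat is exactly the double-edged sword: it kills flapping noise, but it can also indefinitely mute a condition that never stops firing.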
In the current furor around AIOps, as with any marketing buzzword, people latch onto a term but find it difficult to locate the signal in the noise, Pereira said. But as the volume of infrastructure data continues to increase, humans will inevitably need digital assistance, and he can’t stop thinking about the needle in the haystack that may lie in wait.
How far can IT automation go? For now, even the most ardent AI supporters concede that only human-supervised AI is practical for use in IT shops in the next five to eight years.
“Our goal is not to just mature the technology and be confident in it from a statistical perspective, but also to work with the people to be more comfortable interacting with it, and to confirm or correct our assumptions around what should be done,” Datapipe’s McClory said.
Netsil’s Soni foresees a “human augmentation” phase for AI that’s already playing out in self-driving cars.
“We have the AI technology for a completely autonomous car, but would you trust it to drive your kids to school tomorrow?” he said. “Probably not. So what we have right now are augmentation features like blind spot and pedestrian warnings. The problem is not the technology, it’s trust.”
IT shops, which already have trouble finding skilled staff, must contend with a paradox as they look to AI to keep up with the future of DevOps. The technology could bridge the gap between overloaded IT staff and extremely complex modern infrastructures, but the skills to develop and train machine learning systems are themselves in short supply.
Credit bureau Experian, for example, is already deeply invested in AI and machine learning, especially in its R&D department, Experian DataLabs. The IT team at Experian also has AIOps on its radar, and the company has bots that automate some of its finance processes, said Barry Libenson, CIO at Experian. But while it’s eager to expand on that, finding people to train AI systems is much easier said than done.
“We’re constrained by the number of people we have with the expertise to do this stuff because it is so new,” Libenson said. “Those skill sets are considerably more difficult to get, and can be considerably more complex than some of the stuff that’s going on in the DevOps area.”
Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at email@example.com or follow @PariseauTT on Twitter.
Artificial intelligence tools are popping up everywhere these days. Considering their copious amounts of data and nonstop customer interactions, contact centers and sales teams in particular seem especially ripe to integrate AI technologies.
Just this week, one longtime unified communications and contact center vendor, Avaya Inc., and a newer contact center software provider, Talkdesk Inc., introduced AI-focused services. Cisco, too, announced this week an AI-powered voice assistant for Cisco Spark meetings.
Avaya announced A.I.Connect, an initiative with technology partners that looks to speed up the development and application of unified communications and contact center AI technologies for Avaya customers. Through the partnership, Avaya said it wants to create a broad set of AI capabilities for digitally based customer service.
The AI tools will be built on and integrated into Avaya Oceana, an omnichannel customer engagement service, and Avaya Breeze, a developer platform for building communications-enabled apps.
Contact center AI can help companies capture and use real-time customer sentiment to improve customer service. It can also help organizations structure vast amounts of data with predictive analytics, delivering information in real time, where it can affect an ongoing customer interaction.
Initially, the A.I.Connect program will focus on five areas:
Seven partners are currently in the A.I.Connect program: Afiniti, Arrow Systems Integration, Cogito, EXP360, Nuance Communications, ScoreData and Sundown AI. More companies are expected to participate in the initiative in the coming months.
In other contact center AI news, Talkdesk has launched a product geared to help inside sales teams make more calls to prospects. The offering, Talkdesk for Sales, features AI tools that contextualize sales calls to provide agents with pertinent customer information.
“Talkdesk for Sales increases the volume of conversations and percentage of conversions by prioritizing who and when to call,” Ken Landoline, principal analyst at London-based Ovum, said in a statement. The service identifies conversational coaching opportunities and provides answers to a prospect’s questions and objections in real time during a conversation.
The product suite includes Talkdesk’s SalesAssist, which uses AI tools and voice analytics to provide sales reps with answers during the course of a call. SalesAssist transcribes conversations in real time, analyzes what is said, finds content from sales playbooks and presents that information to the rep.
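Talkdesk hasn't published how SalesAssist matches transcripts to playbook content, but the retrieval step it describes — find the playbook entry most relevant to what was just said — can be sketched as a toy keyword-overlap ranker. Every function name, trigger and answer below is a made-up assumption:

```python
def best_playbook_answer(snippet, playbook):
    """Rank playbook entries by word overlap with the latest transcript
    snippet and return the top answer, if anything matches at all."""
    words = set(snippet.lower().split())
    def score(entry):
        return len(words & set(entry["trigger"].lower().split()))
    best = max(playbook, key=score)
    return best["answer"] if score(best) > 0 else None

playbook = [
    {"trigger": "price too expensive cost",
     "answer": "Lead with the ROI figures."},
    {"trigger": "competitor comparison feature",
     "answer": "Open the competitive battlecard."},
]
print(best_playbook_answer("that sounds too expensive for us", playbook))
# Lead with the ROI figures.
```

A real system would use speech-to-text plus semantic similarity rather than exact word overlap, but the pipeline shape — transcribe, score against the playbook, surface the best answer mid-call — is the same.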
Talkdesk for Sales integrates with Salesforce and displays pertinent customer data on one screen. With the cloud service, sales reps can be distributed and handle calls from anywhere. A call-recording feature can help coach reps, and voice analytics tools use AI to identify successful sales calls to help train teams.
According to Talkdesk, a cloud-based contact center software provider based in San Francisco, growth in inside sales teams is outpacing field sales.
In other Talkdesk news, the vendor was recently named a visionary in Gartner’s Magic Quadrant for contact center as a service (CCaaS). The report, which evaluated the North American market, includes unified-communications-as-a-service vendors 8×8, West Corp. and BroadSoft.
Unified communications and contact center services are increasingly melding. Customers and contact center agents are now going beyond telephony to use new tools to communicate with each other, including messaging and video chat.
Both CCaaS products and on-premises contact center infrastructure offer similar capabilities. Organizations could potentially substitute traditional on-premises contact center infrastructure with CCaaS offerings.
The cloud-based contact center market is expected to grow from $5.43 billion in 2016 to $15.67 billion by 2021, according to research firm MarketsandMarkets.