
At HR Technology Conference, Walmart says virtual reality works

LAS VEGAS — Learning technology appears to be heading for a major upgrade. Walmart is using virtual reality, or VR, to train its employees, and many other companies may soon do the same.

VR adoption is part of a larger tech shift in employee learning. For example, companies such as Wendy’s are using simulation or gamification to help employees learn about food preparation.

Deploying VR technology is expensive, with cost estimates ranging from tens of thousands of dollars to millions, attendees at the HR Technology Conference learned. But headset prices are declining rapidly, and libraries of VR training tools for dealing with common HR situations — such as how to fire an employee — may make this tool affordable to firms of all sizes.

For Walmart, a payoff of using virtual reality comes from higher job certification test scores. Meanwhile, Wendy’s has been using computer simulations to help employees learn their jobs. It is also adapting its training to the expectations of its workers, and its efforts have led to a turnover reduction. Based on presentations and interviews at the HR Technology Conference, users deploying these technologies are enthusiastic about them.

Walmart employees experience VR’s 3D

“It truly becomes an experience,” said Andy Trainor, senior director of Walmart Academies, in an interview about the impact of VR and augmented reality on training. It’s unlike a typical classroom lesson. “Employees actually feel like they experience it,” he said.

Walmart's training and virtual reality team, from left to right: Brock McKeel, senior director of digital operations at Walmart, and Andy Trainor, senior director of Walmart Academies.

Walmart employees go to “academies” for training, testing and certification on certain processes, such as taking care of the store’s produce section, interacting with customers or preparing for Black Friday. As one person in a class wears the VR headset or goggles, what that person sees and experiences displays on a monitor for the class to follow.

Walmart has been using VR training technology from startup STRIVR for just over a year. In classes using VR, Trainor said, the company is seeing test scores as much as 15% higher than with traditional methods of instruction. Trainor said his team members are convinced VR, with its ability to create 3D simulations, is here to stay as a training tool.

“Life isn’t 2D,” said Brock McKeel, senior director of digital operations at Walmart. For problems ranging from customer service issues to emergency weather planning, “we want our associates to be the best prepared that we can get them to be.”

Walmart has also created a simulation-type game that helps employees understand store management. The company plans to soon release its simulation as an app for anyone to experience, Trainor said.

The old ways of training are broken

The need to do things differently in learning was a theme at the HR Technology Conference.

Life isn’t 2D.
Brock McKeel, senior director of digital operations at Walmart

Expecting employees to take time out of their day to watch a training video or read material that may not be connected to the task at hand is not effective, said David Mallon, a vice president and chief analyst at Bersin, Deloitte Consulting, based in Oakland, Calif.

The traditional methods of learning “have fallen apart,” Mallon said. Employees “want to engage with content on their terms, when they need it, where they need it and in ways that make more sense.”

Mallon’s point is something Wendy’s realized about its restaurant workers, who understand technology and have expectations about content, said Coley O’Brien, chief people officer at the restaurant chain. Employees want the content to be quick, they want the ability to swipe, and videos should be 30 seconds or less, he said.

“We really had to think about how we evolve our training approach and our content to really meet their expectations,” said O’Brien, who presented at the conference.

Wendy’s also created simulations that reproduce some of the time pressures faced in certain food-preparation processes. Employees must make choices in the simulations, and their mistakes are tracked. The company uses Cornerstone OnDemand’s platform.

Restaurants whose employees have reached a certain level of certification see sales gains of 1% to 2%, increases in customer satisfaction and turnover reductions as high as 20%, O’Brien said.

Deep Learning Indaba 2018: Strengthening African machine learning – Microsoft Research

Participants at the Deep Learning Indaba. Images ©2018 Deep Learning Indaba.

At the 30th Conference on Neural Information Processing Systems in 2016, one of the world’s foremost gatherings on machine learning, there was not a single accepted paper from a researcher at an African institution. In fact, for the last decade, the entire African continent has been absent from the contemporary machine learning landscape. The following year, a group of researchers set out to change this, founding a world-class machine learning conference that would strengthen African machine learning – the Deep Learning Indaba.

The first Deep Learning Indaba took place at Wits University in South Africa. The indaba (a Zulu word for a gathering or meeting) was a runaway success, with almost 300 participants representing 22 African countries and 33 African research institutes. It was a week-long event of teaching, sharing and debate around the state of the art in machine learning and artificial intelligence that aimed to be a catalyst for strengthening machine learning in Africa.

Attendees at Deep Learning Indaba 2017, held at Wits University, South Africa.

The Indaba is now in its second year, and Microsoft is proud to sponsor Deep Learning Indaba 2018, to be held September 9-14 at Stellenbosch University in South Africa.

The conference offers an exciting lineup of talks, hands-on workshops, poster sessions and networking/mentoring events. Once again it has attracted a star-studded list of guest speakers: Google Brain lead and TensorFlow co-creator Jeff Dean, DeepMind lead Nando de Freitas and AlphaGo lead David Silver. Microsoft is flying in top researchers as well: Katja Hofmann will speak about reinforcement learning and Project Malmo (check out her recent podcast episode), Konstantina Palla will present on generative models and healthcare, and Timnit Gebru will talk about fairness and ethics in AI.

The missing continent

The motivation behind this conference really resonated with me. When I heard about it, I knew I wanted to contribute to the 2018 Indaba, and I was excited that Microsoft was already signed up as a headline sponsor and had our own Danielle Belgrave on the advisory board.

African countries represented at the 2017 Deep Learning Indaba.

Dr. Tempest van Schaik, Software Engineer, AI & Data Science

I graduated from the University of the Witwatersrand (“Wits”) in Johannesburg, South Africa, with degrees in biomedical engineering and electrical engineering, not unlike some of the conference organizers. In 2010, I came to the United Kingdom to pursue my PhD at Imperial College London and stayed on to work in the UK, joining Microsoft in 2017 as a software engineer in machine learning.

In my eight years working in the UK in the tech community, I have seldom come across African scientists, engineers and researchers sharing their work on the international stage. During my PhD studies, I was acutely aware of the Randlord monuments flanking my department’s building, despite the absence of any South Africans inside the department. At scientific conferences in Asia, Europe and the USA, I scanned the schedule for African institutions but seldom found them. Fellow Africans that I do find are usually working abroad. I have come to learn that Africa, a continent bigger than the USA, China, India, and Europe put together, has little visible global participation in science and technology. The reasons are numerous, with affordability being just one factor. I have felt the disappointment of trying to get a Tanzanian panelist to a tech conference in the USA. We realized that even if we could raise sufficient funds for his participation, the money would have achieved so much more in his home country that he couldn’t justify spending it on a conference.

Of all tech areas, perhaps it is artificial intelligence in particular that needs African participation. Countries such as China and the UK are gearing up for the next industrial revolution, creating plans for retraining and increasing digital skills. Those who are left behind could face disruption due to AI and automation and might not be able to benefit from the fruits of AI. Another reason to increase African participation in AI is to reduce the algorithmic bias that can arise when a narrow section of society develops technology.

A quote from the Indaba 2017 report perhaps says it best: “The solutions of contemporary AI and machine learning have been developed for the most part in the developed world. As Africans, we continue to be receivers of the current advances in machine learning. To address the challenges facing our societies and countries, Africans must be owners, shapers and contributors of the advances in machine learning and artificial intelligence.”

Attendees at Deep Learning Indaba 2017

Diversity

One of the goals of the conference is to increase diversity in the field. To quote the organizers, “It is critical for Africans, and women and black people in particular, to be appropriately represented in the advances that are to be made.” The make-up of the Indaba in its first two years is already impressive and leads by example to show how to organize a diverse and inclusive conference. From the Code of Conduct to the organizing committee, the advisory board, the speakers and attendees, you see a group of brilliant and diverse people in every sense.

The 2018 Women in Machine Learning lineup.

The Indaba’s quest for diversity aligns with another passion of mine, that of increasing women’s participation in STEM. Since my days of being the lonely woman in electrical engineering lectures, things have been improving. There seems to be more awareness today about attracting and retaining women in STEM, by improving workplace culture. However, there’s still a long way to go, and in the UK where I work, only 11% of the engineering workforce is female according to a 2017 survey. I have found great support and encouragement from women-in-tech communities and events such as PyLadies/RLadies London and AI Club For Gender Minorities, and saw the Indaba as an opportunity to pay it forward and link up with like-minded women globally. So, I’m very pleased to say that on the evening of September 10 at the Indaba, Microsoft is hosting a Women in Machine Learning event.

Indaba – a gathering.

The aim of our evening is to encourage, support and unite women in machine learning. Each of our panelists will describe her personal career journey and her experiences as a woman in machine learning. As there will be a high number of students in attendance, our panel also highlights diverse career paths, from academia to industrial research, to applied machine learning, to start-ups. Our panel consists of Sarah Brown (Brown University, USA), Konstantina Palla (Microsoft Research, UK), Muthoni Wanyoike (InstaDeep, Kenya), Kathleen Siminyu (Africa’s Talking, Kenya) and myself from Microsoft Commercial Software Engineering (UK). We look forward to seeing you there!

AI bias and data stewardship are the next ethical concerns for infosec

When it comes to artificial intelligence and machine learning, there is a growing understanding that rather than constantly striving for more data, data scientists should be striving for better data when creating AI models.

Laura Norén, director of research at Obsidian Security, spoke about data science ethics at Black Hat USA 2018, and discussed the potential pitfalls of not having quality data, including AI bias learned from the people training the model.

Norén also looked forward to the data science ethics questions that have yet to be asked around what should happen to a person’s data after they die.

Editor’s note: This is part two of our talk with Norén and it has been edited for length and clarity.

What do you think about how companies go about AI and machine learning right now?
 
Laura Norén: I think some of them are getting smarter. At a very large scale, it’s not noise, but you get a lot of data that you don’t really need to store forever. And frankly it costs money to store data. It costs money to have lots and lots and lots of variable features in your model. If you get a more robust model and you’re aware of where your signal is coming from, you may also decide not to store particular kinds of data because it’s actually inefficient at some point.
 
For instance, astronomers have this problem. They’ve been building telescopes that are generating so much data, it cripples the system. They’ve had seven years of planning just to figure out which data to keep, because they can’t keep it all.

There’s a myth out there that in order to develop really great machine learning systems you need to have everything, especially at the outset, when you don’t really know what the predictive features are going to be. It’s nontrivial to do the math and to use the existing data and tests and simulations to figure out what you really need to store and what you don’t need to capture in the first place. It’s part of the hoarding mythology that somehow we need all of the data all of the time for all time for every person.

How does data science ethics relate to issues of AI bias caused by the data that’s fed in?
 
Norén: That is such a great, great question. I absolutely know that it’s going to be important. We’re aware of that, we’re watching for it, and we’re monitoring for it so we can test for bias, in this case against Russians. Because it’s cybersecurity, that’s a bias we might have. You can test for that kind of thing. And so we’re building tests for those kinds of predictable biases we might have.

I wish I had a great story of how we discovered that we’re biased against Russians or North Koreans or something like that. But I don’t have that yet because it would just be wrong to kind of run into some of the great stories that I’m sure we’re going to run into soon enough.
 
How do you identify what could be an AI bias that you need to worry about when first building the system?

 
Norén: When you have low data or your models are kind of all over the place because it’s the very beginning, you might be able to use social science to help you look for early biases. All of the data that we’re feeding into these systems is generated by humans, and humans are inherently biased; that’s how we’ve evolved. That turns out to be really strong, evolutionarily speaking, and then not so great in advanced evolution.
 
You can test for things that you think might have a known bias, and that’s where it helps to know your history. Like I said, in cybersecurity you might worry about being biased specifically against particular regions. So you may have a higher false-positive rate for Russians or for Russian language content or Chinese language content, or something like that. You could specifically test for those because you went in knowing that you might have a bias. It’s a little bit more technical and difficult to unearth biases that you were not expecting. We’re using technical solutions and data social science to try to help surface those.
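To make that kind of targeted check concrete, here is a minimal sketch of the approach Norén describes: comparing false-positive rates across groups you suspect a model might treat differently. The column names and data are hypothetical, purely for illustration.

```python
# A minimal sketch of a targeted bias check: compare false-positive
# rates across groups (here, regions) for a hypothetical security model.
import pandas as pd

def false_positive_rate(df: pd.DataFrame) -> float:
    """FPR = false positives / all actual negatives."""
    negatives = df[df["actual"] == 0]
    if len(negatives) == 0:
        return float("nan")
    return (negatives["predicted"] == 1).mean()

# Hypothetical model output: one row per alert, with the region the
# traffic originated from, the model's prediction and the true label.
results = pd.DataFrame({
    "region":    ["RU", "RU", "US", "US", "CN", "CN", "RU", "US"],
    "predicted": [1,    1,    0,    1,    0,    1,    0,    0],
    "actual":    [0,    1,    0,    1,    0,    0,    0,    0],
})

# A large gap between groups is a red flag worth investigating,
# not proof of bias on its own.
fpr_by_region = results.groupby("region")[["predicted", "actual"]].apply(false_positive_rate)
print(fpr_by_region)
```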

I think social science has been kind of the sleeper hit in data science. It turns out it really helps if you know your domain really well. In our case, that’s social science because we’re dealing with humans. In other cases, it might help to be a really good biologist if you’re starting to do genomics at a predictive level. In general, the strongest data scientists we see are people who have both very high technical skills in the data science vertical but also deep knowledge of their domain.
 
It sounds like a lot of the potential mitigations for AI bias and data science issues boil down to being more proactive rather than reactive. In that spirit, what is an issue that you think will become a bigger topic of discussion in the next five years?
 
Norén: I do actually think it’s going to be very interesting just how people feel about what happens to their data as more and more companies have more and more data about people forever and their data are going to outlive them. There have been some people who are already working on that kind of thing.
 
Say you have a best friend and your best friend dies, but you have all these emails and chats, texts, back-and-forth with your best friend. Someone is developing a chatbot that mimics your best friend by being trained on all those actual conversations you had and will then live on past your best friend. So you can continue to talk with your best friend even though your best friend is dead. That’s an interesting, kind of provocative, almost artistic take on that point.
 
But I think it’s going to be a much bigger topic of conversation to try to understand what it means to have yourself, profiles and data live out beyond the end of your own life and be able to extend to places that you’re not actually in. It will drive decisions about you that you will have no agency over. The dead best friend has no agency over that chatbot.

Indefinite data storage will become much, much more topical in conversation and we’ll also start to see then why the right to be forgotten is an insufficient response to that kind of thing because it assumes that you know where to go as your agency, or that you even have agency at all. You’re dead; you obviously don’t have any agency. Maybe you should, maybe you shouldn’t. That’s an interesting ethical question.

Users are already finding they don’t always have agency over their data even when alive, aren’t they?
 
Norén: Even if you’re alive, if you don’t really know who holds your data, you may have no agency to get rid of it. I can’t call up Equifax and tell them to delete my data. I’m an American, but I don’t have that. I know they’re stewards of it but there’s nothing I could do about that.

We’ll probably favor conversation a lot more in terms of being good guardians of data rather than talking about it in terms of something that we own or don’t own; it will be about stewardship and guardianship. That’s a language that I’m borrowing from medical ethics because they’re using that type of language to deal with DNA.
 
Can someone else own your DNA? They’ve decided no. DNA is such an intrinsic part of a person’s identity and a person’s physicality that it can’t be owned in whole by someone else. But that someone else, like a hospital or a research lab, could take guardianship of it.

The language is out there, but we haven’t really seen it move all the way through the field of data science. It’s kind of stuck over in genomics and the Henrietta Lacks story. She was a woman who had cervical cancer, and she died. But her cells, her cancer cells, were really robust. They worked really well in research settings and they lived on well past Henrietta’s life. Her family was unaware of this. There’s this beautiful book written about what it means to find out that part of your family — this deceased family member that you cared about a lot — is still alive and is still fueling all this research when you didn’t even know anything about it. That’s kind of where that conversation got started, but I see a lot of parallels there between data science and what people think of when they think of DNA.
 
One of the things that’s so different about data science is that we now can actually have a much more complete record of an individual than we have ever been able to have. It’s not just a different iteration on the same kind of thing. You used to be able to have some sort of dossier on you that has your birthdate and your Social Security number, your name and whether you were married. That’s such a small amount of information compared to every single interaction that you’ve had with a piece of software, with another person, with a communication, every medical record, everything that we might know about your DNA. And our knowledge will continue to get deeper and deeper and deeper as science progresses. And we don’t really know what that’s going to do to the concept of individuality and finiteness.
 
I think about these things very deeply. We’re going to see that in terms of, ‘Wow, what does it mean that your data is so complete and it exists in places and times that you could never exist and will never exist?’ That’s why I think that decay by design thing is so important.

Building a data science pipeline: Benefits, cautions

Enterprises are adopting data science pipelines for artificial intelligence, machine learning and plain old statistics. A data science pipeline — a sequence of actions for processing data — will help companies be more competitive in a digital, fast-moving economy. 

Before CIOs take this approach, however, it’s important to consider some of the key differences between data science development workflows and traditional application development workflows.

Data science development pipelines used for building predictive and data science models are inherently experimental and don’t always pan out in the same way as other software development processes, such as Agile and DevOps. Because data science models break and lose accuracy in different ways than traditional IT apps do, a data science pipeline needs to be scrutinized to assure the model reflects what the business is hoping to achieve.

At the recent Rev Data Science Leaders Summit in San Francisco, leading experts explored some of these important distinctions, and elaborated on ways that IT leaders can responsibly implement a data science pipeline. Most significantly, data science development pipelines need accountability, transparency and auditability. In addition, CIOs need to implement mechanisms for addressing the degradation of a model over time, or “model drift.” Having the right teams in place in the data science pipeline is also critical: Data science generalists work best in the early stages, while specialists add value to more mature data science processes.

Data science at Moody’s

Jacob Grotta, managing director, Moody's Analytics

CIOs might want to take a cue from Moody’s, the financial analytics giant, which was an early pioneer in using predictive modeling to assess the risks of bonds and investment portfolios. Jacob Grotta, managing director at Moody’s Analytics, said the company has streamlined the data science pipeline it uses to create models in order to be able to quickly adapt to changing business and economic conditions.

“As soon as a new model is built, it is at its peak performance, and over time, they get worse,” Grotta said. Declining model performance can have significant impacts. For example, in the finance industry, a model that doesn’t accurately predict mortgage default rates puts a bank in jeopardy. 

Watch out for assumptions

Grotta said it is important to keep in mind that data science models are created by and represent the assumptions of the data scientists behind them. Before the 2008 financial crisis, a firm approached Grotta with a new model for predicting the value of mortgage-backed derivatives, he said. When he asked what would happen if the prices of houses went down, the firm responded that the model predicted the market would be fine. But it didn’t have any data to support this. Mistakes like these cost the economy almost $14 trillion by some estimates.

The expectation among companies often is that someone understands what the model does and its inherent risks. But these unverified assumptions can create blind spots for even the most accurate models. Grotta said it is a good practice to create lines of defense against these sorts of blind spots.

The first line of defense is to encourage the data modelers to be honest about what they do and don’t know and to be clear on the questions they are being asked to solve. “It is not an easy thing for people to do,” Grotta said.

A second line of defense is verification and validation. Model verification involves checking to see that someone implemented the model correctly, and whether mistakes were made while coding it. Model validation, in contrast, is an independent challenge process to help a person developing a model to identify what assumptions went into the data. Ultimately, Grotta said, the only way to know if the modeler’s assumptions are accurate or not is to wait for the future.

A third line of defense is an internal audit or governance process. This involves making the results of these models explainable to front-line business managers. Grotta said he was recently working with a bank that protested that its managers would not use a model if they didn’t understand what was driving its results. But he said the managers were right to do this. Having a governance process and ensuring information flows up and down the organization is extremely important, Grotta said.

Baking in accountability

Models degrade or “drift” over time, which is part of the reason organizations need to streamline their model development processes. It can take years to craft a new model. “By that time, you might have to go back and rebuild it,” Grotta said. Critical models must be revalidated every year.

To address this challenge, CIOs should think about creating a data science pipeline with an auditable, repeatable and transparent process. This promises to allow organizations to bring the same kind of iterative agility to model development that Agile and DevOps have brought to software development.

Transparent means that upstream and downstream people understand the model drivers. It is repeatable in that someone else can reproduce the process used to create it. It is auditable in the sense that there is a program in place to think about how to manage the process, take in new information and get the model through the monitoring process. There are varying levels of this kind of agility today, but Grotta believes it is important for organizations to make it easy to update data science models in order to stay competitive.

How to keep up with model drift

Nick Elprin, CEO and co-founder of Domino Data Lab, a data science platform vendor, agreed that model drift is a problem that must be addressed head on when building a data science development pipeline. In some cases, the drift might be due to changes in the environment, like changing customer preferences or behavior. In other cases, drift could be caused by more adversarial factors. For example, criminals might adopt new strategies for defeating a new fraud detection model.

Nick Elprin, CEO and co-founder, Domino Data Lab

In order to keep up with this drift, CIOs need to include a process for monitoring the effectiveness of their data models over time and establishing thresholds for replacing these models when performance degrades.

With traditional software monitoring, IT service management teams track metrics related to CPU, network and memory usage. With data science, CIOs need to capture metrics related to the accuracy of model results. “Software for [data science] production models needs to look at the output they are getting from those models, and if drift has occurred, that should raise an alarm to retrain it,” Elprin said.
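As a rough illustration of that monitoring loop, the sketch below tracks a production model's accuracy against its accuracy at deployment time and raises an alarm once the drop crosses a threshold. The metric, threshold and alerting hook are hypothetical placeholders, not any specific vendor's implementation.

```python
# A minimal drift-monitoring sketch: compare current accuracy with the
# baseline measured at deployment time and alert past a tolerated drop.
from dataclasses import dataclass

@dataclass
class DriftMonitor:
    baseline_accuracy: float      # accuracy measured at deployment time
    max_drop: float = 0.05        # tolerated absolute drop before alarming

    def check(self, current_accuracy: float) -> bool:
        """Return True (and alert) if the model has drifted too far."""
        drifted = (self.baseline_accuracy - current_accuracy) > self.max_drop
        if drifted:
            self.alert(current_accuracy)
        return drifted

    def alert(self, current_accuracy: float) -> None:
        # In practice this would page the team or trigger a retraining job.
        print(f"Drift alarm: accuracy {current_accuracy:.3f} vs "
              f"baseline {self.baseline_accuracy:.3f}")

# Example: a fraud model deployed at 92% accuracy, now scoring 85%
# on recent labeled traffic -- this exceeds the 5-point tolerance.
monitor = DriftMonitor(baseline_accuracy=0.92)
monitor.check(current_accuracy=0.85)
```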

Fashion-forward data science

At Stitch Fix, a personal shopping service, the company’s data science pipeline allows it to sell clothes online at full price. Using data science in various ways lets the company find new ways to add value against deep-discount giants like Amazon, said Eric Colson, chief algorithms officer at Stitch Fix.

Eric Colson, chief algorithms officer, Stitch Fix

For example, the data science team has used natural language processing to improve its recommendation engines and buy inventory. Stitch Fix also uses genetic algorithms — algorithms that are designed to mimic evolution and iteratively select the best results following a set of randomized changes. These are used to streamline the process for designing clothes, coming up with countless iterations: Fashion designers then vet the designs.
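As a toy illustration of the technique Colson describes, and not Stitch Fix's actual system, the sketch below applies randomized changes to candidate "designs", scores them with a stand-in fitness function, keeps the best and iterates. In practice the fitness function would come from a model of customer preferences, and the surviving candidates would be the ones handed to human designers for vetting.

```python
# A toy genetic algorithm: mutate and recombine candidate designs,
# score them, keep the fittest, and repeat. Encoding and fitness
# function are purely illustrative.
import random

DESIGN_LENGTH = 10          # a design is a vector of 10 style "choices"
POPULATION_SIZE = 20
GENERATIONS = 50

def random_design():
    return [random.randint(0, 9) for _ in range(DESIGN_LENGTH)]

def fitness(design):
    # Stand-in for a real score (e.g. predicted customer appeal):
    # here we simply reward designs whose choices sum high.
    return sum(design)

def mutate(design, rate=0.2):
    return [random.randint(0, 9) if random.random() < rate else gene
            for gene in design]

def crossover(a, b):
    cut = random.randint(1, DESIGN_LENGTH - 1)
    return a[:cut] + b[cut:]

population = [random_design() for _ in range(POPULATION_SIZE)]
for _ in range(GENERATIONS):
    # Keep the fittest half, then refill with mutated offspring.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POPULATION_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POPULATION_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print("Best design:", best, "score:", fitness(best))
```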

This kind of digital innovation, however, was only possible he said because the company created an efficient data science pipeline. He added that it was also critical that the data science team is considered a top-level department at Stitch Fix and reports directly to the CEO.

Specialists or generalists?

One important consideration for CIOs in constructing the data science development pipeline is whether to recruit data science specialists or generalists. Specialists are good at optimizing one step in a complex data science pipeline. Generalists can execute all the different tasks in a data science pipeline. In the early stages of a data science initiative, generalists can adapt to changes in the workflow more easily, Colson said.

Some of these different tasks include feature engineering, model training, extract, transform and load (ETL) work, API integration, and application development. It is tempting to staff each of these tasks with specialists to improve individual performance. “This may be true of assembly lines, but with data science, you don’t know what you are building, and you need to iterate,” Colson said. The process of iteration requires fluidity, and if the different roles are staffed with different people, there will be longer wait times when a change is made.

In the beginning, at least, companies will benefit more from generalists. But once data science processes are established, after a few years, specialists may be more efficient.

Align data science with business

Today a lot of data science models are built in silos that are disconnected from normal business operations, Domino’s Elprin said. To make data science effective, it must be integrated into existing business processes. This comes from aligning data science projects with business initiatives. This might involve things like reducing the cost of fraudulent claims or improving customer engagement.

In less effective organizations, management tends to start with the data the company has collected and wonder what a data science team can do with it. In more effective organizations, data science is driven by business objectives.

“Getting to digital transformation requires top down buy-in to say this is important,” Elprin said. “The most successful organizations find ways to get quick wins to get political capital. Instead of twelve-month projects, quick wins will demonstrate value, and get more concrete engagement.”

Databricks platform additions unify machine learning frameworks

SAN FRANCISCO — Open source machine learning frameworks have multiplied in recent years as enterprises pursue operational gains through AI. Along the way, this proliferation has created a jumble of competing tools and a nightmare for development teams tasked with supporting them all.

Databricks, which offers managed versions of the Spark compute platform in the cloud, is making a play for enterprises that are struggling to keep pace with this environment. At Spark + AI Summit 2018, which was hosted by Databricks here this week, the company announced updates to its platform and to Spark that it said will help bring the diverse array of machine learning frameworks under one roof.

Unifying machine learning frameworks

MLflow is a new open source framework on the Databricks platform that integrates with Spark, scikit-learn, TensorFlow and other open source machine learning tools. It allows data scientists to package machine learning code into reproducible modules, conduct and compare parallel experiments, and deploy models that are production-ready.
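The snippet below is a minimal sketch of that tracking workflow, based on MLflow's publicly documented Python API: each training run logs its parameters and metrics so runs can be compared side by side later. The model choice and parameter values are hypothetical examples.

```python
# A minimal MLflow experiment-tracking sketch: log the parameters and
# metrics of several training runs so they can be compared in the UI.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for alpha in (0.1, 1.0, 10.0):               # three comparable runs
    with mlflow.start_run():
        model = Ridge(alpha=alpha).fit(X_train, y_train)
        mse = mean_squared_error(y_test, model.predict(X_test))

        mlflow.log_param("alpha", alpha)          # record the configuration
        mlflow.log_metric("mse", mse)             # record the result
        mlflow.sklearn.log_model(model, "model")  # package the fitted model
```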

Databricks also introduced a new product on its platform, called Runtime for ML. This is a preconfigured Spark cluster that comes loaded with distributed machine learning frameworks commonly used for deep learning, including Keras, Horovod and TensorFlow, eliminating the integration work data scientists typically have to do when adopting a new tool.

Databricks’ other announcement, a tool called Delta, is aimed at improving data quality for machine learning modeling. Delta sits on top of data lakes, which typically contain large amounts of unstructured data. Data scientists can specify a schema they want their training data to match, and Delta will pull in all the data in the data lake that fits the specified schema, leaving out data that doesn’t fit.
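The sketch below illustrates the general idea in PySpark: declare the schema the training data must match, read the raw data lake files against it, drop records that don't conform, and store the clean result as a Delta table. The paths and columns are hypothetical, and this is a simplified illustration of the concept rather than Delta's exact mechanism.

```python
# A rough sketch of schema-driven data cleaning for training data,
# writing the conforming records out in Delta format.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("delta-schema-sketch").getOrCreate()

# The schema the data scientist wants the training set to match.
schema = StructType([
    StructField("user_id", StringType(), nullable=False),
    StructField("event", StringType(), nullable=False),
    StructField("amount", DoubleType(), nullable=True),
])

# Read raw JSON from the data lake against the declared schema; records
# that can't be parsed come back with null fields and are filtered out.
raw = spark.read.schema(schema).json("s3://example-bucket/raw-events/")
clean = raw.dropna(subset=["user_id", "event"])

# Write the conforming records as a Delta table for training jobs.
clean.write.format("delta").mode("overwrite").save("s3://example-bucket/delta/events/")
```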

MLflow includes a tracking interface for logging the results of machine learning jobs.

Users want everything under one roof

Each of the new tools is either in a public preview or alpha test stage, so few users have had a chance to get their hands on them. But attendees at the conference were broadly happy about the approach of stitching together disparate frameworks more tightly.

Saman Michael Far, senior vice president of technology at the Financial Industry Regulatory Authority (FINRA) in Washington, D.C., said in a keynote presentation that he brought in the Databricks platform largely because it already supports several query languages, including R, Python and SQL. Integrating these tools more closely with machine learning frameworks will help FINRA use more machine learning in its goal of spotting potentially illegal financial trades.

You have to take a unified approach. Pick technologies that help you unify your data and operations.
John Gole, senior director of business analysis and product management at Capital One

“It’s removed a lot of the obstacles that seemed inherent to doing machine learning in a business environment,” Far said.

John Gole, senior director of business analysis and product management at Capital One, based in McLean, Va., said the financial services company has implemented Spark throughout its operational departments, including marketing, accounts management and business reporting. The platform is being used for tasks that range from extract, transform and load jobs to SQL querying for ad hoc analysis and machine learning. It’s this unified nature of Spark that made it attractive, Gole said.

Going forward, he said he expects this kind of unified platform to become even more valuable as enterprises bring more machine learning to the center of their operations.

“You have to take a unified approach,” Gole said. “Pick technologies that help you unify your data and operations.”

Bringing together a range of tools

Engineers at ride-sharing platform Uber have already built integrations similar to what Databricks unveiled at the conference. In a presentation, Atul Gupte, a product manager at Uber, based in San Francisco, described a data science workbench his team created that brings together a range of tools — including Jupyter, R and Python — into a web-based environment that’s powered by Spark on the back end. The platform is used for all the company’s machine learning jobs, like training models to cluster rider pickups in Uber Pool or forecast rider demand so the app can encourage more drivers to get out on the roads.

Gupte said that as the company grew from a startup into a large enterprise, the old way of doing things, in which everyone worked in their own silo using their own tool of choice, didn’t scale, which is why it was important to take a more standardized approach to data analysis and machine learning.

“The power is that everyone is now working together,” Gupte said. “You don’t have to keep switching tools. It’s a pretty foundational change in the way teams are working.”

Call center chatbots draw skepticism from leaders

ORLANDO, Fla. — Artificial intelligence chatbot vendors may hype machine learning tools to enhance customer service, but call center leaders aren’t necessarily ready to trust them in the real world.

Part of the reason is call centers are judged by hard-to-achieve performance metrics based on volume, efficiency and customer satisfaction. Once a call center performs successfully against those expectations set by management, it’s hard to convince leaders to entrust call center chatbots with the hard-fought, quality customer relations programs they’ve built with humans.

“I don’t anticipate them having any kind of utility here,” said Jason Baker, senior vice president of operations for Entertainment Benefits Group (EBG), which manages discount tickets and other promotions for 61 million employees at 40,000 client companies. Baker oversees EBG customer service spanning multiple call centers.

“We strive for creating personalized and memorable experiences,” Baker said. “A chatbot — I understand the reason behind it, and, depending upon the type of environment, it might make sense — but in the travel and entertainment industry, you have to have the personalized touch with all interactions.”

Artificial intelligence chatbots were the most-talked-about technology at the ICMI Contact Center Expo, drawing a mix of trepidation and interest from attendees.

Navy Federal Credit Union employs “a few bots” for fielding very basic customer questions, such as balance inquiries, said Georgia Adams, social care supervisor at the credit union, based in Vienna, Va.

Her active social media team publishes tens of thousands of posts and comments annually on Twitter, Instagram and Facebook without the help of artificial intelligence chatbots, but “they’re on the horizon.” She stressed that call center chatbots must be transparent — identifying themselves as a bot — and be empowered to transfer customers to human customer service agents quickly to be effective.

“It’s coming, whether you want it or not,” Adams said. “We’re strategizing [and] looking at it. I certainly think they have a lot of value, especially when it comes to things that are basically self-service … but if I’m talking to a bot, I want to know I’m talking to a bot.”

Call center chatbots not gunning for humans’ jobs — yet

Another part of the reason call center personnel might be wary of chatbots — true or not, fair or unfair — is the fear that robotic automation will eventually take humans’ jobs. Neutral industry experts such as ICMI founding partner Brad Cleveland dismissed this idea; Cleveland said alternative customer service channels such as interactive voice response (IVR), email, social media and live chat each caused similar panic in the call center world when they were new, but none of them significantly affected call volumes.

“We hear predictions that artificial intelligence will replace all the jobs out there,” Cleveland said, not just in customer service. “If it does, we’re definitely going to be the last ones standing in customer service. But I don’t think it’s going to happen that way at all.”

Cleveland said he believes artificial intelligence chatbots will likely have utility in the near future, as technology advances and call centers find appropriate uses for them. Machine learning tools that aren’t chatbots, too, will make a difference, he said.

One example on display was an AI tool that can be trained to find — and adapt on the fly — pre-worded answers to common, or complex and time-consuming, customer queries that a human agent can paste into a chat window after a quick edit for sense and perhaps personalization. The idea is they get smarter and more on point over months of use.

But even live chat channels have their limits when they’re run by humans, let alone artificial intelligence chatbots. Frankie Littleford, vice president of customer support at JetBlue, based in Long Island City, N.Y., said during a breakout session here that her agents have to develop a sixth sense about when to stop typing and pick up the phone.

“You know in your gut when to take it out of email or whatever channel that isn’t person-to-person,” Littleford said. “You just continue to make someone angrier when you’re going back and forth — and let’s face it, a lot of people are really brave when they’re not face-to-face or on the phone … If your agents are skilled to speak with those customers, you can allow them to climb their mountain of anger and then de-escalate.”

ICMI commissioned whiteboard artist Heather Klar to illustrate select Contact Center Expo sessions. Here, she illustrated high points from a pro-AI chatbot lecture.

Vendors hold out hope

ICMI attendees weren’t fully buying into the promise of AI chatbots, but undeterred software vendors kept up the full-court press, attempting to sell the benefits of automation and allay fears that chatbots will eventually replace attendees’ jobs.

“We don’t use [AI] to replace human work,” said Mark Bloom, Salesforce Service Cloud senior director of strategy and operations, during his keynote, adding that organizations that attempt to replace people with AI tools haven’t been successful. “We want to augment the work our people are doing and make them more intelligent. That is how we are moving forward.”

You could train a new employee, and they could leave tomorrow. A bot is not going to give up and leave, it’s not going to get sick, and it’s so scalable.
Kaye Chapman, content and client training manager, Comm100

Setting up call center chatbots will require extensive training in test environments — just like human agents do. Once they’re trained, they require maintenance and updating, but they will solve another vexing problem for call center managers — employee turnover, said Kaye Chapman, content and client training manager for chatbot vendor Comm100, based in Vancouver, B.C.

“You could train a new employee, and they could leave tomorrow,” Chapman said. “A bot is not going to give up and leave, it’s not going to get sick, and it’s so scalable.”

Bob Furniss, vice president at Bluewolf, an IBM subsidiary known for Salesforce automation integrations that’s based in New York, said he believes artificial intelligence chatbots are coming, and AI in general will change both our personal and work lives. He said the potential is there for AI to help ease call center agents’ workload — up to 30% of the simplest customer queries — similar to the promises of IVR and the other channels when they came online in the industry.

Furniss warned that, just like any other call center system, anything AI-powered will require attention and maintenance to tune its actions and keep abreast of changing workflows and updated customer relations strategies.

“This is just like any other technology we have in the contact center,” Furniss said. “You don’t set it and leave it, just like workforce management [applications]. There’s an art and a skill to it.”

DJI and Microsoft partner to bring advanced drone technology to the enterprise

New developer tools for Windows and Azure IoT Edge Services enable real-time AI and machine learning for drones

REDMOND, Wash. — May 7, 2018 — DJI, the world’s leader in civilian drones and aerial imaging technology, and Microsoft Corp. have announced a strategic partnership to bring advanced AI and machine learning capabilities to DJI drones, helping businesses harness the power of commercial drone technology and edge cloud computing.

Through this partnership, DJI is releasing a software development kit (SDK) for Windows that extends the power of commercial drone technology to the largest enterprise developer community in the world. Using applications written for Windows 10 PCs, DJI drones can be customized and controlled for a wide variety of industrial uses, with full flight control and real-time data transfer capabilities, making drone technology accessible to Windows 10 customers numbering nearly 700 million globally.

DJI has also selected Microsoft Azure as its preferred cloud computing partner, taking advantage of Azure’s industry-leading AI and machine learning capabilities to help turn vast quantities of aerial imagery and video data into actionable insights for thousands of businesses across the globe.

“As computing becomes ubiquitous, the intelligent edge is emerging as the next technology frontier,” said Scott Guthrie, executive vice president, Cloud and Enterprise Group, Microsoft. “DJI is the leader in commercial drone technology, and Microsoft Azure is the preferred cloud for commercial businesses. Together, we are bringing unparalleled intelligent cloud and Azure IoT capabilities to devices on the edge, creating the potential to change the game for multiple industries spanning agriculture, public safety, construction and more.”

DJI’s new SDK for Windows empowers developers to build native Windows applications that can remotely control DJI drones, including autonomous flight and real-time data streaming. The SDK will also allow the Windows developer community to integrate and control third-party payloads like multispectral sensors, robotic components like custom actuators, and more, exponentially increasing the ways drones can be used in the enterprise.

“DJI is excited to form this unique partnership with Microsoft to bring the power of DJI aerial platforms to the Microsoft developer ecosystem,” said Roger Luo, president at DJI. “Using our new SDK, Windows developers will soon be able to employ drones, AI and machine learning technologies to create intelligent flying robots that will save businesses time and money, and help make drone technology a mainstay in the workplace.”

In addition to the SDK for Windows, Microsoft and DJI are collaborating to develop commercial drone solutions using Azure IoT Edge and AI technologies for customers in key vertical segments such as agriculture, construction and public safety. Windows developers will be able to use DJI drones alongside Azure’s extensive cloud and IoT toolset to build AI solutions that are trained in the cloud and deployed down to drones in the field in real time, allowing businesses to quickly take advantage of learnings at one individual site and rapidly apply them across the organization.

DJI and Microsoft are already working together to advance technology for precision farming with Microsoft’s FarmBeats solution, which aggregates and analyzes data from aerial and ground sensors using AI models running on Azure IoT Edge. With DJI drones, the Microsoft FarmBeats solution can take advantage of advanced sensors to detect heat, light, moisture and more to provide unique visual insights into crops, animals and soil on the farm. Microsoft FarmBeats integrates DJI’s PC Ground Station Pro software and mapping algorithm to create real-time heatmaps on Azure IoT Edge, which enable farmers to quickly identify crop stress and disease, pest infestation, or other issues that may reduce yield.

With this partnership, DJI will have access to the Azure IP Advantage program, which provides industry protection for intellectual property risks in the cloud. For Microsoft, the partnership is an example of the important role IP plays in ensuring a healthy and vibrant technology ecosystem and builds upon existing partnerships in emerging sectors such as connected cars and personal wearables.

Availability

DJI’s SDK for Windows is available as a beta preview to attendees of the Microsoft Build conference today and will be broadly available in fall 2018. For more information on the Windows SDK and DJI’s full suite of developer solutions, visit: developer.dji.com.

About DJI

DJI, the world’s leader in civilian drones and aerial imaging technology, was founded and is run by people with a passion for remote-controlled helicopters and experts in flight-control technology and camera stabilization. The company is dedicated to making aerial photography and filmmaking equipment and platforms more accessible, reliable and easier to use for creators and innovators around the world. DJI’s global operations currently span across the Americas, Europe and Asia, and its revolutionary products and solutions have been chosen by customers in over 100 countries for applications in filmmaking, construction, inspection, emergency response, agriculture, conservation and other industries.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For additional information, please contact:

Michael Oldenburg, DJI Senior Communication Manager, North America – michael.oldenburg@dji.com

Chelsea Pohl, Microsoft Commercial Communications Manager – chelp@microsoft.com

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

For more information, visit our:

Website: www.dji.com

Online Store: store.dji.com/

Facebook: www.facebook.com/DJI

Instagram: www.instagram.com/DJIGlobal

Twitter: www.twitter.com/DJIGlobal
LinkedIn: www.linkedin.com/company/dji

Subscribe to our YouTube Channel: www.youtube.com/DJI