Today, artificial intelligence (AI) is being leveraged to do incredible things, from breaking down cultural barriers with smart translators, to helping doctors and biochemists better understand, prevent and treat the world’s deadliest and most confounding diseases. Technology is bridging the gap from imagination to reality faster than ever before, but we have only scratched the surface of what AI can help us accomplish.
At Microsoft, we believe AI has the power to transform our world. By building and leveraging powerful platforms like Microsoft Cognitive Services, we can amplify human ingenuity in ways we never could have imagined: changing how we work, play and live. Our approach to AI isn’t limited to our products; it also extends to how we participate in the broader community.
Since launching Microsoft Ventures last year, we’ve invested in more than 40 remarkable startups including AI innovators like Agolo, Bonsai, Crowdflower and Livongo. We also formed an AI fund to support startups focused on inclusive growth and a positive impact on society. But we still want to do more. So today, we’re announcing Microsoft Ventures’ next step in helping to drive progress in AI and continuing Microsoft’s commitment to making AI accessible for all: Innovate.AI, a global startup competition.
Through this global competition, we are asking startups creating or leveraging transformative AI technologies to apply for a chance to win $1 million from Microsoft Ventures and participating venture capital (VC) partners, plus $500,000 in Azure credits. In addition, one standout finalist will be awarded $500,000 in venture funding and $500,000 in Azure credits from Microsoft Ventures as part of the AI for Good Prize, which recognizes the use of AI to better society. Winners will be drawn from three regions – North America, Europe and Israel – for a total of four winners and a combined $3.5 million in investment funding. In North America, we are working with Madrona Venture Group; in Europe, Notion Capital; and in Israel, Vertex Ventures Israel.
Up to 10 finalists from each region will participate in a live pitch-off in the spring where their ideas will be evaluated by a panel of experts from Microsoft Ventures and our VC partners. Only one of the 10 finalists per region will be awarded the $1 million prize.
Submissions are open now until the end of the calendar year (Dec. 31, 2017) on the Microsoft Ventures website. Companies working to advance the future of AI and share Microsoft’s mission of making AI more accessible and valuable to everyone are encouraged to apply.
I’m looking forward to what (and who!) we find. Bring us your best!
Tags: artificial intelligence, Microsoft Ventures, Peggy Johnson
Empathy must be embedded in artificial intelligence from the moment it is created to ensure it becomes a positive force in people’s lives, Satya Nadella has said.
Speaking at two events in London that coincided with the release of his book, Hit Refresh, the Chief Executive of Microsoft said new technology will have a “profound impact on our daily lives and do good” but companies must also be mindful of “unintended consequences”.
By ensuring the first wave of AI empowered humanity to achieve more, the technology that follows would be more likely to be a force for good, too, he added.
“I believe in a world that will have an abundance of artificial intelligence, but what will be scarce is real intelligence and human qualities, like empathy,” Nadella said. “I think great innovation comes from the empathy you have for the problems you want to solve for people.
“We need to take accountability for the AI we create. I think a lot about this. With any new technology we, as a society, have to be clear-eyed on both sides of it – the opportunities for this technology to have a profound impact on our daily lives and do good, and at the same time be very mindful of unintended consequences.
“I am focused on all the practical things we can do as creators of AI. I believe it’s a design choice. We want to create AI that empowers humans and make that a core, conscious design decision.”
Nadella pointed out that some of Microsoft’s best-known products have empathy at their core.
“I look back and ask myself: ‘when did we create our best products?’. Beyond having a culture that was a learn-it-all culture rather than a know-it-all culture, one of the things we also had was a deep sense, an intuition, of the unmet and unarticulated needs of our customers. Where does that come from? I believe the best source of it is empathy.”
It’s a key theme in Hit Refresh, which focuses on individual change, the transformation happening inside Microsoft and new technology such as AI. Nadella also covers growing up in Hyderabad, India, his love of cricket, emigrating to the US, “growing up” in Microsoft and becoming only the third CEO in the company’s 42-year history.
During an Intelligence Squared event at the Emmanuel Centre in Westminster and a separate talk at Lord’s Cricket Ground, Nadella spoke about his own “Hit Refresh” moment that made him pause and reassess his life.
“If you had asked me an hour before my son’s birth what I was thinking about, it was: is the nursery ready, is Anu [Nadella’s wife] going to get back to her job as an architect, what will weekends be like? Then everything changed. Zain was born with severe brain damage, which led to cerebral palsy,” the 50-year-old said.
“As soon as Anu got out of hospital she was driving him to therapy after therapy; everything that came so naturally to her to give Zain the best chance, didn’t come to me. I was thinking about what happened to me, what happened to my plans, why did this happen. It took me two years or more to internalise that nothing had happened to me, something had happened to Zain, and I had to step up and see the world through his eyes and do my job as his father. I now understand that.
“The process wasn’t linear, it was tumultuous, but it certainly shaped who I am. It gave me a better sense of seeing things through others’ eyes, whether it’s people who work with me or customers. It was perhaps the ‘Hit Refresh’ moment in my life.”
The empathy Nadella discovered “grounded and centred” him. The trait has been a recurring theme throughout his 25 years at the company – even emerging during his Microsoft job interview in 1992. He spent eight hours solving algorithmic and computer science puzzles before being called in for a face-to-face chat.
Nadella was asked a series of questions before facing one that threw him completely off-guard. “What would you do if you see a baby that’s fallen on the street?” the interviewer said. Nadella thought for a couple of minutes before responding. “I would call 911,” he said. The interviewer got up, walked Nadella out of the office and put his arm around him. “If you see a baby on the street, you pick him up and hug him,” he said.
Nadella left the interview convinced he wouldn’t get the job. Just under a quarter of a century later, he was CEO.
During the three years Nadella has spent as head of Microsoft, he has created a “growth mindset” culture, while key areas such as the Azure cloud platform have thrived and the company has pushed the boundaries of technology with the HoloLens mixed-reality headset.
The future, he said, would see huge advances in quantum computing – a revolutionary concept that can unleash never-before-seen levels of computing power. Microsoft recently announced that it has created a programming language that would allow developers to run quantum simulators on their own machines.
“There are a lot of computational problems we [humans] can’t solve. We can’t model an enzyme in natural food production, for example. If you try to solve it using a classical computer, it will take as much time as from the Big Bang to now.
“Think of a maze. With a classical computer you would take a path, find a dead end, turn back and try another path. Using quantum computing, you can try all the paths at the same time.
“We need a new approach to problems, and we are well on our way to bringing together the maths, the physics and the computer science. The world needs Microsoft to do that.”
Artificial intelligence is what’s hot this year in HR tech, and it’s expected to be the most buzzworthy topic at the 20th annual HR Technology Conference and Exposition, according to HR tech experts.
“You won’t be able to walk 2 or 3 feet without someone talking about this new emerging kind of software,” said John Sumser, principal analyst at the independent consulting firm HRExaminer and a speaker and panelist at the three-day conference in Las Vegas.
Of course, artificial intelligence (AI) in HR isn’t the only hot topic in the business these days.
Employee engagement, learning, benefits automation, and advances in payroll and compensation management technology will likely also be bubbling up, according to Sumser and another conference speaker and panelist, Mollie Lombardi, co-founder and CEO of Aptitude Research Partners, an independent HR tech consulting firm in Boston.
SearchHRSoftware’s publisher, TechTarget, is a media partner of the conference.
The meaning of AI
Sumser noted that there is some vagueness about the term AI in HR technology — with various vendors coming up with different definitions — and, often, a more precise term is more useful.
However, he said, “the essence is tools that reduce the amount of time the customer has to spend in the software and increase the value the customer gets out of the software.”
“That’s really what people mean when they say artificial intelligence,” Sumser said.
So, in other words, artificial intelligence in HR and its close cousin, machine learning, really amount to further automation of HR tech processes across the continuum from core HCM to employee engagement.
Sumser recently finished research for a report on 30 leading companies that have been developing applications for artificial intelligence in HR.
Having data provides advantage for vendors
Five of the vendors Sumser looked at are large-scale HR suite providers: Ultimate Software, Ceridian, Kronos, Cornerstone and Workday. A handful were what Sumser called more niche vendors, but bigger than startups; they included IBM, which has an AI product for HR; Salary.com; SmartRecruiters; and Burning Glass.
“The rest are startups,” Sumser said. “It’s my view that, in this particular arena, the incumbents, the suite players and the long-term players … have a significant advantage over startups. The reason … is they have data.”
All of the suite vendors Sumser mentioned will have a significant presence at the Oct. 10 to 13 HR technology event, as will tech giants with major HR footprints, like Oracle and SAP, and dozens of more specialized startups and smaller companies.
Also among the 421 exhibitors at the 2017 edition of the biggest U.S. HR technology show are tech powerhouses Google and LinkedIn, both of which have in a short time become strong contenders in the talent acquisition market. LinkedIn’s learning division will also be there.
Finding AI use cases will be tricky
As for Lombardi, she concurred with Sumser about the fuzziness that appears to accompany so much talk about artificial intelligence in HR.
“AI and machine learning and bots — a lot of people ask me, ‘What’s the difference between those, and what do they really mean?'” Lombardi said. “I think AI, machine learning and bots are sort of where social and mobile were about 10 years ago.”
“There’s going to be a lot of attention paid to it, but finding the use cases is going to be the trick,” she said.
In Lombardi’s view, some of the most interesting applications for AI in HR are in learning, such as intelligently guiding users to content.
Another intriguing use case might be using a chatbot or interrogatory AI tool to guide employers through pay comparison and compensation management decisions, she said.
‘The war for the user experience’
Meanwhile, still another dynamic now playing out in HR tech is what Lombardi called “the war for the user experience.”
“The big firms that have the data and the resources to put together a user experience platform — and a lot of people are talking about this, including the SAPs and the Oracles — they want to be the iPhone to your app,” she said.
But while the big human capital management vendors want customers to use all their modules — from payroll and time and attendance to talent acquisition — many employers are looking for specialized applications to integrate with larger systems.
“I may want to plug in some of these innovative best-of-breed vendors,” Lombardi said. “It will be interesting to see whether people are going to play nice or not.”
A future in which artificial intelligence has become so pervasive that it’s invisible and bots, rather than people, lead our service experiences is the stuff of sci-fi movies. But that future isn’t so far off and, in fact, might not be as scary as we think, says Microsoft’s David Forstrom.
Under the leadership of CEO Satya Nadella, Microsoft wants to break boundaries in AI technology, says Forstrom, who BizTech Managing Editor Ann Longmore-Etheridge chatted with at Microsoft Ignite in Orlando, Fla.
Forstrom, director of AI communications, talked about the evolution of the technology, how Microsoft has begun embedding it into products, such as Office and Cortana, and how it can become a go-to tool for small businesses.
BIZTECH: Why is artificial intelligence important, and what has led to its increased importance in the workplace?
Forstrom: AI addresses the ultimate barrier that we face as humans: information and our limited capacity to absorb and act on it.
What has exploded in the last few years is three things: one, the cloud, which makes data accessible to the world; two, data, enormous amounts of it; and three, advanced algorithms. We’ve had huge breakthroughs from a research perspective in just the last couple of years that have allowed us to do things like achieve 96 percent accuracy in image detection or human parity in speech recognition.
We’ve even been able to integrate translation into Skype and now into productivity products like Office and PowerPoint.
BIZTECH: What are some real-world use cases that demonstrate how Cortana can be a valuable tool for small businesses?
Forstrom: We’re busy professionals, and Cortana, because it has access to our information, can see if we have made commitments. So, if I have emailed someone and said, “Let’s revisit this topic in three months,” or “Let’s plan to get together about this in a few weeks,” Cortana will actually remind me and say, “I see you made this commitment. Do you want to schedule a meeting for that?” Or, “It looks like you’re going to need to prepare a document for this meeting.”
Another one is calendaring. Let’s say you’re a small business working with partners and vendors, and rather than going to each individual and saying, “Can we meet next Wednesday? What times do you have open?” you can invoke Cortana to assist with that.
Cortana will look at those people’s calendars and availability, and come back with a meeting time that suits everyone. Or, it may say, “Everyone is available at this time, except for one person. Do you want to proceed with scheduling this block of time?”
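Cortana’s actual scheduling logic isn’t public, but the behavior Forstrom describes reduces to a classic interval problem: scan the workday for a slot that overlaps nobody’s busy time. A minimal sketch, with invented names, calendars and a 15-minute search granularity:

```python
from datetime import datetime, timedelta

def find_common_slot(busy_by_person, day_start, day_end, duration):
    """Return the first slot of `duration` free for everyone, or None.
    `busy_by_person` maps a name to a list of (start, end) busy intervals."""
    slot = day_start
    step = timedelta(minutes=15)  # granularity of the search
    while slot + duration <= day_end:
        end = slot + duration
        # A slot works if it overlaps no one's busy intervals.
        if all(end <= b or slot >= e
               for busy in busy_by_person.values()
               for b, e in busy):
            return slot
        slot += step
    return None

calendars = {
    "alice": [(datetime(2017, 10, 11, 9), datetime(2017, 10, 11, 11))],
    "bob":   [(datetime(2017, 10, 11, 10), datetime(2017, 10, 11, 12))],
}
slot = find_common_slot(calendars,
                        datetime(2017, 10, 11, 9),   # workday start
                        datetime(2017, 10, 11, 17),  # workday end
                        timedelta(hours=1))
# slot is noon: the first hour free of both busy blocks
```

A production assistant would layer preferences (mornings, time zones, travel time) on top, but the core is this intersection test.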
Cortana is going to evolve into a personal assistant — a true personal assistant. If you think about the practices of a personal assistant in real physical life, it’s extending that into the digital realm. It can be available to you throughout your workday, from home to your small business and back home. It’s still early days, though, for Cortana in business.
BIZTECH: If you’re a small business with limited resources to invest in AI technology, what is your best game plan?
Forstrom: I think businesses right now are recognizing that AI is here to stay. It’s not a flash in the pan. In some shape or form, it will be infused into most of our interactions and our technologies.
Small businesses, first of all, need to look at what problems they’re solving for. A good example of how AI can help is bots, which are something many small businesses are entertaining right now to incorporate into their customer service and engagement.
We did a lot of surveying and research across the Seattle area, talking to a lot of businesses, small and large, about how they could see using bots. In one case, in talking with a restaurant owner, it became clear that the hosts were answering phone calls all the time: people calling wanting to know the hours, or the menu, or about gluten-free options, things like that. If the owner was able to implement a simple rules-based bot — if this question, then this answer — that could completely free up the staff to focus on core things they need to be doing to add more value.
Another area that we see small businesses incorporating AI is in the realm of cognitive services. If a business has a product or solution where they could implement computer vision or natural language understanding, where they are doing a lot of textual interaction with customers — those are fairly cheap. It requires a bit of developer and IT knowledge, but it’s low-level. This is low in cost, compared to complex deep-learning models of AI.
BIZTECH: Microsoft recently bought Hexadite, which underscores the growing role of automation in countering security threats. How do you think that automation will improve cybersecurity, and how will it affect the role of humans?
Forstrom: First and foremost is our principle that AI must augment humanity. Security is certainly one of these spaces we see intersecting quite rapidly with AI, and that’s because it’s all about data and the intelligent insights that you can glean out of that data to make decisions across your security portfolio.
Machines trained through deep learning models will be able to process our knowledge of a potential threat and be able to reach a conclusive decision on actions to be taken. But it’s still humans who are being amplified because it’s humans that are making the decisions at the C-suite level or the chief security officer level in terms of, “How do we act on that?”
BIZTECH: Speculatively, in five to 10 years, what is the business environment for AI going to look like?
Forstrom: You can think about it like electricity: It’s all around us, totally pervasive in our lives; it manifests to power our devices and things like that. But we don’t talk or think about it. It just happens. It’s there. And it betters my life and enables me to do more.
That’s the future that we see with AI, especially when you get 10 years or further out. The office of the future will be an AI-infused environment, where we benefit from all of these features and functionality, but we don’t think about them anymore.
There are already AI-driven features in PowerPoint and Microsoft Word that are being used, but users don’t recognize that as AI. It’s just the product.
We’ll really start to see a shift away from some of the fears of today, that AI will cause a sort of dystopian environment, to where people realize it does better their lives and increases their productivity.
Microsoft’s first mission statement envisioned a computer on every desk and in every home, but Bill Gates also had another goal: that computers would someday be able to see, hear, communicate and understand humans and their environment.
More than 25 years and two CEOs later, Microsoft is betting its future on it.
“We truly believe AI is this disruptive force, even though it’s not new,” said Harry Shum, the executive vice president in charge of Microsoft’s AI and Research group, in an interview with GeekWire. “The recent progress is just enormous. We certainly have seen that through our own products and engagement with customers. We also feel we have a very strong point of view about how we take AI to the next step.”
Microsoft CEO Satya Nadella formed the Microsoft AI and Research group one year ago this month as a fourth engineering division at the company, alongside the Office, Windows and Cloud & Enterprise divisions. The move reflects Nadella’s belief in “democratizing AI,” making it available to any person or company, and radically changing the way computers interact with and work on behalf of humans.
One way to measure Microsoft’s AI bet: In its first year of operation, the AI and Research group has grown by 60 percent — from 5,000 people originally to nearly 8,000 people today — through hiring and acquisitions, and by bringing aboard additional teams from other parts of the company.
The creation of Microsoft AI and Research also underscores the intense competition in artificial intelligence. Microsoft is gearing up to compete against the likes of Google, Amazon, Salesforce, Apple, and countless AI startups and research groups, all of them looking to lead the tech industry in this new era of artificial intelligence.
Microsoft wants to show that it’s leading and not falling behind in artificial intelligence, but competitors are also moving aggressively, said Rob Sanfilippo, research vice president at the independent Directions on Microsoft research firm.
“Microsoft has made advances, but so have IBM, Apple, Amazon, Google, Facebook, and others,” he said. “Arguably, in the consumer space, Amazon leads in AI mindshare — more people are acquainted with Alexa than Cortana. Microsoft is looking to avoid missing giant opportunities as it did with mobile and social media, so it is giving its AI strategy a lot of attention and resources.”
Microsoft has advantages and disadvantages in this quest. For example, the company’s Windows Phone business hasn’t come close to competing with iPhone and Android, which means its virtual assistant Cortana is relegated to third-party app status on the most popular smartphone platforms. However, Cortana is native to Windows 10, which is now on more than 500 million devices.
The company also has the advantage of large amounts of data, the raw material of machine learning, through its Bing search engine and, more recently, its $26 billion acquisition of LinkedIn, the largest deal in Microsoft’s history. Products such as Office 365 also provide a unique distribution channel for Microsoft’s AI features.
Then there’s Microsoft Research, founded more than 25 years ago based on Gates’ original vision. It’s stocked with computer scientists and engineers who have spent years pursuing breakthroughs in areas such as deep neural networks, computer vision, machine translation and other fundamental underpinnings of artificial intelligence.
The idea behind AI and Research is to get those researchers working side-by-side with product teams to move artificial intelligence advances — some of them in the works for years or decades — out of the labs and into new and existing products.
People inside the group point to early progress from this approach. In one example, a researcher’s new method for getting computers to recognize human emotion was released as an API for Microsoft cloud customers in a matter of weeks rather than languishing for months or longer after the publication of an academic paper.
“We’ve had this dream for a long time — that systems could be smarter and model the way you think,” said Lili Cheng, a longtime Microsoft researcher and engineer who now leads the company’s AI developer platform as a corporate vice president in the AI and Research group. The company’s leaders believe that aligning the researchers and product groups will allow that to happen faster.
Microsoft changed the vision statement in its annual report this year to read, in part, “We believe a new technology paradigm is emerging that manifests itself through an intelligent cloud and an intelligent edge where computing is more distributed, AI drives insights and acts on the user’s behalf, and user experiences span devices with a user’s available data and information.”
Artificial intelligence is one of the key topics of Nadella’s upcoming book, Hit Refresh. We’ll also be asking Nadella about AI as part of his upcoming appearance at the GeekWire Summit.
Flurry of AI activity
During the past year, Microsoft has introduced new artificial intelligence and machine learning features and services in products including Office, Bing, Azure and programmable AI chips for the company’s data centers. The company has also released standalone AI programs, such as a Seeing AI app that helps visually impaired people better sense and understand the world around them.
After early triumphs and struggles with chatbots, Microsoft has been rolling them out around the world. Shum says the goal is to have a bot in every country with more than 100 million people.
And in a surprise move that reflects Nadella’s pragmatic approach, Microsoft announced a deal with rival Amazon to connect Cortana and Alexa, their voice-activated AI assistants. The news also illustrated their respective strengths: Amazon in consumer technology, and Microsoft in enterprise technology.
Shum hinted that the initial announcement between Microsoft and Amazon might not be the end of the AI collaboration between the Seattle-area tech giants, saying that he sees more opportunities for them to work together. “The world is so big,” he said. “This is the beginning with Alexa and Cortana.”
Microsoft is pushing to incorporate AI more deeply into Office with features such as PowerPoint Designer, which analyzes the content placed on a slide to suggest an optimal layout. Another project, Presentation Translator, displays subtitles of the presenter’s live speech translated into 60 languages, and can build a custom speech model for better accuracy by analyzing the text of the PowerPoint slides.
Another major AI initiative is Microsoft Cognitive Services, which offers APIs for developers to put elements of artificial intelligence into their apps — a key part of the company’s plan to “democratize” artificial intelligence.
Examples include a new feature, released just last week, which allows developers to export a custom data model to work offline with Apple’s iOS 11 Core ML machine learning framework, letting apps use Microsoft’s Custom Vision Service to recognize images even when not online.
“We actually export into Core ML and then you can download it into whatever app you have and then you can actually start using AI models at the device level, not connected to the cloud,” explained Irving Kwong, lead program manager for Microsoft Cognitive Services, showing how to use the technology to recognize pineapples during a demo last week inside a Microsoft office tower in Bellevue, Wash.
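The developer-facing shape of these services is a REST call: post an image (or its URL) plus a subscription key, get JSON predictions back. The sketch below only constructs such a request without sending it; the endpoint URL and key are placeholders, not Microsoft’s actual API contract, though `Ocp-Apim-Subscription-Key` is the typical Azure key header. Check the service documentation for real values:

```python
import json
import urllib.request

ENDPOINT = "https://example.cognitive.invalid/vision/analyze"  # placeholder
API_KEY = "your-subscription-key"                              # placeholder

def build_analyze_request(image_url: str) -> urllib.request.Request:
    """Build (but don't send) a POST asking the service to analyze an image."""
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": API_KEY,  # typical Azure key header
        },
        method="POST",
    )

req = build_analyze_request("https://example.com/pineapple.jpg")
# urllib.request.urlopen(req) would send it; the JSON response would carry
# the recognized tags -- e.g. "pineapple" in Kwong's demo.
```

The Core ML export Kwong describes moves the same trained model onto the device, so the app can skip this round trip entirely when offline.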
More artificial-intelligence announcements are expected from Microsoft next week at the company’s Ignite conference in Orlando, where Nadella is delivering the keynote address on Monday morning.
More challenges ahead
Even with all the activity, Microsoft and the field of AI have a long way to go.
“While this is very exciting, I think people might get confused that most AI problems are solved. That’s definitely not true. I want to caution everyone — we’re still very early in this AI thing,” Shum told the audience at the recent opening of the GIX U.S.-China tech institution. “Computers today can perform specific tasks very well, but when it comes to general tasks, AI cannot compete with a human child.”
Talking with GeekWire in his Microsoft office last week, Shum acknowledged that parts of the company’s artificial intelligence business are still very nascent, as well. “With many of those applications today, we are not overly thinking about making money (or) being profitable too soon,” he said. “We think we’re still early in terms of the right user experience. I think the next couple of years will be very important for us.”
After starting out with teams such as Bing, Cortana, Microsoft’s Information Platform Group, ambient computing and robotics, Microsoft AI and Research has continued to grow. Internal additions include the Azure Machine Learning team, headed by Joseph Sirosh, corporate vice president of Microsoft’s Cloud AI Platform, which previously was in Microsoft’s Cloud and Enterprise Division, reporting to Scott Guthrie.
Microsoft has also built up its AI capabilities with strategic acquisitions, including the natural language scheduling startup Genee and deep learning startup Maluuba. Deep learning expert Yoshua Bengio, who leads the Montreal Institute for Learning Algorithms and was a Maluuba adviser, is now an adviser to Microsoft and Shum.
This summer, the company formed a new team inside Shum’s organization called Microsoft Research AI, led by longtime artificial intelligence researcher Eric Horvitz, to bring together the company’s top talent in core areas such as machine perception, learning, reasoning and natural language.
“We’ve largely built what I would call wedges of competency — a great speech recognition system, a great vision and captioning system, a great object recognition system,” said Horvitz, who is known for projects such as the virtual animated assistant that greets visitors at his door. “We’ve managed to do incredible work with those wedges.”
But now the company is looking to bring all of that together in a quest for the elusive goal of general machine intelligence. “We really want to pursue the understanding of the mysteries of the human intellect,” Horvitz said.
Addressing another major issue facing AI, Microsoft has formed an internal group called “Aether,” which stands for AI and ethics in engineering and research. The group includes a representative from each Microsoft division, reporting to Nadella and the senior leadership team. Horvitz, who is part of the group, said it has been meeting about every two weeks to help the company form standards and best practices for artificial intelligence.
One of the issues the company will grapple with: leveraging the huge amounts of data available through Bing and LinkedIn. Shum acknowledged the limits to how Microsoft can use and present the connections among that data.
“When you think about the Microsoft Graph and Office Graph, now augmented with the LinkedIn Graph, it’s just amazing. It will take some good product sense to bring those things together,” Shum said. “We have to think through the rights to the data, whether it belongs to a company or an individual, and what the user shared and what the company would like to keep. Those things are good product design questions. But we’re very excited about LinkedIn and we are already working very closely.”
If the company can figure out those issues, it could be in a unique position, said Samir Diwan, a former Microsoft senior program manager who is now CEO of the startup Polly, which makes chatbots for polls and surveys in Slack, Microsoft Teams and Google Hangouts Chat.
“When we think of research getting closer to product, we frequently think that product teams will be able to take better advantage of the research to deliver more compelling products,” Diwan said. “This might be the first time in history where things are inverted — a strong objective for bringing research closer to product is for research to take advantage of the large amounts of data being generated through product usage.”
Diwan explained, “That’s why I think to a large degree, the true wins at Microsoft of bringing research and product teams closer are years down the line. Over time, that research ideally feeds back into products creating a nice symbiotic relationship between the two.”
As we walked out of his office last week, Shum assessed Microsoft’s prospects in AI. “The new platform is emerging. It will take a little time, but directionally, I think many people see that,” he said. Now, the question is whether Microsoft “can really execute with differentiation,” he said, concluding, “We feel pretty good about our chances here.”
Predictive analytics software combines artificial intelligence, machine learning, data mining and modeling to parse big data resources and create highly accurate and insightful forecasts, but companies need flash technology to support it.
Thanks to its impressive speed, flash technology accelerates predictive analytics software. With flash’s sub-millisecond latency, business, engineering and other verticals can perform more complex analyses in less time than with conventional hard disk drive technology.
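A rough back-of-envelope calculation shows why that latency gap matters for analytics workloads. The latencies below are illustrative round numbers, not vendor benchmarks:

```python
# Back-of-envelope: time to service one million random reads,
# using illustrative latencies (~10 ms for HDD seek + rotation,
# ~0.1 ms for flash's sub-millisecond access).
HDD_LATENCY_S = 0.010
FLASH_LATENCY_S = 0.0001

reads = 1_000_000
hdd_hours = reads * HDD_LATENCY_S / 3600
flash_minutes = reads * FLASH_LATENCY_S / 60

print(f"HDD:   {hdd_hours:.2f} hours")     # roughly 2.78 hours
print(f"Flash: {flash_minutes:.2f} minutes")  # roughly 1.67 minutes
```

Even at these coarse assumptions, the same random-access workload drops from hours to minutes, which is the effect the analysts quoted here describe.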
“Flash storage is a key technology that enables analysis at larger scales of data in faster time frames,” said Mike Matchett, an analyst with storage industry research firm Taneja Group Inc. in Hopkinton, Mass.
According to Vincent Hsu, IBM’s CTO for storage, there are three basic requirements storage must meet to effectively support analytical workloads: compelling data economics, enterprise resiliency and easy infrastructure integration.
“Put simply, faster response times can yield more business agility and quicker time to value from analytics, and more data analyzed at once means more potential value streams,” Hsu said.
There’s a competitive race today to use predictive analytics software in many forms, including machine learning and deep learning applied to operational optimization.
“By becoming predictive at increasing operational speeds, organizations can not only find marked improvement in existing business processes, but exploit disruptive new approaches to their markets,” Matchett said. “We’ve seen predictive analytics evolve from offline, small data scoring into massive web-scale, big data, real-time decision-making.”
Predictive analytics software is not just about analysis, but gaining the ability to respond — rather than react — to rapidly changing market conditions.
“Since actions based on the results are the whole point, faster, smarter and more relevant results win the day and, as a result, flash wins out,” said Donna Taylor, head of consulting firm Taylor & Associates and former Gartner analyst.
Matchett noted that organizations can add flash technology to almost any modern array in the form of cache or as a fast storage tier.
“We also see some innovation in having storage architectures ‘link up’ server-side flash as a virtual local performance tier of persistence,” he said.
Turning data into insights
The key obstacle many predictive analytics software users face is limited file-access speed.
“While the raw storage capacity of legacy [or] traditional storage has increased dramatically in recent years, the rate at which the data can be accessed and served has remained relatively flat,” said Sam Ruchlewicz, director of digital strategy and data analytics at Warschawski, a marketing communications agency based in Baltimore that uses predictive analytics software to study consumer trends and behaviors.
“As the sheer volume of customer data continues to grow, more predictive analytics applications are moving to flash storage to efficiently and effectively access actionable information,” he said.
Ruchlewicz noted that one of the biggest challenges in his field is making sense of terabytes — or even petabytes — of customer data in real time, then using that insight to deliver a better customer experience at relevant touch points.
“To accomplish that goal, the predictive analytics application [or] algorithm must query the database for the requisite information, process it and provide the result to the next component of your marketing technology stack,” he said. Flash technology is the key to making this process fast and efficient.
As they look to accelerate their predictive analytics capabilities, organizations must carefully examine where a flash technology investment can make the most sense.
“Storage-side flash tends to be shared widely, but is probably the most expensive,” Matchett said. “Server-side flash, such as NVMe, can provide a huge boost to applications that can make use of it locally, but might be quite a large investment to make across a large big data cluster.”
Matchett noted that flash storage prices will continue to fall, even as capacities increase.
“What is interesting is that we also see some possible new tiers of faster persistence coming with ReRAM, MRAM and the like,” he said.
For now, many predictive analytics software users rely on a combination of storage media types, including HDDs, tape and flash technology.
“This is nothing new; however, companies looking to squeeze additional value from dense data sets will increasingly adopt flash technology in order to reap the benefits of faster seek and processing speeds,” Taylor said.
The essential attribute most flash customers are looking for, according to Hsu, is data agility — the automated, policy-driven reallocation of data to and from a storage medium without a lot of human intervention or time-consuming, expensive steps.
“It is in this state of data agility where flash really shines and paves the way for artificial intelligence and machine learning,” Hsu said.
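The policy-driven reallocation Hsu describes can be sketched as a simple tiering rule. This is a hypothetical illustration of the concept, not any vendor's actual policy engine; the class, function and threshold names are invented for the example:

```python
from dataclasses import dataclass

# Hypothetical sketch of policy-driven data placement:
# frequently accessed ("hot") data sets go to flash,
# cold data sets stay on cheaper HDD capacity.
@dataclass
class DataSet:
    name: str
    reads_per_day: int

def place_tier(ds: DataSet, hot_threshold: int = 1000) -> str:
    """Return the target tier for a data set based on access frequency."""
    return "flash" if ds.reads_per_day >= hot_threshold else "hdd"

print(place_tier(DataSet("clickstream", 50_000)))  # flash
print(place_tier(DataSet("archive", 3)))           # hdd
```

A real implementation would run such rules automatically and continuously, which is the "without a lot of human intervention" part of data agility.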
Taylor urged predictive analytics software users who are planning a full or partial transition to flash technology to thoroughly research the market. “Otherwise, they risk being at the mercy of a salesman’s skewed sales pitch,” she said.
Ruchlewicz said he would advise any organization considering an infrastructure investment designed to support an analytics initiative to seriously think about using flash storage, noting that most predictive models request data faster than a legacy system can provide it.
“Even if the organization’s data set is within more reasonable bounds, flash is the superior alternative and the system of the future,” he noted.
Hsu concurred. “Data is the most valuable commodity organizations can lay claim to, and any organization that considers speed and insight as a competitive advantage can benefit from flash storage for predictive analytics,” he said.
CAMBRIDGE, Mass. — Despite the appeal of trendy technologies like artificial intelligence, one consultant is encouraging CIOs to go back to business intelligence basics and rethink their key performance indicator methodology.
Mico Yuk, CEO and co-founder of the consultancy BI Brainz Group in Atlanta, said companies are still struggling to make key performance indicators actionable — and not for lack of trying. It turns out the real stumbling block isn’t data, it’s language.
“KPIs are a challenge because of people, not because of measurements. A lot of problems that exist with KPIs are in the way that people interpret them,” she said in an interview at the recent Real Business Intelligence Conference.
Yuk sat down with SearchCIO and talked about how her key performance indicator (KPI) methodology, known as WHW, breaks KPIs into simple components and why her research drove her to consider the psychological impact of KPI names. This Q&A has been edited for brevity and clarity.
You recommend teams should have at least three but no more than five KPIs. What’s the science behind that advice?
Mico Yuk: There’s a book from Franklin Covey called The 4 Disciplines of Execution. It’s a fantastic book. In it, he talks about not having more than three WIGs — wildly important goals — per team. He did a study and proved that over a long period of time — a year, three months, six months, the typical KPI timeframe for tracking, monitoring and controlling — human beings can only take action and be effective on three goals. Other research firms have said five to eight KPIs are important. Today, I tell people that most report tracking of KPIs is done on mobile devices. It’s been proven that human beings get over 30,000 ads per day, and half of those exist on their phones. You are constantly competing for people’s attention. With shorter attention spans, you have to be precise, you have to be exact, and when you have your user’s attention, you have to make sure they have exactly what they need to take action or you’ll lose them.
The KPI methodology you ascribe to is called WHW. What is WHW?
Yuk: WHW stands for What, How and When. We took Peter Drucker’s SMART concept. Everybody knows him. He’s the ‘If you can’t see it, then you can’t measure it’ guy. His methodology is called SMART, which stands for specific, measurable, accurate, results-oriented, and time-oriented. He says you have to have all five elements in your KPI in order for it to be useful. We said we’re going to look at what Drucker was recommending, extract those elements and turn them into a sentence structure. To do this you take any KPI and ask yourself three questions: What do you need to do with it? That’s the W. By how much? That’s the H. By when? That’s the W. You use those answers to rename your KPI so that it reads like this: The action, the KPI name, the how much, and the when. That is SMART hacked.
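Since WHW is essentially a sentence template, it can be captured in a few lines of code. This helper is purely illustrative (the function name and signature are invented, not part of Yuk's methodology):

```python
def whw_name(action: str, kpi: str, how_much: str, by_when: str) -> str:
    """Rename a KPI using the WHW pattern:
    action + KPI name + 'by how much' + 'by when'."""
    return f"{action} {kpi} by {how_much} by {by_when}"

print(whw_name("Grow", "profit margin", "25%", "2017"))
# → Grow profit margin by 25% by 2017
```

The point of the template is exactly this fill-in-the-blanks simplicity: answer three questions and the KPI name becomes specific, measurable and time-bound.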
Why do you find WHW to be a better KPI methodology?
Yuk: It’s easier. We don’t think one KPI methodology is necessarily better than the other. Using OKRs [Objectives and Key Results] are equally as effective, for instance. But we do find that having just a sentence where someone can plug in words is much faster. Imagine going to a company and saying, ‘You have 20 KPIs. We’re going to transform all of them.’ Some of the methodologies require quite a bit of work to get that done. We find that when we show companies a sentence structure and they are able to just answer, answer, answer and see the transformation, it’s an ‘ah-ha’ moment for them. Not to mention there’s the consumption part of it. Now that you’re specific, it also makes it easier to break that big goal down into smaller goals for people to consume.
You’ve said it’s important to rename KPIs, but the language you use is equally as important. What led you to that conclusion?
Yuk: We are data visualization specialists, but when we started nine years ago we found that [our visualizations] were becoming garbage in, garbage out. We kept saying, ‘This looks great, but it’s not effective. Why?’ We then [looked at] what we were visualizing, and we realized that the KPIs we were visualizing were the problem — not the type of charts, not the color, not the prettiness. That led us to say, ‘We’ve got to look at these KPIs closely and figure out how to make these KPIs smarter.’ That was our shared challenge. That led us into learning more about ‘right-brain wording,’ learning about simplicity, learning about exactly what the perfect KPI looks like after we evaluated as many methodologies as we could find on the market. What we concluded is that it all starts with your KPI name.
What is “right-brain wording”?
Yuk: If you go online and you look up right brain versus left brain [wording], there are amazing diagrams. They show that your right brain controls your creativity while your left brain is more analytical. Most developers use the left side of their brains — analytics, coding, all that complex stuff. The artists of the world, the creatives who may not be able to understand complex math, they use the right part of their brain. But what you find on the creative side is that there is a cortex activity that happens when you use certain words that [are] visceral. We found that it is one thing to rename your KPIs, but it is another thing to get [the wording right] so that it resonates with people.
Let’s take profit margin as an example, and let’s say that after you use our WHW hack, the revised KPI name is ‘increase profit margin by 25% by 2017.’ If I were to ask you to visualize the word increase, you would probably see an arrow that points up. OK, it’s a visual but not one that you have an emotional connection to — it’s a left-brain, analytical word. But if I ask you to visualize a right-brain word like grow, I guarantee you’ll see a green leaf or plant in your brain. What happens in your brain is, because you’re thinking of a leaf, there’s an association that happens. Most people have a personal experience with the word grow — a memory of some kind. But they don’t have the same relationship with the word increase. Because of the association, users are more likely to remember and take action on that KPI. User adoption of KPIs and taking action is a problem. If you take the time to wordsmith or rename the KPIs so that they’re more right-brain-oriented, you can transform how your users act and react to them.
How can CIOs help make employees feel connected to top-line goals with KPIs?
Yuk: After we finish wordsmithing the KPI’s name, we focus on impact. A CIO in New York told me a long time ago, ‘One of the most important things you need to remember is that everybody has to be tuned into WIFM.’ And I asked, ‘What’s that?’ He said, ‘What’s in it for me?’
The good thing about transforming a KPI into the WHW format — it now has the action, the KPI name, the how much, the by when all in the name. You are now able to take that 25% [profit margin goal] and set deadlines and break it down, not just for the year, but by month, by quarter and even by day. You can break it down to the revenue streams that contribute [to the goal] and see what percentage those revenue streams contribute. That’s where you can individualize expectations and actions.
You tend to find two things. Not only can you individualize expectations, but you can also say, now that you have that individual goal, I can show you how it ties back into the overall goal and how other people are performing compared to you. People innately want to be competitive. They want to be on top — the leaderboard syndrome.
Those two elements are keys to having impact with your KPIs. Again, it’s a bit more psychological, but KPIs aren’t working. So we dug deep into the more cognitive side to try to figure out how to make them resonate with people and the [psychological] rabbit hole goes very deep. Start with the name.
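The breakdown Yuk describes, turning one annual target into smaller milestones people can act on, amounts to simple arithmetic. A minimal sketch, assuming an evenly paced quarterly split of a hypothetical 25% annual goal:

```python
# Break a single annual KPI target into cumulative quarterly milestones.
# The even split is an illustrative assumption; a real plan might weight
# quarters by seasonality or revenue-stream contribution.
annual_target_pct = 25.0
quarters = 4

per_quarter = annual_target_pct / quarters
milestones = [per_quarter * q for q in range(1, quarters + 1)]

print(milestones)  # [6.25, 12.5, 18.75, 25.0]
```

The same division could continue down to months or days, which is what lets each person see their individual slice of the top-line goal.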
Acronis is turning to artificial intelligence, or AI, to combat ransomware attacks.
The Burlington, Mass., company has built a new version of its Active Protection technology that is integrated into Acronis True Image backup software and uses machine learning to help prevent ransomware viruses from corrupting data.
The software attempts to detect suspicious application behavior before files are corrupted.
“Before, with Active Protection, we detected changes in files. We looked at files, and if multiple operations of encrypting files occurred, an alert was raised,” said Gaidar Magdanurov, vice president and general manager for consumer and online business at Acronis. “Now, we look at what the application is doing. Ransomware can inject code into the application and, on behalf of the application, it encrypts your files.”
Magdanurov said ransomware attacks have begun to affect home users and smaller business customers.
True Image is a consumer backup tool. In the past, Acronis has added Active Protection and other security features in its Acronis Backup application for businesses after first including them in Acronis True Image backup.
The updated Acronis Active Protection collects data and sends it to the Acronis Cloud AI infrastructure for analysis. Machine learning models are created from expected and unexpected behavior, and malicious and legitimate processes. These models become part of the Active Protection so that Acronis True Image can protect a system’s data independent of an internet connection while combating ransomware.
Active Protection can then detect suspicious behavior and check it against the normal process using heuristic analysis and the machine learning models. Any process deemed abnormal prompts the Acronis True Image backup software to send an alert to the administrator.
Real-time monitoring helps verify processes so normal activities continue to run while irregular behavior is stopped. True Image will automatically restore files that were encrypted from the backup.
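The behavioral detection described above can be sketched, in heavily simplified form, as counting file-modification events per process and flagging outliers. This is a hypothetical illustration of the general heuristic, not Acronis's actual implementation; the threshold and all names are invented:

```python
from collections import Counter

# Illustrative threshold: a process touching this many files in one
# observation window is treated as suspicious (ransomware typically
# rewrites many files in rapid succession).
ENCRYPT_EVENTS_THRESHOLD = 50

def suspicious_processes(events, threshold=ENCRYPT_EVENTS_THRESHOLD):
    """events: iterable of (pid, action) tuples observed in a window.
    Returns the set of pids whose file-write count exceeds the threshold."""
    counts = Counter(pid for pid, action in events if action == "write")
    return {pid for pid, n in counts.items() if n > threshold}

# A burst of writes from one process stands out against normal activity.
window = [("backup.exe", "write")] * 3 + [("evil.exe", "write")] * 200
print(suspicious_processes(window))  # {'evil.exe'}
```

A production system layers machine learning models over heuristics like this so that legitimate bulk writers (backup jobs, compilers) are whitelisted rather than flagged.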
Magdanurov said the latest version of Active Protection changes the way it stores files. It now stores pieces of files on different parts of the hard drives and can detect if any piece of a file is corrupted, he said.
True Image moves ‘beyond backup’
Phil Goodwin, research director for storage systems at IDC, said Acronis True Image backup acts more like malware detection with this new technology.
“It’s pretty interesting. They are still different from malware detection, but they are moving in that direction,” he said. “This is beyond backup.”
Goodwin said the use of machine learning technology helps to protect the Acronis True Image backup software and primary data in combating ransomware.
“Previously, a malware event attacked the systems,” he said. “Now, they are trying to propagate in the backup software itself.”
Acronis also updated the Active Protection to combat macOS ransomware attacks. While Windows was hit with WannaCry and Petya ransomware attacks, some of the attacks on Mac systems include KeRanger, Patcher and MacRansom.
“There are new types of ransomware attacking Mac computers,” Magdanurov said. “It’s different in scale because there are less Mac computers. Also, creating ransomware to attack Mac computers is not as easy. With Windows, the attacks use some known security issues.”
The U.S. Defense Intelligence Agency claimed it wanted to reengineer enemy malware to be used as offensive cyberweapons, but experts said this may be less of a practical plan of action and more a signal of intent to shift away from a defensive posture.
Lieutenant General Vincent Stewart, director of the U.S. Defense Intelligence Agency, expressed this interest in offensive cyberweapons while speaking at the U.S. Department of Defense Intelligence Information Systems conference in St. Louis.
“Once we’ve isolated malware, I want to reengineer it and prep to use it against the same adversary who sought to use [it] against us,” Stewart said. “We must disrupt to exist.”
Jonathan Sander, chief technology officer at STEALTHbits Technologies, said that calling these comments about offensive cyberweapons “a plan” is reading too much into the press conference.
“The premise of the comments was that the U.S. has been in a defense-only posture, but the NSA leaks of cyberweapons like EternalBlue show that’s far from the truth,” Sander told SearchSecurity. “It’s clear the U.S. has an active and capable red team that’s finding and weaponizing its own cyber assets. Of course the military will also capture, analyze and learn from any weapons used against it. But that’s less news and more something we should all hope they are doing anyway.”
Mounir Hahad, senior director of Cyphort Labs, agreed that Stewart’s comments were “intended to convey intent to be more active than the passive past.”
“There is no advantage gained by the U.S. government in re-using adversary developed malware, the U.S. is plenty capable of developing its own and inflicting whatever damage it wants,” Hahad told SearchSecurity. “Furthermore, the targets may be completely different technologically speaking, so a weapon that works against U.S. targets may be ineffective against a target at a different level of automation.”
The risks of reengineering malware
Jake Williams, founder of consulting firm Rendition InfoSec LLC in Augusta, Ga., said a plan to reengineer offensive cyberweapons would be ineffective.
“This is the general idea of throwing the grenade back at the person who threw it at you, [but] it’s certainly more effective with grenades than with malware,” Williams told SearchSecurity. “Reverse engineering is a much more specialized skill than programming, so the effort required to do this is much higher than simply developing malware in the first place.”
Georgia Weidman, founder and CTO of Shevirah, a penetration testing firm based in Washington, D.C., said our government should be analyzing the offensive cyberweapons samples obtained “to further protect ourselves from future similar attacks,” but noted there are major risks in attempting to repurpose that malware.
“There are many instances of exploit code freely available on the internet that purports to attack an enemy but instead attacks the machine that attempts to run the attack, making a victim of the attacker,” Weidman told SearchSecurity. “Sophisticated malware often goes to great lengths to make it difficult for malware analysts to fully understand what it is doing, obfuscating its code to mislead analysts. Or it may behave differently in different environments, attempting to detect when it is being analyzed and changing its behavior accordingly. Simply ripping out the target information in a piece of malware and sending it back out could have devastating unintended consequences if the malware is not fully understood.”
Williams also noted that the practice of reengineering offensive cyberweapons wouldn’t be new, because the CIA “mined malware for capabilities as part of their UMBRAGE program.”
“In most cases, this isn’t a question of patching. Most malware doesn’t use any zero day exploits. The real issue is signatures. We would generally assume that the adversary who is deploying malware has signatures in place to detect their own malware,” Williams said. “Any reengineering effort would have to include some programs to obfuscate the signatures of the malware itself. The problem with this is that the adversary you are throwing the malware back at knows more about the malware than you do and you don’t know what specifically in the malware they are alerting on. A much better plan is to write your own malware from scratch.”
Weidman said the government shouldn’t trust it can control offensive cyberweapons that it didn’t create.
“Malware once released from its cage has no moral compass when attacking intended victims,” Weidman said. “While some malware such as the famous Stuxnet went to great lengths to only attack intended targets, spreading far and wide but only running its destructive payload under specific circumstances, there is very likely to be collateral damage in a malware attack.”
The latest volume of the Microsoft Security Intelligence Report is now available for free download at www.microsoft.com/sir.
This new volume of the report includes threat data from the first quarter of 2017. The report also provides specific threat data for over 100 countries/regions. As mentioned in a recent blog, using the tremendous breadth and depth of signal and intelligence from our various cloud and on-premises solutions deployed globally, we investigate threats and vulnerabilities and regularly publish this report to educate enterprise organizations on the current state of threats and recommended best practices and solutions.
In this 22nd volume, we’ve made two significant changes:
We have organized the data sets into two categories, cloud and endpoint. Most enterprises now have hybrid environments, and it’s important to provide more holistic visibility.
We are sharing data from a shorter time period, one quarter (January 2017 – March 2017), instead of the typical six months, as we shift our focus to delivering improved and more frequent updates in the future.
The threat landscape is constantly changing. Going forward, we plan to improve how we share the insights, and plan to share data on a more frequent basis – so that you can have more timely visibility into the latest threat insights. We are committed to continuing our investment in researching and sharing the latest security intelligence with you, as we have for over a decade. This shift in our approach is rooted in a principle that guides Microsoft technology investments: to leverage vast data and unique intelligence to help our customers respond to threats faster.
Here are three key findings from the report:
As organizations migrate more and more to the cloud, the frequency and sophistication of attacks on consumer and enterprise accounts in the cloud is growing.
There was a 300 percent increase in Microsoft cloud-based user accounts attacked year-over-year (Q1-2016 to Q1-2017).
The number of account sign-ins attempted from malicious IP addresses has increased by 44 percent year over year in Q1-2017.
Cloud services such as Microsoft Azure are perennial targets for attackers seeking to compromise and weaponize virtual machines and other services, and these attacks are taking place across the globe.
Over two-thirds of incoming attacks on Azure services in Q1-2017 came from IP addresses in China and the United States, at 35.1 percent and 32.5 percent, respectively. Korea was third at 3.1 percent, followed by 116 other countries and regions.
Ransomware is affecting different parts of the world to varying degrees.
Ransomware encounter rates are the lowest in Japan (0.012 percent in March 2017), China (0.014 percent), and the United States (0.02 percent).
Ransomware encounter rates were highest in Europe, compared with the rest of the world, in Q1-2017.
Multiple European countries, including the Czech Republic (0.17 percent), Italy (0.14 percent), Hungary (0.14 percent), Spain (0.14 percent), Romania (0.13 percent), Croatia (0.13 percent), and Greece (0.12 percent) had much higher ransomware encounter rates than the worldwide average in March 2017.
Download Volume 22 of the Microsoft Security Intelligence Report today to access additional insights: www.microsoft.com/sir.