CIOs are starting to rethink the infrastructure stack required to support artificial intelligence technologies, according to experts at the Deep Learning Summit in San Francisco. In the past, enterprise architectures coalesced around efficient technology stacks for business processes supported by mainframes, then by minicomputers, client servers, the internet and now cloud computing. But every level of infrastructure is now up for grabs in the rush to take advantage of AI.
“There were well-defined winners that became the default stack around questions like how to run Oracle and what PDP was used for,” said Ashmeet Sidana, founder and managing partner of Engineering Capital, referring to the Programmed Data Processor, an older model of minicomputer.
“Now, for the first time, we are seeing that every layer of that stack is up for grabs, from the CPU and GPU all the way up to which frameworks should be used and where to get data from,” said Sidana, who serves as chief engineer of the venture capital firm, based in Menlo Park, Calif.
The stakes are high for building an AI infrastructure — startups, as well as legacy enterprises, could achieve huge advantages by innovating at every level of this emerging stack for AI, according to speakers at the conference.
But the job won’t be easy for CIOs faced with a fast-evolving field where the vendor pecking order is not yet settled, and their technology decisions will have a dramatic impact on software development. AI infrastructure demands a new development model, one that is statistical rather than deterministic. On the vendor front, Google’s TensorFlow technology has emerged as an early winner, but it faces production and customization challenges. Making matters more complicated, CIOs also must decide whether to deploy AI infrastructure on private hardware or in the cloud.
New skills required for AI infrastructure
Traditional application development approaches build deterministic apps with well-defined best practices. But AI involves an inherently statistical process. “There is a discomfort in moving from one realm to the other,” Sidana said. Acknowledging this shift and understanding its ramifications will be critical to bringing the enterprise into the machine learning and AI space, he said.
The biggest ramification is also AI’s dirty little secret: The types of AI that will prove most useful to the enterprise, machine learning and especially deep learning approaches, work great only with great data — both quantity and quality. With algorithms becoming more commoditized, what used to be AI’s major rate-limiting feature — the complexity of developing the software algorithms — is being supplanted by a new hurdle: the complexity of data preparation. “When we have perfect AI algorithms, all the software engineers will become data-preparation engineers,” Sidana said.
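Sidana’s point about data preparation is easy to see even in a toy supervised learner. The sketch below is purely illustrative (pure Python, invented sensor readings, a nearest-centroid model standing in for a real algorithm): most of the lines are spent cleaning and normalizing data, while the “learning” is one statistic per class.

```python
import statistics

# Hypothetical raw records: sensor readings paired with a text label.
# Real-world data is far messier; the point is that most of the code
# below is preparation, not learning.
raw = [
    ("3.1", "hot"), ("2.9", "hot"), ("  3.3 ", "HOT"),
    ("0.9", "cold"), ("1.1", "cold"), ("bad", "cold"), ("1.0", " Cold"),
]

# Data preparation: normalize labels, strip whitespace, drop rows
# that cannot be parsed.
clean = []
for value, label in raw:
    try:
        clean.append((float(value), label.strip().lower()))
    except ValueError:
        continue  # discard malformed readings

# "Training": one mean per class -- a nearest-centroid model.
centroids = {
    label: statistics.mean(v for v, l in clean if l == label)
    for label in {l for _, l in clean}
}

def predict(x):
    # Classify by whichever class centroid is closest.
    return min(centroids, key=lambda label: abs(x - centroids[label]))

print(predict(3.0))  # -> hot
print(predict(1.2))  # -> cold
```

With a commoditized algorithm like this, the quality of the cleaned data, not the model code, determines the result.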
Then, there are the all-important platform questions that need to be settled. In theory, CIOs can deploy AI workloads anywhere in the cloud, as providers such as Amazon, Google and Microsoft, to name just a few, offer near-bare-metal GPU machines for the most demanding problems. But conference speakers stressed that, in reality, CIOs must carefully analyze their needs and objectives before making a decision.
There are a number of deep learning frameworks, but most are focused on academic research. Google’s TensorFlow is perhaps the most mature from a production standpoint, but it still has limitations, AI experts noted at the conference.
Eli David, CTO of Deep Instinct, a startup based in Tel Aviv that applies deep learning to cybersecurity, said TensorFlow is a good choice when implementing specific kinds of well-defined workloads like image recognition or speech recognition.
But he cautioned it requires heavy customization for seemingly simple changes like analyzing circular, rather than rectangular, images. “You can do high-level things with the building blocks, but the moment you want to do something a bit different, you cannot do that easily,” David said.
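A rough illustration of what David means: the convolution building block at the heart of image-recognition networks bakes a rectangular-grid assumption into its sliding window. The from-scratch sketch below is not Deep Instinct’s or TensorFlow’s code, just a minimal picture of why non-rectangular data doesn’t fit without heavy customization.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation) over a
    rectangular grid. The nested sliding window assumes the input is
    a dense rectangle of rows and columns -- exactly the assumption
    that makes building blocks like this easy to stack, and circular
    or irregularly shaped data hard to feed in."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(
                image[r + i][c + j] * kernel[i][j]
                for i in range(kh) for j in range(kw)
            ))
        out.append(row)
    return out

image = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]
edge = [[1, -1],
        [1, -1]]  # a toy vertical-edge kernel
print(conv2d(image, edge))  # -> [[-2, -2], [-2, -2]]
```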
The machine learning platform that Deep Instinct built to improve the detection of cyberthreats by analyzing infrastructure data, for example, was designed to ingest a number of data types that are not well-suited to TensorFlow or existing cloud AI services. As a result, the company built its own deep learning systems on private infrastructure, rather than running it in the cloud.
“I talk to many CIOs that do machine learning in a lab, but have problems in production, because of the inherent inefficiencies in TensorFlow,” David said. He said his team also encountered production issues with implementing deep learning inference algorithms based on TensorFlow on devices with limited memory that require dependencies on external libraries. As more deep learning frameworks are designed for production, rather than just for research environments, he said he expects providers will address these issues.
Separate training from deployment
It is also important for CIOs to make a separation between training and deployment of deep learning algorithms, said Evan Sparks, CEO of San Francisco-based Determined AI, a service for training and deploying deep learning models. The training side often benefits from the latest and fastest GPUs. Deployments are another matter. “I pushed back on the assumption that deep learning training has to happen in the cloud. A lot of people we talk to eventually realize that cloud GPUs are five to 10 times more expensive than buying them on premise,” Sparks said.
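One way to picture the separation Sparks describes: training produces a small parameter artifact, and the deployment side only ever loads that artifact. In the toy sketch below, a least-squares line fit stands in for an expensive deep learning job, and the data points are invented.

```python
import json

# --- Training side (this is where fast GPUs would matter for a
# real deep network). Hypothetical data: fit y = w*x + b.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
b = my - w * mx

# Export only the learned parameters: the artifact crossing the
# training/deployment boundary is small and self-contained.
artifact = json.dumps({"w": w, "b": b})

# --- Deployment side (web service, mobile device, car). It loads
# the artifact and needs no training code, training data or GPU.
params = json.loads(artifact)

def predict(x):
    return params["w"] * x + params["b"]

print(predict(4.0))  # -> 9.0
```

Because the two sides share only the serialized parameters, each can run on whatever hardware suits it, the point Sparks makes about cloud GPUs versus on-premises gear.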
Deployment targets can include web services, mobile devices or autonomous cars. The latter can have critical power, processing-efficiency and latency constraints, and might not be able to depend on a network. “I think when you see friction when moving from research to deployment, it is as much about the researchers not designing for deployment as limitations in the tools,” Sparks said.
Artificial intelligence can offer enterprises a significant competitive advantage for some strategic applications. But enterprise IT shops will also require DevOps infrastructure automation to keep up with frequent iterations.
Most enterprise shops won’t host AI apps in-house, but those that do will turn to sophisticated app-level automation techniques to manage IT infrastructure. And any enterprise that wants to inject AI into its apps will require rapid application development and deployment — a process early practitioners call “DevOps on steroids.”
“When you’re developing your models, there’s a rapid iteration process,” said Michael Bishop, CTO of Alpha Vertex, a fintech startup in New York that specializes in AI data analysis of equities markets. “It’s DevOps on steroids because you’re trying to move quickly, and you may have thousands of features you’re trying to factor in and explore.”
DevOps principles of rapid iteration will be crucial to train AI algorithms and to make changes to applications based on the results of AI data analysis at Nationwide Mutual Insurance Co. The company, based in Columbus, Ohio, experiments with IBM’s Watson AI system to predict whether new approaches to the market will help it sell more insurance policies and to analyze data collected from monitoring devices in customers’ cars that help it set insurance rates.
“You’ve got to have APIs and microservices,” said Carmen DeArdo, technology director responsible for Nationwide’s software delivery pipeline. “You’ve got to deploy more frequently to respond to those feedback loops and the market.”
This puts greater pressure on IT ops to provide developers and data scientists with self-service access to an automated infrastructure. Nationwide relies on ChatOps for self-service, as chatbots limit how much developers switch between different interfaces for application development and infrastructure troubleshooting. ChatOps also allows developers to correct application problems before they enter a production environment.
AI apps push the limits of infrastructure automation
Enterprise IT pros who support AI apps quickly find that no human can keep up with the required rapid pace of changes to infrastructure. Moreover, large organizations must deploy many different AI algorithms against their data sets to get a good return on investment, said Michael Dobrovolsky, executive director of the machine learning practice and global development at financial services giant Morgan Stanley in New York.
“The only way to make AI profitable from an enterprise point of view is to do it at scale; we’re talking hundreds of models,” Dobrovolsky said. “They all have different lifecycles and iteration [requirements], so you need a way to deploy it and monitor it all. And that is the biggest challenge right now.”
Houghton Mifflin Harcourt, an educational book and software publisher based in Boston, has laid the groundwork for AI apps with infrastructure automation that pairs Apache Mesos for container orchestration with Apache Aurora, an open source utility that allows applications to automatically request infrastructure resources.
“Long term, the goal is to put all the workload management in the apps themselves, so that they manage all the scheduling,” said Robert Allen, director of engineering at Houghton Mifflin Harcourt. “I’m more interested in two-level scheduling [than container orchestration], and I believe managing tasks in that way is the future.”
Analysts agreed application-driven infrastructure automation will be ideal to support AI apps.
“The infrastructure framework for this will be more and more automated, and the infrastructure will handle all the data preparation and ingestion, algorithm selection, containerization, and publishing of AI capabilities into different target environments,” said James Kobielus, analyst with Wikibon.
Automated, end-to-end, continuous release cycles are a central focus for vendors, Kobielus said. Tools from companies such as Algorithmia can automate the selection of back-end hardware at the application level, as can services such as Amazon Web Services’ (AWS) SageMaker. Some new infrastructure automation tools also provide governance features such as audit trails on the development of AI algorithms and the decisions they make, which will be crucial for large enterprises.
Early AI adopters favor containers and serverless tech
Until app-based automation becomes more common, companies that work with AI apps will turn to DevOps infrastructure automation based on containers and serverless technologies.
Veritone, which provides AI apps as a service to large customers such as CBS Radio, uses Iron Functions, now the basis for Oracle’s Fn serverless product, to orchestrate containers. The company, based in Costa Mesa, Calif., evaluated AWS Lambda a few years ago, but saw Iron Functions as a more suitable combination of functions as a service and containers. With Iron Functions, containers can process more than one event at a time, and functions can attach to a specific container, rather than exist simply as snippets of code.
“If you have apps like TensorFlow or things that require libraries, like [optical character recognition], where typically you have to use Tesseract and compile C libraries, you can’t put that into functions AWS Lambda has,” said Al Brown, senior vice president of engineering for Veritone. “You need a container that has the whole environment.”
Veritone also prefers this approach to Kubernetes and Mesos, which focus on container orchestration only.
“I’ve used Kubernetes and Mesos, and they’ve provided a lot of the building blocks,” Brown said. “But functions let developers focus on code and standards and scale it without having to worry about [infrastructure].”
Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at email@example.com or follow @PariseauTT on Twitter.
Today’s innovations in technology are opening new doors for retailers. The ability to infuse data and intelligence in all areas of a business has the potential to completely reinvent retail. Here’s a visual look at the top technologies we see enabling this transformation in 2018 and beyond, and where they’ll have the greatest impact.
Microsoft researchers have created AI that can read a document and answer questions about it about as well as a person, a major milestone in the push to have search engines such as Bing and intelligent assistants such as Cortana interact with people and provide information in more natural ways, much like people communicate with each other.
A team at Microsoft Research Asia reached the human parity milestone using the Stanford Question Answering Dataset, known among researchers as SQuAD. It’s a machine reading comprehension dataset that is made up of questions about a set of Wikipedia articles.
According to the SQuAD leaderboard, on Jan. 3, Microsoft submitted a model that reached a score of 82.650 on the exact match portion. The human performance on the same set of questions and answers is 82.304. On Jan. 5, researchers with the Chinese e-commerce company Alibaba submitted a score of 82.440, also about the same as a human.
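The “exact match” portion of the score works roughly as follows. This is a simplified sketch of the normalization the public SQuAD evaluation script applies before comparing a model’s answer to the human reference answers; the example predictions and answers are invented.

```python
import re
import string

def normalize(answer):
    """Normalize an answer the way the public SQuAD evaluation
    script does (simplified): lowercase, strip punctuation, drop
    the articles a/an/the, and collapse whitespace."""
    answer = answer.lower()
    answer = "".join(ch for ch in answer if ch not in string.punctuation)
    answer = re.sub(r"\b(a|an|the)\b", " ", answer)
    return " ".join(answer.split())

def exact_match(prediction, gold_answers):
    # A prediction scores 1 if it matches any human reference answer.
    return int(any(normalize(prediction) == normalize(g)
                   for g in gold_answers))

# Hypothetical predictions paired with reference answers.
preds = [("The Eiffel Tower!", ["Eiffel Tower"]),
         ("Paris", ["London"])]
score = 100.0 * sum(exact_match(p, g) for p, g in preds) / len(preds)
print(score)  # -> 50.0
```

The leaderboard number is this percentage computed over the full test set, which is why a model at 82.650 can be compared directly with human performance at 82.304.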
The two companies are currently tied for first place on the SQuAD leaderboard, which lists the results of research organizations’ efforts.
Microsoft has made a significant investment in machine reading comprehension as part of its effort to create more technology that people can interact with in simple, intuitive ways. For example, instead of typing in a search query and getting a list of links, Microsoft’s Bing search engine is moving toward efforts to provide people with more plainspoken answers, or with multiple sources of information on a topic that is more complex or controversial.
With machine reading comprehension, researchers say computers also would be able to quickly parse through information found in books and documents and provide people with the information they need most in an easily understandable way.
That would let drivers more easily find the answer they need in a dense car manual, saving time and effort in tense or difficult situations.
These tools also could let doctors, lawyers and other experts more quickly get through the drudgery of reading large documents for specific medical findings or rarefied legal precedent. The technology would augment their work, leaving them more time to apply their knowledge to treating patients or formulating legal opinions.
Microsoft is already applying earlier versions of the models that were submitted for the SQuAD dataset leaderboard in its Bing search engine, and the company is working on applying it to more complex problems.
For example, Microsoft is working on ways that a computer can answer not just an original question but also a follow-up. Let’s say you asked a system, “What year was the prime minister of Germany born?” You might want it to also understand you were still talking about the same person when you asked the follow-up question, “What city was she born in?”
It’s also looking at ways that computers can generate natural answers when that requires information from several sentences. For example, if the computer is asked, “Is John Smith a U.S. citizen?,” that information may be based on a paragraph such as, “John Smith was born in Hawaii. That state is in the U.S.”
Ming Zhou, assistant managing director of Microsoft Research Asia, said the SQuAD dataset results are an important milestone, but he noted that, overall, people are still much better than machines at comprehending the complexity and nuance of language.
“Natural language processing is still an area with lots of challenges that we all need to keep investing in and pushing forward,” Zhou said. “This milestone is just a start.”
Allison Linn is a senior writer at Microsoft. Follow her on Twitter.
Newly exposed information showed that the National Security Agency’s Ragtime intelligence gathering program was bigger than was previously thought and may have included processes targeting Americans.
Part of the cache of NSA data left exposed on an unsecured cloud server included files regarding the NSA Ragtime intelligence gathering operations. Before this leak, there were four known variants of Ragtime, the most well-known of which was Ragtime P, revealed by Edward Snowden, which authorized the bulk collection of mobile phone metadata.
The exposed data was found by Chris Vickery, director of cyber risk research at UpGuard, and the new NSA Ragtime information was first reported by ZDNet. According to ZDNet, the leaked data mentioned 11 different variants of the Ragtime program, including Ragtime USP.
This raised concern because “USP” is a term known in the intelligence community to mean “U.S. person.” Targeting U.S. citizens and permanent residents in surveillance programs is illegal, but as in the case of Ragtime P, the NSA has contended it “incidentally” collected information on Americans as part of operations targeting foreign nationals.
As yet, it is unclear what the NSA Ragtime USP program entailed or what the exposed data repository included.
An UpGuard spokesperson said, “Within the repository was data that mentioned the four known Ragtime programs, including Ragtime P, which is known to target Americans, and seven previously unknown programs, including one called USP. We have no evidence beyond this, as far as I know, about Ragtime.”
NSA Ragtime data collection and storage
Rebecca Herold, CEO of Privacy Professor, said it is possible the NSA targeted Americans, but it could be nothing more than the repository of data “incidentally” collected in other operations.
“While the stated purpose is to capture the communications of foreign nationals, the reality is that individuals who engage, or are brought into a conversation by others, are now subject to having their communications also collected, monitored and analyzed,” Herold told SearchSecurity. “So while there are different versions of Ragtime described, and only one or two that describes U.S. citizens’ and residents’ data being involved, the reality is that, based on the descriptions, all of the versions of Ragtime could easily involve U.S. residents’ and citizens’ data. This incidental collection is a result of how the Ragtime versions are publicly described as being engineered.”
The NSA Ragtime P metadata collection was ruled illegal by U.S. courts, but intelligence agencies were allowed to keep the data already acquired. Herold added that “another problem that has never been addressed through these surveillance programs is data retention.” And, Herold said, the recent data exposures by government agencies should lead to revisiting that decision.
“Only entities that have accountability for implementing strong security controls, and establishing effective privacy controls, should be allowed to hold such gigantic repositories that contain such large amounts of sensitive and privacy-impacting data,” Herold said. “This would likely need to be an objective, validated, and non-partisan entity, with ongoing audit oversight. The NSA has not demonstrated any of these accountabilities or capabilities to date, and the majority of government lawmakers have long enabled the NSA’s lack of security and privacy controls.”
Artificial intelligence is the new electricity, said deep learning pioneer Andrew Ng. Just as electricity transformed every major industry a century ago, AI will give the world a major jolt. Eventually.
For now, 99% of the economic value created by AI comes from supervised learning systems, according to Ng. These algorithms require human teachers and tremendous amounts of data to learn. It’s a laborious, but proven process.
AI algorithms, for example, can now recognize images of cats, although they required thousands of labeled images of cats to do so; and they can understand what someone is saying, although leading speech recognition systems needed 50,000 hours of speech — and their transcripts — to do so.
Ng’s point is that data is the competitive differentiator for what AI can do today — not algorithms, which, once trained, can be copied.
“There’s so much open source, word gets out quickly, and it’s not that hard for most organizations to figure out what algorithms organizations are using,” said Ng, an AI thought leader and an adjunct professor of computer science at Stanford University, at the recent EmTech conference in Cambridge, Mass.
His presentation gave attendees a look at the state of the AI era, as well as the four characteristics he believes will be a part of every AI company, which includes a revamp of job descriptions.
Positive feedback loop
So data is vital in today’s AI era, but companies don’t need to be a Google or a Facebook to reap the benefits of AI. All they need is enough data upfront to get a project off the ground, Ng said. That starter data will attract customers who, in turn, will create more data for the product.
“This results in a positive feedback loop. So, after a period of time, you might have enough data yourself to have a defensible business,” said Ng.
A couple of his students at Stanford did just that when they launched Blue River Technology, an ag-tech startup that combines computer vision, robotics and machine learning for field management. The co-founders started with lettuce, collecting images and putting together enough data to get lettuce farmers on board, according to Ng. Today, he speculated, they likely have the biggest data asset of lettuce in the world.
“And this actually makes their business, in my opinion, pretty defensible because even the global giant tech companies, as far as I know, do not have this particular data asset, which makes their business at least challenging for the very large tech companies to enter,” he said.
Turns out, that data asset is actually worth hundreds of millions: John Deere acquired Blue River for $300 million in September.
“Data accumulation is one example of how I think corporate strategy is changing in the AI era, and in the deep learning era,” he said.
Four characteristics of an AI company
While it’s too soon to tell what successful AI companies will look like, Ng suggested another corporate disruptor might provide some insight: the internet.
One of the lessons Ng learned with the rise of the internet was that companies need more than a website to be an internet company. The same, he argued, holds true for AI companies.
“If you take a traditional tech company and add a bunch of deep learning or machine learning or neural networks to it, that does not make it an AI company,” he said.
Internet companies are architected to take advantage of internet capabilities, such as A/B testing, short cycle times to ship products, and decision-making that’s pushed down to the engineer and product level, according to Ng.
AI companies will need to be architected to do the same in relation to AI. What A/B testing’s equivalent will be for AI companies is still unknown, but Ng shared four thoughts on characteristics he expects AI companies will share.
- Strategic data acquisition. This is a complex process, requiring companies to play what Ng called multiyear chess games, acquiring important data from one resource that’s monetized elsewhere. “When I decide to launch a product, one of the criteria I use is, can we plan a path for data acquisition that results in a defensible business?” Ng said.
- Unified data warehouse. This likely comes as no surprise to CIOs, who have been advocates of the centralized data warehouse for years. But for AI companies that need to combine data from multiple sources, data silos — and the bureaucracy that comes with them — can be an AI project killer. Companies should get to work on this now, as “this is often a multiyear exercise for companies to implement,” Ng said.
- New job descriptions. AI products like chatbots can’t be sketched out the way apps can, and so product managers will have to communicate differently with engineers. Ng, for one, is training product managers to give product specifications.
- Centralized AI team. AI talent is scarce, so companies should consider building a single AI team that can then support business units across the organization. “We’ve seen this pattern before with the rise of mobile,” Ng said. “Maybe around 2011, none of us could hire enough mobile engineers.” Once the talent numbers caught up with demand, companies embedded mobile talent into individual business units. The same will likely play out in the AI era, Ng said.
Artificial intelligence isn’t just for the law-abiding. Machine learning algorithms are as freely available to cybercriminals and state-sponsored actors as they are to financial institutions, retailers and insurance companies.
“When we look especially at terrorist groups who are exploiting social media, [and] when we look at state-sponsored efforts to influence and manipulate, they’re using really powerful algorithms that are at everyone’s disposal,” said Yasmin Green, director of research and development at Jigsaw, a technology incubator launched by Google to try to solve geopolitical problems.
Criminals need not develop new algorithms or new AI, Green said at the recent EmTech conference in Cambridge, Mass. They can and are exploiting what is already out there to manipulate public opinion.
The good news about weaponized AI? The tools to combat these nefarious efforts are also advancing. One promising lead, according to Green, is bad actors don’t exhibit the same kinds of online behavior that typical users do. And security experts are hoping to exploit the behavioral “tells” they’re seeing — with the help of machines, of course.
Variations on weaponized AI
Cybercriminals and internet trolls are adept at using AI to simulate human behavior and trick systems or peddle propaganda. The online test used to tell humans from machines, CAPTCHA, is continuously bombarded by bad guys trying to trick it.
In an effort to stay ahead of cybercriminals, CAPTCHA, which stands for Completely Automated Public Turing Test to Tell Computers and Humans Apart, has had to evolve, creating some unanticipated consequences, according to Shuman Ghosemajumder, CTO at Shape Security in Mountain View, Calif. Recent data from Google shows that humans solve CAPTCHAs just 33% of the time. That’s compared to state-of-the-art machine learning optical character recognition technology that has a solve rate of 99.8%.
“This is doing exactly the opposite of what CAPTCHA was originally intended to do,” Ghosemajumder said. “And that has now been weaponized.”
He said advances in computer vision technology have led to weaponized AI services such as Death By CAPTCHA, an API plug-in that promises to solve 1,000 CAPTCHAs for $1.39. “And there are, of course, discounts for gold members of the service.”
A more aggressive attack is credential stuffing, where cybercriminals use stolen usernames and passwords from third-party sources to gain access to accounts.
Sony was the victim of a credential-stuffing attack in 2011, when cybercriminals culled a list of 15 million credentials stolen from other sites and used a botnet to test them against Sony’s login page. Today, an outfit with the good-guy-sounding name of Sentry MBA (the MBA stands for Modded By Artists) provides cybercriminals with a user interface and automation technology, making it easy to test whether stolen usernames and passwords work and even to bypass security features like CAPTCHAs.
“We see these types of attacks responsible for tremendous amounts of traffic on some of the world’s largest websites,” Ghosemajumder said. In the case of one Fortune 100 company, credential-stuffing attacks made up more than 90% of its login activity.
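That skewed login mix is itself a statistical signature: one source trying many distinct usernames, almost all of them failing. The sketch below is a hedged illustration, not any vendor’s detection logic; the log format and thresholds are invented.

```python
from collections import defaultdict

# Hypothetical login log: (source_ip, username, succeeded)
attempts = [
    ("10.0.0.5", "alice", False), ("10.0.0.5", "bob", False),
    ("10.0.0.5", "carol", False), ("10.0.0.5", "dave", False),
    ("10.0.0.5", "erin", True),
    ("192.168.1.9", "frank", False), ("192.168.1.9", "frank", True),
]

def stuffing_suspects(attempts, min_users=4, max_success_rate=0.25):
    """Flag sources that try many distinct usernames with mostly
    failures -- the signature of replaying a stolen credential list,
    as opposed to one user mistyping their own password."""
    by_ip = defaultdict(lambda: {"users": set(), "ok": 0, "total": 0})
    for ip, user, ok in attempts:
        rec = by_ip[ip]
        rec["users"].add(user)
        rec["ok"] += ok
        rec["total"] += 1
    return [
        ip for ip, rec in by_ip.items()
        if len(rec["users"]) >= min_users
        and rec["ok"] / rec["total"] <= max_success_rate
    ]

print(stuffing_suspects(attempts))  # -> ['10.0.0.5']
```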
Behavioral tells in weaponized AI
Ghosemajumder’s firm Shape Security is now using AI to detect credential-stuffing efforts. One method is to use machine learning to identify behavioral characteristics that are typical of cybercriminal exploits.
When cybercriminals simulate human interactions, they will, for example, move the mouse from the username field to the password field quickly and efficiently — in an unhumanlike manner. “Human beings are not capable of doing things like moving a mouse in a straight line — no matter how hard they try,” Ghosemajumder said.
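Ghosemajumder’s straight-line tell can be checked geometrically: measure how far a mouse path ever strays from the straight line joining its endpoints. This is an illustrative sketch, not Shape Security’s method; the coordinates are invented and the threshold is an assumption.

```python
import math

def max_deviation(path):
    """Maximum perpendicular distance of any point on a mouse path
    from the straight line between its endpoints. Near-zero values
    are the unhumanlike straight-line motion described above."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    return max(
        abs(dy * (x - x0) - dx * (y - y0)) / length
        for x, y in path
    )

# Invented paths from the username field to the password field.
robotic = [(0, 0), (25, 25), (50, 50), (75, 75), (100, 100)]
human = [(0, 0), (30, 18), (55, 61), (80, 70), (100, 100)]

THRESHOLD = 2.0  # pixels; a tunable assumption, not a published value
print(max_deviation(robotic) < THRESHOLD)  # -> True  (bot-like)
print(max_deviation(human) < THRESHOLD)    # -> False
```

A production system would combine many such behavioral features in a machine learning model rather than rely on one geometric test.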
Jigsaw’s Green said her team is also looking for “technical markers” that can distinguish truly organic campaigns from coordinated ones. She described state-sponsored actors who peddle propaganda and attempt to spread misinformation through what she called “seed-and-fertilizer campaigns.”
“The goal of these state-sponsored campaigns is to plant a seed in social conversations and to have the unwitting masses fertilize that seed for it to actually become an organic conversation,” she said.
“There are a few dimensions that we think are promising to look at. One is the temporal dimension,” she said.
Looking across the internet, Jigsaw began to understand that coordinated attacks tend to move together, last longer than organic campaigns and pause as state-sponsored actors wait for instructions on what to do. “You’ll see a little delay before they act,” she said.
Other dimensions include network shape and semantics. State-sponsored actors tend to be more tightly linked together than communities within organic campaigns, and they tend to use “irregularly similar” language in their messaging.
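The “irregularly similar” language signal can be approximated with simple set overlap: accounts in a coordinated campaign reuse near-identical phrasing, so their messages score far higher pairwise similarity than varied organic discussion. The sketch below uses Jaccard similarity over word sets and invented messages; it is an illustration, not Jigsaw’s technique.

```python
from itertools import combinations

def jaccard(a, b):
    """Word-set overlap between two messages (0 = disjoint, 1 = identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def mean_pairwise_similarity(messages):
    pairs = list(combinations(messages, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical messages: the coordinated set recycles one template,
# while the organic set discusses the same topic in varied words.
coordinated = [
    "share this shocking truth now",
    "share this shocking truth today",
    "share this shocking truth now friends",
]
organic = [
    "has anyone read the article about this",
    "i am not sure the claims hold up",
    "the sourcing seems thin to me honestly",
]

print(mean_pairwise_similarity(coordinated) >
      mean_pairwise_similarity(organic))  # -> True
```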
The big question is whether behavioral tells, identified by machines and combined with automated detection, can effectively identify state-sponsored campaigns. No doubt, time will tell.