Amid evolving digital threats, an innovative IoT security solution, integrated threat intelligence and advanced protection in Microsoft 365 help simplify cybersecurity for businesses
SAN FRANCISCO — April 16, 2018 — At a news conference on Monday, Microsoft Corp. announced several new intelligent security tools and technologies to help enterprises more easily secure their data and networks against today’s biggest threats as well as address emerging threats aimed at IoT and edge devices. These new solutions build on Microsoft’s longstanding approach to delivering innovation that customers and partners can build upon to strengthen the broader ecosystem against cyberattacks from the cloud to the edge.
“As last year’s devastating cyberattacks demonstrated, security threats are evolving and becoming even more serious,” said Brad Smith, president of Microsoft. “The tech sector’s innovations need to accelerate to outpace security threats. Today’s steps bring important security advances not just to the cloud, but to the billions of new devices that are working on the edge of the world’s computer networks.”
Securing a new generation of connected devices: announcing Azure Sphere
Microsoft is harnessing the power of the intelligent cloud to address emerging threats against a new class of connected devices, those relying on a chip the size of a thumbnail called a microcontroller unit (MCU). MCU-powered devices are already the most populous area of computing, with roughly 9 billion new devices every year. They are found in everything from toys and household appliances to industrial equipment — and attackers are starting to target them. To bring security to this next generation of connected devices, Microsoft is introducing Azure Sphere, the industry’s first holistic platform for creating highly secured, connected MCU devices on the intelligent edge. Azure Sphere features an entirely new class of MCUs with more than five times the power of legacy MCUs, an OS custom built for IoT security, and a turnkey cloud security service that guards every Azure Sphere device. With Azure Sphere, Microsoft extends the boundaries of the intelligent edge to power and secure an entirely new category of devices.
“As our homes become more connected, we place significant value on the security of connected devices, so we can focus on continuing to deliver an exceptional customer experience,” said Brian Jones, director of Product Strategy and Marketing at Sub-Zero Group Inc. “Microsoft’s approach with Azure Sphere is unique in that it addresses security holistically at every layer.”
Microsoft 365 Intelligent Security Solutions: Simplifying Security
As security threats become more complex, companies are increasingly finding that the intelligence and threat protection tools they need to remain a step ahead of attackers are in the cloud. Today, Microsoft introduced several new intelligent security features for its Microsoft 365 commercial cloud offering designed to help IT and security professionals simplify how they manage security across their enterprises:
Advanced tools that make it easier to prevent threats before they happen
- To help teams stay prepared and ahead of threats, Microsoft today released Microsoft Secure Score and Attack Simulator. Secure Score makes it easier for organizations to determine which controls to enable to help protect users, data and devices by quickly assessing readiness and providing an overall security benchmark score. It will also let organizations compare their results to those with similar profiles using built-in machine learning. Attack Simulator, a part of Office 365 Threat Intelligence, lets security teams run simulated attacks — including mock ransomware and phishing campaigns — to event-test their employees’ responses and tune configurations accordingly.
Automated threat detection and remediation to free up security operations teams
- With the latest Windows 10 update, now in preview, Windows Defender Advanced Threat Protection (ATP) works across other parts of Microsoft 365 to provide threat protection and remediation spanning Office 365, Windows and Azure. Also available today in preview with the upcoming Windows 10 update are new automated investigation and remediation capabilities in Windows Defender ATP, which use artificial intelligence and machine learning to detect and respond to threats on endpoints within seconds, at scale.
- Conditional Access provides real-time risk assessments to help ensure that access to sensitive data is appropriately controlled, without getting in the way of users’ productivity. Microsoft 365 is now adding the device risk level set by Windows Defender ATP to Conditional Access in preview to help ensure that compromised devices can’t access sensitive business data.
Stronger partnerships to give customers more integrated solutions
- The intelligence data used to quickly detect and respond to threats improves as more relevant signals are added. Machine learning tools are only as good as the data they receive. Microsoft’s security products are informed by the trillions of diverse signals feeding into the Microsoft Intelligent Security Graph. Today, Microsoft announced a preview of a new security API for connecting Microsoft Intelligent Security Graph-enabled products as well as intelligence from solutions built by customers and technology partners to greatly enhance the fidelity of intelligence.
Most security tools report an attack from a single limited perspective, offering insight into one piece of a potentially larger threat. By connecting individual tools to the Intelligent Security Graph, security teams get new perspectives and more meaningful patterns of data to speed up threat investigation and remediation. The new API is in early testing with a select group of cybersecurity industry leaders that are collaborating with Microsoft to shape its development. The group, which includes Anomali, Palo Alto Networks and PwC, joined Microsoft today to share their own early exploration of the API and how it may improve each company’s ability to protect their mutual customers.
- Microsoft also is announcing a new Microsoft Intelligent Security Association for security technology partners so they can benefit from, and contribute to, the Intelligent Security Graph and Microsoft security products. Members of the association will be able to create more integrated solutions for customers that provide greater protection and detect attacks more quickly. Palo Alto Networks and Anomali join PwC and other existing partners as founding members of the new association.
Microsoft is partnering with customers through their digital transformation by making it easier for them to help keep assets secure from the cloud to the edge.
More information on Microsoft’s security announcements can be found at the Microsoft Security News site.
Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.
For more information, press only:
Microsoft Media Relations, WE Communications, (425) 638-7777,
Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.
CIOs are starting to rethink the infrastructure stack required to support artificial intelligence technologies, according to experts at the Deep Learning Summit in San Francisco. In the past, enterprise architectures coalesced around efficient technology stacks for business processes supported by mainframes, then by minicomputers, client servers, the internet and now cloud computing. But every level of infrastructure is now up for grabs in the rush to take advantage of AI.
“There were well-defined winners that became the default stack around questions like how to run Oracle and what PDP was used for,” said Ashmeet Sidana, founder and managing partner of Engineering Capital, referring to the Programmed Data Processor, an older model of minicomputer.
“Now, for the first time, we are seeing that every layer of that stack is up for grabs, from the CPU and GPU all the way up to which frameworks should be used and where to get data from,” said Sidana, who serves as chief engineer of the venture capital firm, based in Menlo Park, Calif.
The stakes are high for building an AI infrastructure — startups, as well as legacy enterprises, could achieve huge advantages by innovating at every level of this emerging stack for AI, according to speakers at the conference.
But the job won’t be easy for CIOs faced with a fast-evolving field where the vendor pecking order is not yet settled, and their technology decisions will have a dramatic impact on software development. An AI infrastructure demands a new development model built around a statistical, rather than deterministic, process. On the vendor front, Google’s TensorFlow technology has emerged as an early winner, but it faces production and customization challenges. Making matters more complicated, CIOs also must decide whether to deploy AI infrastructure on private hardware or use the cloud.
New skills required for AI infrastructure
Traditional application development approaches build deterministic apps with well-defined best practices. But AI involves an inherently statistical process. “There is a discomfort in moving from one realm to the other,” Sidana said. Acknowledging this shift and understanding its ramifications will be critical to bringing the enterprise into the machine learning and AI space, he said.
The biggest ramification is also AI’s dirty little secret: The types of AI that will prove most useful to the enterprise, machine learning and especially deep learning approaches, work great only with great data — both quantity and quality. With algorithms becoming more commoditized, what used to be AI’s major rate-limiting feature — the complexity of developing the software algorithms — is being supplanted by a new hurdle: the complexity of data preparation. “When we have perfect AI algorithms, all the software engineers will become data-preparation engineers,” Sidana said.
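To make Sidana’s point concrete, a toy sketch of what “data-preparation engineering” looks like in practice — the records, rules, and thresholds below are invented for illustration, not taken from any system described in the article:

```python
# Toy data-preparation pipeline (hypothetical records, invented for illustration).
# Typical steps before any learning happens: drop malformed rows, impute missing
# values with the column mean, then min-max normalize so features are comparable.

raw = [
    {"age": 34, "income": 72000},
    {"age": None, "income": 51000},   # missing value -> impute
    {"age": 29, "income": None},      # missing value -> impute
    {"age": "bad", "income": 60000},  # malformed -> drop
]

def is_valid(row):
    # A row is valid if every field is numeric or explicitly missing.
    return all(isinstance(v, (int, float)) or v is None for v in row.values())

def prepare(rows):
    rows = [r for r in rows if is_valid(r)]
    cols = rows[0].keys()
    # Mean-impute each column from the values that are present.
    means = {c: sum(r[c] for r in rows if r[c] is not None) /
                sum(1 for r in rows if r[c] is not None) for c in cols}
    rows = [{c: (r[c] if r[c] is not None else means[c]) for c in cols}
            for r in rows]
    # Min-max normalize each column into [0, 1].
    lo = {c: min(r[c] for r in rows) for c in cols}
    hi = {c: max(r[c] for r in rows) for c in cols}
    return [{c: (r[c] - lo[c]) / (hi[c] - lo[c] or 1) for c in cols}
            for r in rows]

clean = prepare(raw)
```

Even in this tiny example, most of the code is cleaning and reshaping rather than modeling — which is the shift Sidana predicts for software engineers.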
Then, there are the all-important platform questions that need to be settled. In theory, CIOs can deploy AI workloads anywhere in the cloud, as cloud providers like Amazon, Google and Microsoft, to name just some, can provide almost bare-metal GPU machines for the most demanding problems. But conference speakers stressed the reality requires CIOs to carefully analyze their needs and objectives before making a decision.
There are a number of deep learning frameworks, but most are focused on academic research. Google’s TensorFlow is perhaps the most mature framework from a production standpoint, but it still has limitations, AI experts noted at the conference.
Eli David, CTO of Deep Instinct, a startup based in Tel Aviv that applies deep learning to cybersecurity, said TensorFlow is a good choice when implementing specific kinds of well-defined workloads like image recognition or speech recognition.
But he cautioned it requires heavy customization for seemingly simple changes like analyzing circular, rather than rectangular, images. “You can do high-level things with the building blocks, but the moment you want to do something a bit different, you cannot do that easily,” David said.
The machine learning platform that Deep Instinct built to improve the detection of cyberthreats by analyzing infrastructure data, for example, was designed to ingest a number of data types that are not well suited to TensorFlow or existing cloud AI services. As a result, the company built its own deep learning systems on private infrastructure rather than running them in the cloud.
“I talk to many CIOs that do machine learning in a lab, but have problems in production, because of the inherent inefficiencies in TensorFlow,” David said. He said his team also encountered production issues implementing TensorFlow-based deep learning inference algorithms on devices with limited memory, which require dependencies on external libraries. As more deep learning frameworks are designed for production rather than just for research environments, he said, he expects providers will address these issues.
Separate training from deployment
It is also important for CIOs to make a separation between training and deployment of deep learning algorithms, said Evan Sparks, CEO of San Francisco-based Determined AI, a service for training and deploying deep learning models. The training side often benefits from the latest and fastest GPUs. Deployments are another matter. “I pushed back on the assumption that deep learning training has to happen in the cloud. A lot of people we talk to eventually realize that cloud GPUs are five to 10 times more expensive than buying them on premise,” Sparks said.
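Sparks’ cost claim is easy to sanity-check with back-of-the-envelope arithmetic. The prices below are invented placeholders, not figures from the article, but the structure of the calculation — hourly rental versus a one-time purchase plus operating cost — is what matters:

```python
# Back-of-the-envelope cloud-vs-on-premises GPU cost comparison.
# All prices are hypothetical placeholders for illustration.

CLOUD_RATE = 3.00      # $ per GPU-hour rented in the cloud (assumed)
ONPREM_PRICE = 9000.0  # one-time purchase price of a comparable GPU (assumed)
ONPREM_OPEX = 0.30     # $ per GPU-hour for power/cooling/ops (assumed)

def breakeven_hours(cloud_rate=CLOUD_RATE, price=ONPREM_PRICE, opex=ONPREM_OPEX):
    """Hours of utilization after which buying beats renting."""
    return price / (cloud_rate - opex)

def cost_ratio(hours, cloud_rate=CLOUD_RATE, price=ONPREM_PRICE, opex=ONPREM_OPEX):
    """Cloud cost divided by on-premises cost at a given utilization."""
    return (cloud_rate * hours) / (price + opex * hours)

hours = breakeven_hours()  # roughly 3,333 GPU-hours under these assumptions
```

Under these placeholder numbers, a training cluster kept busy around the clock for three years ends up roughly 4-5 times more expensive in the cloud — the same order of magnitude as the 5x-10x gap Sparks cites, and the ratio grows with utilization.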
Deployment targets can include web services, mobile devices or autonomous cars. The latter may face critical power, processing-efficiency and latency constraints, and might not be able to depend on a network. “I think when you see friction when moving from research to deployment, it is as much about the researchers not designing for deployment as limitations in the tools,” Sparks said.
Artificial intelligence can offer enterprises a significant competitive advantage for some strategic applications. But enterprise IT shops will also require DevOps infrastructure automation to keep up with frequent iterations.
Most enterprise shops won’t host AI apps in-house, but those that do will turn to sophisticated app-level automation techniques to manage IT infrastructure. And any enterprise that wants to inject AI into its apps will require rapid application development and deployment — a process early practitioners call “DevOps on steroids.”
“When you’re developing your models, there’s a rapid iteration process,” said Michael Bishop, CTO of Alpha Vertex, a fintech startup in New York that specializes in AI data analysis of equities markets. “It’s DevOps on steroids because you’re trying to move quickly, and you may have thousands of features you’re trying to factor in and explore.”
DevOps principles of rapid iteration will be crucial to train AI algorithms and to make changes to applications based on the results of AI data analysis at Nationwide Mutual Insurance Co. The company, based in Columbus, Ohio, experiments with IBM’s Watson AI system to predict whether new approaches to the market will help it sell more insurance policies and to analyze data collected from monitoring devices in customers’ cars that help it set insurance rates.
“You’ve got to have APIs and microservices,” said Carmen DeArdo, technology director responsible for Nationwide’s software delivery pipeline. “You’ve got to deploy more frequently to respond to those feedback loops and the market.”
This puts greater pressure on IT ops to provide developers and data scientists with self-service access to an automated infrastructure. Nationwide relies on ChatOps for self-service, as chatbots limit how much developers switch between different interfaces for application development and infrastructure troubleshooting. ChatOps also allows developers to correct application problems before they enter a production environment.
AI apps push the limits of infrastructure automation
Enterprise IT pros who support AI apps quickly find that no human can keep up with the required rapid pace of changes to infrastructure. Moreover, large organizations must deploy many different AI algorithms against their data sets to get a good return on investment, said Michael Dobrovolsky, executive director of the machine learning practice and global development at financial services giant Morgan Stanley in New York.
“The only way to make AI profitable from an enterprise point of view is to do it at scale; we’re talking hundreds of models,” Dobrovolsky said. “They all have different lifecycles and iteration [requirements], so you need a way to deploy it and monitor it all. And that is the biggest challenge right now.”
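A minimal sketch of the deploy-and-monitor problem Dobrovolsky describes: a toy in-memory registry (all model names and thresholds are invented for illustration) that tracks each deployed model’s version and flags the ones whose monitored accuracy has drifted enough to need retraining.

```python
# Toy in-memory model registry (names and thresholds invented for illustration).
# Tracks version and deployment time per model, records a monitored metric,
# and surfaces the models due for retraining.

from datetime import datetime, timezone

class ModelRegistry:
    def __init__(self, retrain_threshold=0.90):
        self.retrain_threshold = retrain_threshold
        self.models = {}  # name -> record

    def deploy(self, name, version):
        self.models[name] = {
            "version": version,
            "deployed_at": datetime.now(timezone.utc),
            "accuracy": None,  # filled in by monitoring
        }

    def record_metric(self, name, accuracy):
        self.models[name]["accuracy"] = accuracy

    def needs_retraining(self):
        """Names of models whose latest accuracy fell below the threshold."""
        return sorted(
            name for name, rec in self.models.items()
            if rec["accuracy"] is not None
            and rec["accuracy"] < self.retrain_threshold
        )

registry = ModelRegistry()
registry.deploy("fraud-scorer", "1.4.0")
registry.deploy("churn-predictor", "2.0.1")
registry.record_metric("fraud-scorer", 0.95)
registry.record_metric("churn-predictor", 0.82)
stale = registry.needs_retraining()  # ["churn-predictor"]
```

At the scale Dobrovolsky describes — hundreds of models, each with its own lifecycle — this bookkeeping is exactly what dedicated MLOps tooling automates, but the shape of the problem is the same.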
Houghton Mifflin Harcourt, an educational book and software publisher based in Boston, has laid the groundwork for AI apps with infrastructure automation that pairs Apache Mesos for container orchestration with Apache Aurora, an open source utility that allows applications to automatically request infrastructure resources.
“Long term, the goal is to put all the workload management in the apps themselves, so that they manage all the scheduling,” said Robert Allen, director of engineering at Houghton Mifflin Harcourt. “I’m more interested in two-level scheduling [than container orchestration], and I believe managing tasks in that way is the future.”
Analysts agreed application-driven infrastructure automation will be ideal to support AI apps.
“The infrastructure framework for this will be more and more automated, and the infrastructure will handle all the data preparation and ingestion, algorithm selection, containerization, and publishing of AI capabilities into different target environments,” said James Kobielus, analyst with Wikibon.
Automated, end-to-end, continuous release cycles are a central focus for vendors, Kobielus said. Tools from companies such as Algorithmia can automate the selection of back-end hardware at the application level, as can services such as Amazon Web Services’ (AWS) SageMaker. Some new infrastructure automation tools also provide governance features such as audit trails on the development of AI algorithms and the decisions they make, which will be crucial for large enterprises.
Early AI adopters favor containers and serverless tech
Until app-based automation becomes more common, companies that work with AI apps will turn to DevOps infrastructure automation based on containers and serverless technologies.
Veritone, which provides AI apps as a service to large customers such as CBS Radio, uses Iron Functions, now the basis for Oracle’s Fn serverless product, to orchestrate containers. The company, based in Costa Mesa, Calif., evaluated AWS Lambda a few years ago, but saw Iron Functions as a more suitable combination of functions as a service and containers. With Iron Functions, containers can process more than one event at a time, and functions can attach to a specific container, rather than exist simply as snippets of code.
“If you have apps like TensorFlow or things that require libraries, like [optical character recognition], where typically you have to use Tesseract and compile C libraries, you can’t put that into functions AWS Lambda has,” said Al Brown, senior vice president of engineering for Veritone. “You need a container that has the whole environment.”
Veritone also prefers this approach to Kubernetes and Mesos, which focus on container orchestration only.
“I’ve used Kubernetes and Mesos, and they’ve provided a lot of the building blocks,” Brown said. “But functions let developers focus on code and standards and scale it without having to worry about [infrastructure].”
Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at email@example.com or follow @PariseauTT on Twitter.
Today’s innovations in technology are opening new doors for retailers. The ability to infuse data and intelligence in all areas of a business has the potential to completely reinvent retail. Here’s a visual look at the top technologies we see enabling this transformation in 2018 and beyond, and where they’ll have the greatest impact.