
For insider threat programs, HR should provide checks and balances

Insider threats are on the rise and firms are doing more to stop them, according to a new report from Forrester Research. But it warns that insider threat programs can hurt employee engagement and productivity.

One of the ways companies are trying to curtail insider threats is by analyzing employee personal data to better detect suspicious or risky behavior. But IT security may go overboard in its collection process, security may be too stringent, and practices such as social media monitoring might “lead to eroded employee trust,” Forrester warns.

An insider threat program can turn adversarial, impacting employees in negative ways. It’s up to HR to work with IT security to provide the checks and balances, said Joseph Blankenship, vice president and research director of security and risk at Forrester.

Blankenship further discussed insider threat programs in this Q&A. His responses were edited for clarity and length.

Insider threats are increasing. In 2015, malicious insiders accounted for about 26% of internal data breaches. And in 2019, it was 48%, according to Forrester’s survey data. Why this increase?

Joseph Blankenship

Joseph Blankenship: I think it’s twofold. You have the ability for users to monetize data and move data in large quantities like they’ve never had before. The ease of moving that data — and the portability of that data — is one factor. The other big factor is we’re looking for [threats] more often. The tools are better. Whenever we see a new capability for threat detection, that’s usually the period when we see this increase [in discovered incidents].

Nonetheless, this must be a stunning finding for a lot of firms. How do they respond to it?

Blankenship: Probably like the stages of grief. We see that pattern quite a bit in security. An event happens, and we realize we are at risk for that event happening again. So now we put effort behind it. We put budget behind it, we buy technology, we build a program and things improve.

Accidental release of internal data accounted for 43% of all insider incidents. What does that say about training?

Blankenship: It’s also culture. Do employees actually understand why the [security] policy is there? Some of that is people trying to get around policies. They find that the security policy is restrictive. You see some of that when people decide to work on their own laptop and their laptop gets stolen. It’s usually people that are somewhat well-meaning, but they find that the policy is getting in their way. Those are all mistakes. Those are all policy violations.

Types of insider threats

Who is responsible in a company for ensuring that the employees understand the rules?

Blankenship: Typically it’s the CISO’s responsibility to do this kind of security education.

Is this primarily the job of the IT security department?

Blankenship: Certainly, it’s in partnership with human resources.

IT manages the internal security program, but many of the risks from an insider threat program are HR-related, such as increased turnover or difficulty hiring. The HR department’s metrics suffer if the program creates employee friction. Is that the case?

Blankenship: I don’t think that’s necessarily the case. You have to make the employee aware: ‘Hey, we’re doing this kind of monitoring because we have important customer data. We can’t afford a breach of customer trust. We’re doing this monitoring because we have intellectual property.’ Things become a lot less scary, a lot less onerous, when people understand the reasons why. If it’s too heavy-handed, if we’re doing things to either punish employees or make their jobs really difficult, it does create that adversarial relationship.

What is the best practice here? Should HR or IT spell out exactly what they do to protect company security?

Blankenship: I don’t know if you get into all the specifics of a security program, but make the employees aware. ‘We’re going to be monitoring things like email. We may be monitoring your computer usage.’  

What is HR’s role in helping the company implement these policies?


Blankenship: Because HR is the part of the company responsible for employee experience, it is very much incumbent on them to work with the security department and keep it a little bit honest. I’m sure there are a lot of security folks that would love to really turn up the dial on security policies. If you remember some years ago, the big debate was should we allow personal internet usage on company issued devices. There were lots of security reasons why we would say, ‘absolutely not.’ However, the employee experience dictated that we had to allow some of that activity, otherwise we wouldn’t be able to recruit any new employees. We really had to find the balance.

It sounds as if HR’s responsibility here is to provide some checks and balances.

Blankenship: There’s checks and balances as well as helping [IT security] to design the education program. There’s probably not a lot of security technologists that are amazing at building culture, but that is absolutely the job of good HR professionals.


Recovering from ransomware soars to the top of DR concerns

The rise of ransomware has had a significant effect on modern disaster recovery, shaping the way we protect data and plan a recovery. It does not bring the same physical destruction as a natural disaster, but the effects within an organization — and on its reputation — can be lasting.

It’s no wonder that recovering from ransomware has become such a priority in recent years.

It’s hard to imagine a time when ransomware wasn’t a threat, but while cyberattacks date back as far as the late 1980s, ransomware in particular has had a relatively recent rise in prominence. Ransomware is a type of malware attack that can be carried out in a number of ways, but generally the “ransom” part of the name comes from one of the ways attackers hope to profit from it. The victim’s data is locked, often behind encryption, and held for ransom until the attacker is paid. Assuming the attacker is telling the truth, the data will be decrypted and returned. Again, this assumes that the anonymous person or group that just stole your data is being honest.

“Just pay the ransom” is rarely the first piece of advice an expert will offer. Not only do you not know if payment will actually result in your computer being unlocked, but developments in backup and recovery have made recovering from ransomware without paying the attacker possible. While this method of cyberattack seems specially designed to make victims panic and pay up, doing so does not guarantee you’ll get your data back or won’t be asked for more money.

Disaster recovery has changed significantly in the 20 years TechTarget has been covering technology news, but the rapid rise of ransomware to the top of the potential disaster pyramid is one of the more remarkable changes to occur. According to a U.S. government report, by 2016, 4,000 ransomware attacks were occurring daily. This was a 300% increase over the previous year. Ransomware recovery has changed the disaster recovery model, and it won’t be going away any time soon. In this brief retrospective, take a look back at the major attacks that made headlines, evolving advice and warnings regarding ransomware, and how organizations are fighting back.

In the news

The appropriately named WannaCry ransomware attack began spreading in May 2017, using an exploit leaked from the National Security Agency targeting Windows computers. WannaCry is a worm, which means that it can spread without participation from the victims, unlike phishing attacks, which require action from the recipient to spread widely.


How big was the WannaCry attack? Affecting computers in as many as 150 countries, WannaCry is estimated to have caused hundreds of millions of dollars in damages. According to cyber risk modeling company Cyence, the total costs associated with the attack could be as high as $4 billion.

Rather than the price of the ransom itself, the biggest issue companies face is the cost of being down. Because so many organizations were infected with the WannaCry virus, news spread that those who paid the ransom were never given the decryption key, so most victims did not pay. However, many took a financial hit from the downtime the attack caused. Another major attack in 2017, NotPetya, cost Danish shipping giant A.P. Moller-Maersk hundreds of millions of dollars. And that’s just one victim.

In 2018, the city of Atlanta’s recovery from ransomware ended up costing more than $5 million, and shut down several city departments for five days. In the Matanuska-Susitna borough of Alaska in 2018, 120 of 150 servers were affected by ransomware, and the government workers resorted to using typewriters to stay operational. Whether it is on a global or local scale, the consequences of ransomware are clear.

Ransomware attacks had a meteoric rise in 2016.

Taking center stage

Looking back, the massive increase in ransomware attacks between 2015 and 2016 signaled when ransomware really began to take its place at the head of the data threat pack. Experts began emphasizing not only the importance of backup and data protection against attacks, but also the importance of planning for future recoveries. Depending on your DR strategy, recovering from ransomware could fit into your current plan, or you might have to start considering an overhaul.

By 2017, the ransomware threat was impossible to ignore. According to a 2018 Verizon Data Breach Report, 39% of malware attacks carried out in 2017 were ransomware, and ransomware had soared from being the fifth most common type of malware to number one.

According to the 2018 Verizon Data Breach Investigations Report, ransomware was the most prevalent type of malware attack in 2017.

Ransomware was not only becoming more prominent, but more sophisticated as well. Best practices for DR highlighted preparation for ransomware, and an emphasis on IT resiliency entered backup and recovery discussions. Protecting against ransomware became less about wondering what would happen if your organization was attacked, and more about what you would do when your organization was attacked. Ransomware recovery planning wasn’t just a good idea, it was a priority.

As a result of the recent epidemic, more organizations appear to be considering disaster recovery planning in general. As unthinkable as it may seem, many organizations have been reluctant to invest in disaster recovery, viewing it as something they might need eventually. This mindset is dangerous, and results in many companies not having a recovery plan in place until it’s too late.

Bouncing back

While ransomware attacks may feel like an inevitability — which is how companies should prepare — that doesn’t mean the end is nigh. Recovering from ransomware is possible, and with the right amount of preparation and help, it can be done.

The modern backup market is evolving in such a way that downtime is considered practically unacceptable, which bodes well for ransomware recovery. Having frequent backups available is a major element of recovering, and taking advantage of vendor offerings can give you a boost when it comes to frequent, secure backups.

Vendors such as Reduxio, Nasuni and Carbonite have developed tools aimed at ransomware recovery, and can have you back up and running without significant data loss within hours. Whether the trick is backdating, snapshots, cloud-based backup and recovery, or server-level restores, numerous tools out there can help with recovery efforts. Other vendors working in this space include Acronis, Asigra, Barracuda, Commvault, Datto, Infrascale, Quorum, Unitrends and Zerto.
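
The mechanics differ from vendor to vendor, but the underlying snapshot-and-roll-back idea is simple to sketch. The following is a toy Python illustration, not how any of the products above work internally; real tools snapshot at the block, object or VM level and keep copies immutable or off site precisely because ransomware often targets backups too:

```python
import shutil
import time
from pathlib import Path

def take_snapshot(src: Path, snapshot_root: Path) -> Path:
    """Copy src into a new timestamped directory under snapshot_root."""
    stamp = time.strftime("%Y%m%dT%H%M%S")
    dest = snapshot_root / stamp
    shutil.copytree(src, dest)
    return dest

def restore_latest(snapshot_root: Path, target: Path) -> Path:
    """Restore the newest snapshot. After a real attack you would
    instead pick the newest snapshot known to predate the infection."""
    latest = max(p for p in snapshot_root.iterdir() if p.is_dir())
    if target.exists():
        shutil.rmtree(target)
    shutil.copytree(latest, target)
    return latest
```

The point of keeping timestamped copies is that recovery becomes “restore the newest snapshot that predates the infection” rather than negotiating for a decryption key.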

Along with a wider array of tech options, more information about ransomware is available than in the past. This is particularly helpful with ransomware attacks, because the attacks in part rely on the victims unwittingly participating. Whether you’re looking for tips on protecting against attacks or recovering after the fact, a wealth of information is available.

The widespread nature of ransomware is alarming, but it also means first-hand accounts abound of what happened and what was done to recover after an attack. You may not know when ransomware is going to strike, but recovery is no longer a mystery.


Top Office 365 MFA considerations for administrators

With the rise in data breach incidents reported by companies of all sizes, it doesn’t take much effort to find a cache of leaked passwords that can be used to gain unauthorized access to email or another online service.

Administrators can require users to create complex passwords, change them frequently and set a different password for each application or system. It’s a helpful way to keep hackers from guessing a login, but the practice can backfire. Many users struggle to memorize password variations, which tends to lead to one complex password used across multiple systems. Industrious hackers who find a password dump can assume some end users will reuse the same password — or a variation of it — across multiple online services, making it easier to pry their way into other systems.
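
Defenders can turn those same password dumps to their advantage by screening new passwords against known-breach corpora. As one illustration (the article does not prescribe any particular service), here is a minimal Python sketch using the public Pwned Passwords range API, whose k-anonymity design means only the first five characters of the password’s SHA-1 hash ever leave the machine:

```python
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    """Return how often a password appears in known breach corpora,
    via the Pwned Passwords k-anonymity range API."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        # The API returns "SUFFIX:COUNT" lines for every hash sharing the prefix.
        for line in response.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    print(times_pwned("Password123"))  # nonzero: this password is burned
```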

IT departments in the enterprise realize that unless they implement specific password policies and enforce them, their systems may be at risk of a hack attempt. To mitigate these risks, many administrators will try multifactor authentication (MFA) products to address some of the identity concerns. MFA is the technology that adds another layer of authentication after users enter their password to confirm their identity, such as a biometric verification or a code sent via text to their phone. An organization that has moved its collaboration workloads to Microsoft’s cloud has a few Office 365 MFA options.
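
For intuition on the “code” factor: the service and the user’s authenticator app share a secret, and each independently derives a short-lived code from it using the TOTP algorithm (RFC 6238). The sketch below shows only the derivation; a production system should rely on a vetted library and proper server-side verification:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval       # current 30-second time step
    message = struct.pack(">Q", counter)         # 8-byte big-endian counter
    mac = hmac.new(key, message, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server stores the same secret and accepts a submitted code if it
# matches the current (and usually the adjacent) time step.
```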

When considering an MFA product, IT administrators must consider several key areas, especially when some of the services they may subscribe to, such as Microsoft Azure and Office 365, include MFA functionality from Microsoft. Depending on the level of functionality needed and services covered by MFA, IT administrators might consider selecting a third-party vendor, even when that choice will require more configuration work with Active Directory and cloud services. IT workers unfamiliar with MFA technology can look over the following areas to help with the selection process.


Choosing the right authentication options for end users

IT administrators must investigate what will work best for their end users because there are several MFA options to choose from. Some products use phone calls for confirmation, codes via text message, key fobs, authenticator apps and even facial recognition. Depending on the consensus in the organization, IT decision-makers have to work through the evaluation process to make sure the vendor supports the option they want.

Identifying which MFA product supports cloud workloads

More organizations have adopted cloud services such as Office 365, Azure, AWS and other public clouds. The MFA product must adapt to the needs of the organization as it adds more cloud services. While Microsoft offers its own MFA technology that works with Office 365, other vendors such as Duo Security — owned by Cisco — and Okta support Office 365 MFA for companies that want to use a third-party product.

Potential problems that can affect Office 365 MFA users

Using Office 365 MFA helps improve security, but there is potential for trouble that blocks access for end users. This can happen when a phone used for SMS confirmation breaks or is out of the user’s possession. Users might not gain access to the system or the services they need until they recover their device or change their MFA configuration.

Another possible problem can occur on the provider’s end: if the MFA product goes down, it blocks access for everyone who has enabled MFA. IT must discuss these possibilities and plan before implementing Office 365 MFA so the appropriate steps can be taken if these issues arise.

Evaluate the overall costs and features related to MFA

For the most part, MFA products are subscription-based, charging a monthly fee per user. Some vendors, such as Microsoft, bundle MFA with self-service identity, access management, access reporting and self-service group management. Third-party vendors might offer different MFA features; as one example, Duo Security includes self-enrollment and management, user risk assessment with phishing simulation, and device access monitoring and identification with its MFA product.

Single sign-on, identity management and identity monitoring are all valuable features that, if included with an MFA offering, should be worth considering when it’s time to narrow the vendor list.


Boost your ecommerce revenue with Dynamics 365 Fraud Protection – Dynamics 365 Blog

With the booming growth of online technologies and marketplaces comes the burgeoning rise of a variety of cybersecurity challenges for businesses that conduct any aspect of their operations through online software and the Internet. Fraud is one of the most pervasive trends of the modern online marketplace, and continues to be a consistent, invasive issue for all businesses.

As the rate of payment fraud continues to rise, especially in retail ecommerce where the liability lies with the merchant, so does the amount companies spend each year to combat and secure themselves against it. Fraud and wrongful rejections already significantly impact merchants’ bottom line, in a booming economy as well as when the economy is soft.

The impact of outdated fraud detection tools and false alarms

Customers, merchants, and banking institutions have been impacted for years by suboptimal experiences, increased operational expenses, wrongful rejections, and reduced revenue. To combat these negative business impacts, companies have been implementing layered solutions. For example, merchant risk managers are bogged down with manual reviews and analysis of their own local 30/60/90-day historical data. These narrow, outdated views of data provide a partial hindsight view of fraud trends, leaving risk managers with no real-time information to work with when creating new rules to hopefully minimize fraud loss.

One of the most common ways that fraud impacts everyday consumers and business is through wrongful rejections. For example, when a merchant maintains an outdated and/or strict set of transaction rules and algorithms, a customer who initiates a retail ecommerce transaction through a credit card might experience a wrongful rejection, known to consumers as a declined transaction, because of these outdated rules. Similarly, wrongful declined transactions can also happen when the card-issuing bank refuses to authorize the purchase due to suspicion of fraud. The implications of these suboptimal experiences for all parties involved (customers, merchants, and banks) translate directly into lost credibility, security, and business revenue.
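
To make that failure mode concrete, the sketch below contrasts a hard-coded rule set with a tunable, score-based policy. Everything here is hypothetical; the field names, thresholds and scores are invented for illustration and are not drawn from Dynamics 365 Fraud Protection:

```python
from dataclasses import dataclass

@dataclass
class Txn:
    amount: float
    country: str
    risk_score: float  # 0.0 (safe) .. 1.0 (fraud), e.g. from an ML model

# A static rule set of the kind described above: hard cut-offs that age
# badly and decline legitimate customers (a "wrongful rejection").
def static_rules_decline(t: Txn) -> bool:
    return t.amount > 500 or t.country not in {"US", "CA"}

# A score-based policy: the threshold can be retuned as fraud patterns
# shift, instead of hard-coding which transactions look suspicious.
def score_decline(t: Txn, threshold: float = 0.8) -> bool:
    return t.risk_score >= threshold

holiday_purchase = Txn(amount=750.0, country="FR", risk_score=0.12)
print(static_rules_decline(holiday_purchase))  # True  -> wrongful rejection
print(score_decline(holiday_purchase))         # False -> approved
```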

Introducing Microsoft Dynamics 365 Fraud Protection

As one of the biggest technology organizations in the world, Microsoft saw an opportunity to provide software as a service that effectively and visibly helps reduce the rate and pervasiveness of fraud while simultaneously helping to reduce wrongful declined transactions and improving customer experience. Microsoft Dynamics 365 Fraud Protection is a cloud-based solution merchants can use in real-time to help lower their costs related to combatting fraud, help increase their revenue by improving acceptance of legitimate transactions, reduce friction in customer experience, and integrate easily into their existing order management system and payment stack. This solution offers a global level of fraud insights using data sets from participating merchants that are processed with real-time machine learning to detect and mitigate evolving fraud schemes in a timely manner.

Microsoft Dynamics 365 Fraud Protection houses five powerful capabilities designed to capitalize on the power of machine learning to provide merchants with an innovative fraud protection solution:

  • Adaptive AI technology continuously learns and adapts from patterns and trends and will equip fraud managers with the tools and data they need to make informed decisions on how to optimize their fraud controls.
  • A fraud protection network maintains up-to-date connected data that provides a global view of fraud activity and maintains the security of merchants’ confidential information and shoppers’ privacy.
  • Transaction acceptance booster shares transactional trust knowledge with issuing banks to help boost authorization rates.
  • Customer escalation support provides detailed risk insights about each transaction to help improve merchants’ customer support experience.
  • Account creation protection monitors account creation, helps minimize abuse and automated attacks on customer accounts, and helps to avoid incurring losses due to fraudulent accounts.

See the image below to learn more about the relationship between merchants and banks when they both use Dynamics 365 Fraud Protection:

Banks worldwide can choose to participate in the Dynamics 365 Fraud Protection transaction acceptance booster feature to increase acceptance rates of legitimate authorization requests from online merchants using Dynamics 365 Fraud Protection. Merchants using the product can opt to use this feature to increase acceptance rates for authorization requests made to banks without having to make any changes to their existing authorization process.

Learn more

This week at Sibos 2019 in London, Microsoft will be showcasing its secure and compliant cloud solutions for the banking industry. Read a round-up of announcements unveiled at Sibos and view an agenda of Microsoft events and sessions at the show. Stop by our booth (Z131) for a showcase of applications relevant to banking, including Microsoft Dynamics 365 Fraud Protection, which will be generally available on October 1st, 2019. Contact your Microsoft representative to get started.


VMware’s Bitnami acquisition grows its development portfolio

The rise of containers and the cloud has changed the face of the IT market, and VMware must evolve with it. The vendor has moved out of its traditional data center niche and — with its purchase of software packager Bitnami — has made a push into the development community, a change that presents new challenges and potential. 

Historically, VMware delivered a suite of system infrastructure management tools. With the advent of cloud and digital disruption, IT departments’ focus expanded from monitoring systems to developing applications. VMware has extended its management suite to accommodate this shift, and its acquisition of Bitnami adds new tools that ease application development.

Building applications presents difficulties for many organizations. Developers spend much of their time on application plumbing, writing software that performs mundane tasks — such as storage allocation — and linking one API to another.

Bitnami sought to simplify that work. The company created prepackaged components called installers that automate the development process. Rather than write the code themselves, developers can now download Bitnami system images and plug them into their programs. As VMware delves further into hybrid cloud market territory, Bitnami brings simplified app development to the table.

Torsten Volk, managing research director at Enterprise Management Associates

“Bitnami’s solutions were ahead of their time,” said Torsten Volk, managing research director at Enterprise Management Associates (EMA), a consulting firm based in Portsmouth, New Hampshire. “They enable developers to bulletproof application development infrastructure in a self-service manner.”

The value Bitnami adds to VMware

Released under the Apache License, Bitnami’s modules contain commonly coupled software applications instead of just bare-bones images. For example, a Bitnami WordPress stack might contain WordPress, a database management system (e.g., MySQL) and a web server (e.g., Apache).

Bitnami takes care of several mundane programming chores. It keeps all components up to date — so if it finds a security problem, it patches that problem — and updates those components’ associated libraries. Bitnami makes its modules available through its Application Catalogue, which functions like an app store.

The company designed its products to run on a wide variety of systems. Bitnami supports Apple OS X, Microsoft Windows and Linux OSes. Its VM features work with VMware ESX and ESXi, VirtualBox and QEMU. Bitnami stacks also are compatible with software infrastructures such as WAMP, MAMP, LAMP, Node.js, Tomcat and Ruby. It supports cloud tools from AWS, Azure, Google Cloud Platform and Oracle Cloud. The installers, too, cover a wide variety of applications, including Abante Cart, Magento, MediaWiki, PrestaShop, Redmine and WordPress.

Bitnami seeks to help companies build applications once and run them on many different configurations.

“For enterprise IT, we intend to solve for challenges related to taking a core set of application packages and making them available consistently across teams and clouds,” said Milin Desai, general manager of cloud services at VMware.

Development teams share project work among individuals, work with code from private or public repositories and deploy applications on private, hybrid and public clouds. As such, Bitnami’s flexibility made it appealing to developers — and VMware.

How Bitnami and VMware fit together


VMware wants to extend its reach from legacy, back-end data centers and appeal to more front-end and cloud developers.

“In the last few years, VMware has gone all in on trying to build out a portfolio of management solutions for application developers,” Volk said. VMware embraced Kubernetes and has acquired container startups such as Heptio to prove it.

Bitnami adds another piece to this puzzle, one that provides a curated marketplace for VMware customers who hope to emphasize rapid application development.

“Bitnami’s application packaging capabilities will help our customers to simplify the consumption of applications in hybrid cloud environments, from on-premises to VMware Cloud on AWS to VMware Cloud Provider Program partner clouds, once the deal closes,” Desai said.

Facing new challenges in a new market

However, the purchase moves VMware out of its traditional virtualized enterprise data center sweet spot. VMware has little name recognition among developers, so the company must build its brand.

“Buying companies like Bitnami and Heptio is an attempt by VMware to gain instant credibility among developers,” Volk said. “They did not pay a premium for the products, which were not generating a lot of revenue. Instead, they wanted the executives, who are all rock stars in the development community.”  

Supporting a new breed of customer poses its challenges. Although VMware’s Bitnami acquisition adds to its application development suite — an area of increasing importance — it also places new hurdles in front of the vendor. Merging the culture of a startup with that of an established supplier isn’t always a smooth process. In addition, VMware has bought several startups recently, so consolidating this variety of entities in a cohesive manner is a major undertaking.


Ponemon: Mega breaches, data breach costs on the rise

The Ponemon Institute’s latest study on data breach costs highlights the rise of what it calls “mega breaches,” which are the worst types of security incidents in terms of costs and data exposed.

The “2018 Cost of a Data Breach Study: Global Overview,” which was sponsored by IBM Security, details the costs enterprises incur after falling victim to a data breach. It found that the average total cost of a data breach rose from $3.62 million to $3.86 million — a 6.4% increase — with $148 as the average cost per lost or stolen record. This year’s report also features data on the biggest breaches, which Ponemon and IBM have termed “mega breaches.”

“Mega breaches are where there are more than one million records that have been breached,” Limor Kessem, executive security advisor at IBM, told SearchSecurity. “And then we looked at up to 50 million [records exposed], although it could be up to infinity these days. Just last year there were 2.9 billion records exposed, and in 2016 there were over 4 billion records exposed, so a breach can be millions and hundreds of millions as well.”

Given that this is the first year that Ponemon has included mega breaches in its annual report and that there were only 11 mega breaches that occurred, there was no data from past years to compare these findings to. However, the report found that a mega breach with the minimum of 1 million records exposed led to an average total cost of $40 million, while a mega breach with 50 million records exposed had an average cost of $350 million.
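
A quick check of the arithmetic behind those figures shows why mega breaches get their own category: the total cost balloons, but the implied cost per record falls well below the $148 average. This is purely illustrative math on the report’s published numbers:

```python
# Per-record costs implied by the Ponemon/IBM figures cited above.
typical_cost_per_record = 148            # USD, average across all breaches

mega_breaches = {
    1_000_000: 40_000_000,               # records exposed -> avg total cost (USD)
    50_000_000: 350_000_000,
}

for records, total_cost in mega_breaches.items():
    print(f"{records:>10,} records -> ${total_cost / records:.0f} per record "
          f"(vs. ${typical_cost_per_record} for a typical breach)")
#  1,000,000 records -> $40 per record (vs. $148 for a typical breach)
# 50,000,000 records -> $7 per record (vs. $148 for a typical breach)
```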

After collecting data from more than 2,500 separate interviews conducted over a 10-month period with 477 enterprises, the study concluded that mega breaches take 365 days to identify — almost 100 days longer than typical breaches, which take 266 days to detect.

The Ponemon study also discovered that “data breaches are the most costly in the United States and the Middle East and least costly in Brazil and India,” given that the average total in the United States was $7.91 million. “The U.S. topped the chart at almost twice the international average,” Kessem said. “Of course there are currency differences, but the big thing in the U.S. is loss of business.”

Kessem further noted that when consumers were interviewed, 75% of them said they would not want to do business with a company that they didn’t trust to safeguard their data.

“People in the U.S. are very aware of breaches,” she said. “They topped the charts in awareness of how [data breaches] happen and how many happen and so on. In other words, we know breaches are happening and we wouldn’t like to do business with those who can’t protect our data and I think this was a major cost center for the U.S. in terms of data breaches.”

In addition to the cost per record, companies experience direct and indirect costs after a breach. For example, Canada has the highest direct costs, according to the report, but the U.S. had the highest indirect cost at $152 per capita, which includes “employee’s time, effort and other organizational resources spent notifying victims and investigating the incident.” The study also notes that breaches in the healthcare industry are the most expensive and have been consistently so for several years, according to Kessem, considering the amount of personal data healthcare companies possess.

“Typically [healthcare companies] have a lot of personally identifiable information,” she said. “They’re also going to have payment information and contact information — the more information is attached to an identity, the more it is going to cost.”

Post-breach consequences are further addressed in the report, which states, “Organizations that lost less than one percent of their customers due to a data breach resulted in an average total cost of $2.8 million.” However, the Ponemon study also noted that an incident response team has the ability to reduce the cost by as much as $14 per compromised record — a small change that would greatly add up at the end of a breach.
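
For a sense of how that $14-per-record saving adds up, here it is applied to a hypothetical 100,000-record breach; the breach size is chosen purely for illustration and does not come from the report:

```python
savings_per_record = 14          # USD, from the Ponemon study
records_compromised = 100_000    # hypothetical breach size
print(f"${savings_per_record * records_compromised:,}")  # $1,400,000
```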

Microsoft to acquire Bonsai in move to build ‘brains’ for autonomous systems – The Official Microsoft Blog

Bonsai’s team members. Photo courtesy of Bonsai.

With AI’s meteoric rise, autonomous systems have been projected to grow to more than 800 million in operation by 2025. However, while envisioned in science fiction for a long time, truly intelligent autonomous systems are still elusive and remain a holy grail. The reality today is that training autonomous systems that function amidst the many unforeseen situations in the real world is very hard and requires deep expertise in AI — essentially making it unscalable.

To achieve this inflection point in AI’s growth, traditional machine learning methodologies aren’t enough. Bringing intelligence to autonomous systems at scale will require a unique combination of the new practice of machine teaching, advances in deep reinforcement learning and leveraging simulation for training. Microsoft has been on a path to make this a reality through continued AI research breakthroughs; the development of the powerful Azure AI platform of tools, services and infrastructure; advances in deep learning including our acquisition of Maluuba, and the impressive efficiencies we’ve achieved in simulation-based training with Microsoft Research’s AirSim tool. With software developers at the center of digital transformation, our pending acquisition of GitHub further underscores just how imperative it is that we empower developers to break through and lead this next wave of innovation.

Today we are excited to take another major step forward in our vision to make it easier for developers and subject matter experts to build the “brains” — machine learning models — for autonomous systems of all kinds with the signing of an agreement to acquire Bonsai. Based in Berkeley, California, and an M12 portfolio company, Bonsai has developed a novel approach using machine teaching that abstracts the low-level mechanics of machine learning, so that subject matter experts, regardless of AI aptitude, can specify and train autonomous systems to accomplish tasks. The actual training takes place inside a simulated environment.

The company is building a general-purpose, deep reinforcement learning platform especially suited for enterprises leveraging industrial control systems such as robotics, energy, HVAC, manufacturing and autonomous systems in general. This includes unique machine-teaching innovations, automated model generation and management, a host of APIs and SDKs for simulator integration, as well as pre-built support for leading simulations all packaged in one end-to-end platform.

Bonsai’s platform combined with rich simulation tools and reinforcement learning work in Microsoft Research becomes the simplest and richest AI toolchain for building any kind of autonomous system for control and calibration tasks. This toolchain will compose with Azure Machine Learning running on the Azure Cloud with GPUs and Brainwave, and models built with it will be deployed and managed in Azure IoT, giving Microsoft an end-to-end solution for building, operating and enhancing “brains” for autonomous systems.

What I find exciting is that Bonsai has achieved some remarkable breakthroughs with their approach that will have a profound impact on AI development. Last fall, they established a new reinforcement learning benchmark for programming industrial control systems. Using a robotics task to demonstrate the achievement, the platform successfully trained a simulated robotic arm to grasp and stack blocks on top of one another by breaking down the task into simpler sub-concepts. Their novel technique performed 45 times faster than a comparable approach from Google’s DeepMind. Then, earlier this year, they extended deep reinforcement learning’s capabilities beyond traditional game play, where it’s often demonstrated, to real-world applications. Using Bonsai’s AI Platform and machine teaching, subject matter experts from Siemens, with no AI expertise, trained an AI model to autocalibrate a Computer Numerical Control machine 30 times faster than the traditional approach. This represented a huge milestone in industrial AI, and the implications when considered across the broader sector are just staggering.

To realize this vision of making AI more accessible and valuable for all, we have to remove the barriers to development, empowering every developer, regardless of machine learning expertise, to be an AI developer. Bonsai has made tremendous progress here and Microsoft remains committed to furthering this work. We already deliver the most comprehensive collection of AI tools and services that make it easier for any developer to code and integrate pre-built and custom AI capabilities into applications and extend to any scenario. There are over a million developers using our pre-built Microsoft Cognitive Services, a collection of intelligent APIs that enable developers to easily leverage high-quality vision, speech, language, search and knowledge technologies in their apps with a few lines of code. And last fall, we led a combined industry push to foster a more open AI ecosystem, bringing AI advances to all developers, on any platform, using any language through the introduction of the Open Neural Network Exchange (ONNX) format and Gluon open source interface for deep learning.

We’re really confident this unique marriage of research, novel approach and technology will have a tremendous effect toward removing barriers and accelerating the current state of AI development. We look forward to having Bonsai and their team join us to help realize this collective vision.


Dragos’ Robert Lee discusses latest ICS threats, hacking back

Cyberthreats to critical infrastructure are on the rise, but that doesn’t mean the U.S. is about to plunge into catastrophic blackouts, Robert Lee argued.

Lee, founder and CEO of Dragos Inc., which specializes in industrial control system (ICS) security and is based in Hanover, Md., talked at RSA Conference 2018 last month about recent attacks and intrusions on power grids and other critical infrastructure. While ICS threats and the capabilities of advanced persistent threat groups are growing, Lee explained the technical challenges of hacking industrial controls and why those systems are much different than typical IT systems.

In part one of the interview with Lee at RSA Conference, he explained why he’s generally optimistic about the state of ICS security and how enterprises are making improvements. In part two, Lee assesses the latest ICS threats and how such threats can often be exaggerated or misinterpreted. He also takes issue with public cyber attribution, as well as the concept of hacking back.

Editor’s note: This interview has been edited for clarity and length.

You’ve talked in the past about how the decentralized nature of the energy grid isn’t something that lends itself to one attack spreading across a large region and causing a wide-scale blackout, for example. If it’s smaller companies that are mostly vulnerable to ICS threats, does that also lessen the risk?

Robert Lee: Yes, but I wouldn’t imply that attackers can only do the smaller ones. I’m saying I’m optimistic about the movement of all the other ones. But, to your point, what concerns me is not the Eastern Interconnection coming down. What concerns me is a 30-minute power outage in Washington, D.C. If a small municipality like D.C. has a 30-minute power outage, watch what happens to the political and regulatory landscape. And the knee-jerk reaction would cycle through the innovation in the industry for a decade. It’s our own paranoia and fear that amplify anything that actually does occur.

But, again, there are global manufacturing companies that, before we walked in the door, were doing nothing [about ICS security]. It’s not just that larger companies are doing well. It’s just I’m optimistic about the traction I’m seeing in an industry that used to be very much stale.

But I’m also still significantly concerned and not super optimistic about what’s happening with the smaller players. And that’s where we do need to make investments and incentivize some of the movement that we’re seeing pay dividends in the larger players. There’s no reason we couldn’t repeat that with the smaller players.

What were these companies — big or small — doing, if anything, on ICS security before you walked in the door?

Lee: By and large, most of them were doing nothing before. For the ones that were doing something, it was adapting IT security tools to try to fit into ICS, which largely led to a compliance-check-box kind of approach.

It was and is [really] just the basics, like network monitoring. But monitoring the industrial environment to look for threats is nowhere near wide-scale adoption. We see companies largely either doing nothing or adapting systems and technologies that weren’t meant for industrial threats.

You’ve been a critic of how ICS threats have been exaggerated or overplayed. Not to pick on Symantec, but when you saw the Dragonfly 2.0 report detailing a new campaign against the U.S. power grid, and then the news headlines that followed the report, what was your reaction?

Lee: I very much like Symantec as a firm, but I significantly disagree with their assessment. First of all, they said Dragonfly 2.0 was the same adversary group as Dragonfly 1. That’s why they named it that way. Our assessment is completely different. There are three distinct teams, not one, and that means a lot of difference in terms of how you defend against it.

And the second thing that I disagree with professionally was they had their technical people, who were then amplified by a lobbyist, sitting in front of Congress in an open hearing saying there were no technical limitations to causing significant damage to the American power grid. And that is not accurate to what the attack was. I looked at the data. We were tracking the threat before it was public, and that [statement] in no way aligned with the actual reality of the situation.

From what they were doing, could they have caused an hour power outage in D.C. or whatever specific companies they had been targeting? Yes. That sucks, and we have to take that very, very seriously. But could they have caused significant damage to the American power grid? No. It’s not as easy as just flipping a switch. That’s not accurate at all. That’s not how that works.

But IT security companies generally view the hard problem as access, because in IT, that is the hard problem. You get access to a Windows system with all the sensitive data, and it’s gone. Access is the problem in IT security. But access is just the beginning of the discussion with ICS. Once you actually have access to the equipment operating electric power, then you’ve got to figure out how to actually do it.

There have been times before where we’ve set like a pen tester down in front of electric equipment and said, ‘OK, you have full access, physical access, make the lights blink,’ and they weren’t able to do so. I’m not saying it’s not extremely doable, and there are adversaries that know exactly how to do it. But let’s not conflate access as equating to damage for ICS.

When you look at the threat landscape and you see things like CrashOverride and BlackEnergy malware implicated in recent attacks on Ukraine’s power grid, do those things concern you?

Lee: Absolutely. I think one of the unfortunate things is that a lot of American power companies put an amazing amount of effort on [ICS security] post-Ukraine, but for some of them, the idea is, ‘Well, that’s Ukraine. That’s on the other side of the internet.’ And that’s not how this works.

To your question, I did a ‘Little Bobby’ comic specific to this issue. This is how much I think it’s a joke at this point. The actors behind Trisis and the actors behind CrashOverride and the actors behind the 2015 [Ukraine] attack are different groups, and they have already targeted and gained access to infrastructure of the United States. They have not shown the intent to move past that and to commit disruptive attacks. But we can’t say, ‘Oh, yeah, we saw literally the same threats in another country taking down power,’ and then say, ‘Yeah, but that’s over there.’ You can’t have that approach.

If it was a different threat actor and everything else, then maybe we could just be concerned and think about hypothetical scenarios. But when it’s literally the same threat group, we should probably take that fairly seriously.

There’s been a lot of talk about hacking back at RSA Conference. You’ve spoken out about this before.

Lee: It’s stupid. It is absolutely asinine.

Were you surprised that there are people here and outside of the event advocating for hacking back?

Lee: I can appreciate why. The loud voices in the infosec community have made very inaccurate statements over the years. You have probably heard at some point, ‘Well, attackers only have to get one thing right. Defenders have to protect against everything.’ That’s not true at all. That’s not even close to true.

Technically, attackers have to do everything without being detected, and defenders only have to reliably detect one thing in the whole process. So, there’s these clichés that have largely been done by the security industry to sell things that do not line up with reality. And so if you are an executive or a politician and you’ve been told time and time again, ‘Defense fails, defense fails, defense fails,’ well, you’re going to say, ‘Let’s try something different,’ which is hack back on offense. They’re not doing it because they’re malicious. They’re doing it because they’ve been told for years that defense fails.

In reality, we’ve seen the exact opposite. Defense is winning and succeeding. Adversary dwell time, according to the latest Mandiant [M-Trends] report, is way down from what it was 10 years ago. When you look at the ability to get an exploit on a system, you don’t really get an exploit on the system anymore. You need a chain of exploits to get there. You have an NSA [National Security Agency] director [Mike Rogers] all but crying in front of Congress to get a backdoor in security products because of how hard it’s become after they lost [surveillance] capabilities. Defense is doing extremely well.

It’s not that I think those people who are recommending hacking back are stupid — I think they’ve been misled. But the idea of hack back is extremely asinine. The idea that that is going to contribute to security at all is extremely silly. Forget the legal ramifications. Forget the fact that you may look like a foreign hacker to some other group while you’re hacking back. Forget the potential collateral damage. It’s just a poor security investment. When people are having trouble tuning firewalls or doing network monitoring, maybe investing in offensive capabilities is not the best return on investment for a company.

With escalating ICS threats and more groups with these capabilities, are we entering an arms race of sorts with critical infrastructure attacks?


Lee: It’s going to keep going back and forth. You’re going to have the industrial espionage take place and the trade secrets loss take place. All that stuff is going to happen, but the military aspect of it is concerning. And one of the other things that concerns me is intelligence teams preparing for potential military action that could be perceived as actual military action, or an intelligence team potentially making a mistake in sensitive infrastructure and causing what appears to be an attack. This is a very concerning area that we must address.

And also, if you look at the Trisis malware in Saudi Arabia, there’s no polite or easy way to say it: Whoever designed that capability was intending to kill people. That should upset everybody around the world.

You’ve talked in the past about cyber attribution and how it can create problems. Is attribution harder for ICS?

Lee: If anything, it’s probably easier because of the level of capability required to do certain things. It rules out some players. But attribution is not useful. Attribution is in no way useful for security. It’s a political topic, but it distracts [from] the discussion on how to actually defend those systems. And it also causes issues to the private sector.

When the Department of Homeland Security announced that it was Russians breaking into routers, what they effectively did was have every single executive around the country spun up about Russian nation-state hackers instead of allowing the security people to actually address the security of what was mentioned in the advisory itself.

I would say not only is attribution not useful, it can also be very harmful. Now, if the government wants to take action off it, that’s different. For example, if the government wants to impose sanctions against a country because it can make public attribution of cyberattacks, that’s different. But if it’s just to throw a name out, it’s not helpful — it’s harmful.