Tag Archives: announced

Maze ransomware gang pledges to stop attacking hospitals

The notorious Maze ransomware gang announced Wednesday that it will not attack any healthcare organizations during the COVID-19 pandemic.

The pandemic has put a strain on hospitals and public health agencies in recent weeks as governments across the globe struggle to contain the spread of COVID-19, the disease caused by the new coronavirus. Some security vendors have expressed concern that coronavirus-related threats could soon include ransomware attacks, which would have a crippling effect on healthcare and government organizations working on treatment and containment of the virus.

But at least one cybercrime outfit is pledging to refrain from such attacks, at least on healthcare organizations. The Maze ransomware gang, which last year began “shaming” victims by exfiltrating and publishing organizations’ sensitive data, promised to “stop all activity versus all kinds of medical organizations until the stabilization of the situation with virus,” according to an announcement on its website.

BleepingComputer, which first reported the announcement, also contacted other ransomware operators about stopping attacks on healthcare and medical organizations during the pandemic. The DoppelPaymer gang also pledged to stop such attacks, though other ransomware groups such as Ryuk and Sodinokibi/REvil did not respond to BleepingComputer’s queries.

The Maze gang’s pledge, however, says nothing about attacks on state or local governments or public health agencies. The Maze gang also said it will “help commercial organizations as much as possible” during the pandemic by offering “exclusive discounts” on ransoms to both current and future ransomware victims; the cybercriminals said they will provide decryptors and delete any data published on their website.

A screenshot of the Maze ransomware gang’s announcement that it will not attack healthcare organizations during the coronavirus pandemic.

Despite the promises of the DoppelPaymer and Maze ransomware gangs, it’s unclear how much control they have over which organizations are attacked. Many outfits use a ransomware-as-a-service model where they develop the malicious code and then sell it to other cybercriminals, who are often called affiliates.

These affiliates then conduct the actual intrusions, data exfiltration and ransomware deployment, and pay the ransomware authors a share of the proceeds. Many ransomware incidents are initiated through phishing emails and brute-force attacks on remote desktop protocol instances; threat researchers have said it’s likely that ransomware actors aren’t specifically targeting organizations by name or industry and are merely capitalizing on the most vulnerable networks.


How Genesys is personalizing the customer experience with Engage, Azure and AI

Microsoft and Genesys, a global provider of contact center software, recently announced a partnership to enable enterprises to run Genesys’ omnichannel customer experience solution, Genesys Engage, on Microsoft Azure. According to the two companies, this combination will provide a secure cloud environment to help companies more easily leverage AI to address customer needs on any channel.

Headquartered in Daly City, California, Genesys has more than 5,000 employees in nearly 60 offices worldwide. Every year, the company supports more than 70 billion customer experiences for organizations like Coca-Cola Business Services North America, eBay, Heineken, Lenovo, PayPal, BOSCH, Quicken and more.

Transform spoke with Barry O’Sullivan, executive vice president and general manager of Multicloud Solutions for Genesys, to explore how technology is reinventing the customer service experience.

TRANSFORM: How are technologies like artificial intelligence (AI), machine learning and cloud transforming the customer service sector?

O’SULLIVAN: It’s broader than customer service. It’s the entire customer experience, which encompasses any point at which businesses engage with consumers, whether it’s in a marketing, sales or service context. What cloud, AI and machine learning enable is the ability to make every experience unique to each individual. Every consumer wants to feel like they’re the only customer that matters during each interaction with a brand. These technologies allow organizations to understand what customers are doing, predict what they will need next and then deliver it in real time.

Traditionally, companies haven’t been able to do that well, because it’s hard to get a fix on a consumer as they move between channels. Maybe they come to a physical store one day, then call the next day or engage via web chat. These technologies allow brands to stitch together every customer interaction, and then use the resulting data to personalize the experience.

TRANSFORM: Can you talk a little bit more about that customer journey and what customers will experience going forward?

O’SULLIVAN: Let’s use contacting the cable company to get internet service as an example. You check out their website, but maybe you get stuck and use web chat to interact with a customer service representative. Today’s technologies allow businesses to connect the dots to better understand the customer.

Before these technologies were available, interactions were disconnected, and important customer details and context didn’t move from one department or agent to the next. We all know what that’s like – just think about a customer service experience when you had to repeat your name and birthdate every time you were passed to a new agent.

Today’s technology can tie together a customer’s details, like their favored communication channel, past purchases, prior service requests and more, so the business really knows them. Then, using AI, it can match that customer with the contact center agent who has the best chance of successfully resolving the issue and achieving a specific business outcome, such as making a related sale.

TRANSFORM: All of those kinds of experiences seem to be present in some form today. Is there a change coming that’s going to take the consumer experience to the next level?

O’SULLIVAN: Personalized service is not a new concept, but very few businesses get it right. Today, it’s about so much more than targeting personas or market segments.

It’s really about enabling organizations to link together their customers’ and employees’ experiences to deliver truly memorable, one-of-a-kind interactions. When it’s done right, organizations already know who the customer is, what he or she wants and the best way to deliver it.

That means understanding customers so well that businesses know the best times to contact them, on which channel and even the best days for an appointment. It’s no longer one-size-fits-all service – it’s tailor-made customer care for each consumer.

TRANSFORM: Are your own customers ready to adopt the technologies to enable this kind of new experience?

O’SULLIVAN: When it comes to cloud, it’s not a question of if, but when and how. And that’s one of the reasons the announcement between Genesys and Microsoft is so exciting. We have a lot of customers, especially large enterprises, who love Genesys and love Azure and really want to see that combination come together. So, giving them that option and that choice is really going to accelerate the migration to cloud.

In terms of adopting AI and machine learning, many companies are in the early phases, but recognize the enormous potential of the technology. What makes AI truly compelling in the customer experience market is its ability to unlock data. Increasingly, businesses use digital channels, like web chat and text, to communicate with consumers, which combined with traditional voice interactions has resulted in copious amounts of data being produced daily. The key for organizations is figuring out how to harness and leverage it to more fully understand customers, their experiences and behaviors, as well as the needs of human agents. That’s where Genesys comes in.

TRANSFORM: How would you describe your experience working with Microsoft?

O’SULLIVAN: It’s a great partnership because we’ve got a common view of the customer and a very aligned vision on cloud. It’s all about delivering agility and innovation quickly and reliably to our joint customers. So, it really helps when we’re both all in on the cloud, all in on customer experience.

Our customers are really excited about this combination of Genesys and Azure. They can simplify their maintenance, reduce costs and streamline the buying process. We believe in the advantages of moving to cloud, and obviously Azure is a leader there.


Microsoft announces it will be carbon negative by 2030

REDMOND, Wash. — Jan. 16, 2020 — Microsoft Corp. on Thursday announced an ambitious goal and a new plan to reduce and ultimately remove its carbon footprint. By 2030 Microsoft will be carbon negative, and by 2050 Microsoft will remove from the environment all the carbon the company has emitted either directly or by electrical consumption since it was founded in 1975.

At an event at its Redmond campus, Microsoft Chief Executive Officer Satya Nadella, President Brad Smith, Chief Financial Officer Amy Hood, and Chief Environmental Officer Lucas Joppa announced the company’s new goals and a detailed plan to become carbon negative.

“While the world will need to reach net zero, those of us who can afford to move faster and go further should do so. That’s why today we are announcing an ambitious goal and a new plan to reduce and ultimately remove Microsoft’s carbon footprint,” said Microsoft President Brad Smith. “By 2030 Microsoft will be carbon negative, and by 2050 Microsoft will remove from the environment all the carbon the company has emitted either directly or by electrical consumption since it was founded in 1975.”

The Official Microsoft Blog has more information about the company’s bold goal and detailed plan to remove its carbon footprint: https://blogs.microsoft.com/?p=52558785.

The company announced an aggressive program to cut carbon emissions by more than half by 2030, both for our direct emissions and for our entire supply and value chain. This includes driving down our own direct emissions and emissions related to the energy we use to near zero by the middle of this decade. It also announced a new initiative to use Microsoft technology to help our suppliers and customers around the world reduce their own carbon footprints and a new $1 billion climate innovation fund to accelerate the global development of carbon reduction, capture and removal technologies. Beginning next year, the company will also make carbon reduction an explicit aspect of our procurement processes for our supply chain. A new annual Environmental Sustainability Report will detail Microsoft’s carbon impact and reduction journey. And lastly, the company will use its voice and advocacy to support public policy that will accelerate carbon reduction and removal opportunities.

More information can be found at the Microsoft microsite: https://news.microsoft.com/climate.

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications, (425) 638-7777, [email protected]

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at https://news.microsoft.com/microsoft-public-relations-contacts.


5/14: Hyper-V HyperClear Update

05-14-2019 12:54 PM

Four new speculative execution side channel vulnerabilities were announced today and affect a wide array of Intel processors. The list of affected processors includes Intel Xeon, Intel Core, and Intel Atom models. These vulnerabilities are referred to as CVE-2018-12126 Microarchitectural Store Buffer Data Sampling (MSBDS), CVE-2018-12130 Microarchitectural Fill Buffer Data Sampling (MFBDS), CVE-2018-12127 Microarchitectural Load Port Data Sampling (MLPDS), and CVE-2019-11091 Microarchitectural Data Sampling Uncacheable Memory (MDSUM). These vulnerabilities are like other Intel CPU vulnerabilities disclosed recently in that they can be leveraged for attacks across isolation boundaries. This includes intra-OS attacks as well as inter-VM attacks.
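
Collectively, these four issues are known as Microarchitectural Data Sampling (MDS). On Linux guests, the kernel reports its own view of MDS exposure and mitigation status through sysfs, which offers a quick way to verify that microcode and kernel updates have taken effect. Below is a minimal sketch in Python, assuming a kernel recent enough to expose the mds entry (older kernels simply lack these files):

```python
# Query the Linux kernel's reported status for the MDS vulnerabilities
# (MSBDS, MFBDS, MLPDS, MDSUM) and related speculative execution issues.
# Assumes a kernel that populates /sys/devices/system/cpu/vulnerabilities.
from pathlib import Path

VULNS = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in ("mds", "l1tf", "spectre_v2"):
    node = VULNS / entry
    if node.exists():
        # Typical values: "Mitigated: Clear CPU buffers; SMT vulnerable"
        # or "Not affected".
        print(f"{entry}: {node.read_text().strip()}")
    else:
        print(f"{entry}: not reported by this kernel")
```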

In a previous blog post, the Hyper-V hypervisor engineering team described our high-performing and comprehensive side channel vulnerability mitigation architecture, HyperClear. We originally designed HyperClear as a defense against the L1 Terminal Fault (a.k.a. Foreshadow) Intel side channel vulnerability. Fortunately for us and for our customers, HyperClear has proven to be an excellent foundation for mitigating this new set of side channel vulnerabilities. In fact, HyperClear required a relatively small set of updates to provide strong inter-VM and intra-OS protections for our customers. These updates have been deployed to Azure and are available in Windows Server 2016 and later supported releases of Windows and Windows Server. Just as before, the HyperClear mitigation allows for safe use of hyper-threading in a multi-tenant virtual machine hosting environment.

We have already shared the technical details of HyperClear and the set of required changes to mitigate this new set of hardware vulnerabilities with industry partners. However, we know that many of our customers are also interested to know how we’ve extended the Hyper-V HyperClear architecture to provide protections against these vulnerabilities.

As we described in the original HyperClear blog post, HyperClear relies on 3 main components to ensure strong inter-VM isolation:

  1. Core Scheduler
  2. Virtual-Processor Address Space Isolation
  3. Sensitive Data Scrubbing

As we extended HyperClear to mitigate these new vulnerabilities, the fundamental components of the architecture remained constant. However, there were two primary hypervisor changes required:

  1. Support for a new Intel processor feature called MbClear. Intel has been working to add support for MbClear by updating the CPU microcode for affected Intel hardware. The Hyper-V hypervisor uses this new feature to clear microarchitectural buffers when switching between virtual processors that belong to different virtual machines. This ensures that when a new virtual processor begins to execute, there is no data remaining in any microarchitectural buffers that belongs to a previously running virtual processor. Additionally, this new processor feature may be exposed to guest operating systems to implement intra-OS mitigations.
  2. Always-enabled sensitive data scrubbing. This ensures that the hypervisor never leaves sensitive data in hypervisor-owned memory when it returns to guest kernel-mode or guest user-mode. This prevents the hypervisor from being used as a gadget by guest user-mode. Without always-enabled sensitive data scrubbing, the concern would be that guest user-mode can deliberately trigger hypervisor entry and that the CPU may speculatively fill a microarchitectural buffer with secrets remaining in memory from a previous hypervisor entry triggered by guest kernel-mode or a different guest user-mode application. Always-enabled sensitive data scrubbing fully mitigates this concern. As a bonus, this change improves performance on many Intel processors because it enables the Hyper-V hypervisor to more efficiently mitigate other previously disclosed Intel side channel speculation vulnerabilities.

Overall, the Hyper-V HyperClear architecture has proven to be a readily extensible design providing strong isolation boundaries against a variety of speculative execution side channel attacks with negligible impact on performance.


How to Use Azure Arc for Hybrid Cloud Management and Security

Azure Arc is a new hybrid cloud management option announced by Microsoft in November of 2019. This article serves as a single point of reference for all things Azure Arc.

According to Microsoft’s CEO Satya Nadella, “Azure Arc really marks the beginning of this new era of hybrid computing where there is a control plane built for multi-cloud, multi-edge” (Microsoft Ignite 2019 Keynote at 14:40). That is a strong statement from one of the industry leaders in cloud computing, especially since hybrid cloud computing has already been around for a decade. Essentially, Azure Arc allows organizations to use Azure’s management technologies (the “control plane”) to centrally administer public cloud resources along with on-premises servers, virtual machines, and containers. Since Microsoft Azure already manages distributed resources at scale, Microsoft is empowering its users to apply these same features to all of their hardware, including edge servers. All of Azure’s AI, automation, compliance and security best practices are now available to manage all of their distributed cloud resources and the underlying infrastructure, which is known as “connected machines.” Additionally, several of Azure’s AI and data services can now be deployed on-premises and centrally managed through Azure Arc, enhancing local and offline management and offering greater data sovereignty. This article provides an overview of the Azure Arc technology and its key capabilities (currently in Public Preview) and will be updated over time.


Contents

Getting Started with Azure Arc

Azure Services with Azure Arc

Azure Artificial Intelligence (AI) with Azure Arc

Azure Automation with Azure Arc

Azure Cost Management & Billing with Azure Arc

Azure Data Services with Azure Arc

Cloud Availability with Azure Arc

Azure Availability & Resiliency with Azure Arc

Azure Backup & Restore with Azure Arc

Azure Site Recovery & Geo-Replication with Azure Arc

Cloud Management with Azure Arc

Management Tools with Azure Arc

Managing Legacy Hardware with Azure Arc

Offline Management with Azure Arc

Always Up-To-Date Tools with Azure Arc

Cloud Security & Compliance with Azure Arc

Azure Key Vault with Azure Arc

Azure Monitor with Azure Arc

Azure Policy with Azure Arc

Azure Security Center with Azure Arc

Azure Advanced Threat Protection with Azure Arc

Azure Update Management with Azure Arc

Role-Based Access Control (RBAC) with Azure Arc

DevOps and Application Management with Azure Arc

Azure Kubernetes Service (AKS) & Kubernetes App Management with Azure Arc

Other DevOps Tools with Azure Arc

DevOps On-Premises with Azure Arc

Elastic Scalability & Rapid Deployment with Azure Arc

Hybrid Cloud Integration with Azure Arc

Azure Stack Hub with Azure Arc

Azure Stack Edge with Azure Arc

Azure Stack Hyperconverged Infrastructure (HCI) with Azure Arc

Managed Service Providers (MSPs) with Azure Arc

Azure Lighthouse Integration with Azure Arc

Third-Party Integration with Azure Arc

Amazon Web Services (AWS) Integration with Azure Arc

Google Cloud Platform (GCP) Integration with Azure Arc

IBM Kubernetes Service Integration with Azure Arc

Linux VM Integration with Azure Arc

VMware Cloud Solution Integration with Azure Arc

Getting Started with Azure Arc

The Azure Arc public preview was announced in November 2019 at the Microsoft Ignite conference to much fanfare. In its initial release, many fundamental Azure services are supported along with Azure Data Services. Over time, it is expected that a majority of Azure Services will be supported by Azure Arc.

To get started with Azure Arc, check out the following guides and documentation provided by Microsoft.

Additional information will be added once it is made available.
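
As a concrete starting point, once the Connected Machine agent registers a server with Azure Arc, that server becomes a queryable Azure resource like any other. The following is a minimal sketch, not an official sample, using the Azure SDK for Python (the azure-identity and azure-mgmt-hybridcompute packages); the subscription ID is a placeholder:

```python
# List the servers registered with Azure Arc in a subscription.
# pip install azure-identity azure-mgmt-hybridcompute
from azure.identity import DefaultAzureCredential
from azure.mgmt.hybridcompute import HybridComputeManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder

credential = DefaultAzureCredential()
client = HybridComputeManagementClient(credential, subscription_id)

# Each connected machine carries an Azure Resource ID, a reported
# operating system, and a connection status as seen by the service.
for machine in client.machines.list_by_subscription():
    print(machine.name, machine.location, machine.status)
```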

Azure Services with Azure Arc

One of the fundamental benefits of Azure Arc is the ability to bring Azure services to a customer’s own datacenter. In its initial release, Azure Arc includes services for AI, automation, availability, billing, data, DevOps, Kubernetes management, security, and compliance. Over time, additional Azure services will be available through Azure Arc.

Azure Artificial Intelligence (AI) with Azure Arc

Azure Arc leverages Microsoft Azure’s artificial intelligence (AI) services to power some of its advanced decision-making abilities, learned from managing millions of devices at scale. Since Azure AI is continually monitoring billions of endpoints, it is able to perform tasks that can only be achieved at scale, such as identifying an emerging malware attack. Azure AI improves security, compliance, scalability and more for all cloud resources managed by Azure Arc. The services which run Azure AI are hosted in Microsoft Azure, and in disconnected environments, much of the AI processing can run on local servers using Azure Stack Edge.

For more information about Azure AI visit https://azure.microsoft.com/en-us/overview/ai-platform.

Azure Automation with Azure Arc

Azure Automation is a service provided by Azure that automates repetitive tasks which can be time-consuming or error-prone. This saves the organization significant time and money while helping them maintain operational consistency. Custom automation scripts can get triggered by a schedule or an event to automate servicing, track changes, collect inventory and much more. Since Azure Automation uses PowerShell, Python, and graphical runbooks, it can manage diverse software and hardware that supports PowerShell or has APIs. With Azure Arc, any on-premises connected machines and the applications they host can be integrated and automated with any Azure Automation workflow. These workflows can also be run locally on disconnected machines.

For more information about Azure Automation visit https://azure.microsoft.com/en-in/services/automation.
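
As an illustration of event-driven automation, a runbook can be invoked from outside Azure through a webhook. This is a hypothetical sketch assuming a webhook has already been created for a runbook; Azure generates the token URL at creation time, and the URL and parameter name below are placeholders:

```python
# Trigger an Azure Automation runbook through its webhook.
# The URL is a placeholder; Azure shows the tokenized URL once,
# when the webhook is created for the runbook.
import json
import urllib.request

WEBHOOK_URL = "https://<region>.webhook.azure-automation.net/webhooks?token=..."  # placeholder

# Runbook input parameters ride in the POST body.
payload = json.dumps({"MachineName": "onprem-sql-01"}).encode("utf-8")
req = urllib.request.Request(
    WEBHOOK_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    # Azure Automation typically replies with 202 Accepted and a
    # job ID payload that can be used to track the runbook job.
    print(resp.status, resp.read().decode())
```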

Azure Cost Management & Billing with Azure Arc

Microsoft Azure and other cloud providers use a consumption-based billing model so that tenants only pay for the resources they consume. Azure Cost Management and Billing provides granular information to understand how cloud storage, network, memory, CPUs and any Azure services are being used. Organizations can set thresholds and get alerts when any consumer or business unit approaches or exceeds their limits. With Azure Arc, organizations can use cloud billing to optimize and manage costs for their on-premises resources as well. In addition to Microsoft Azure and Microsoft hybrid cloud workloads, all Amazon AWS spending can be integrated into the same dashboard.

For more information about Azure Cost Management and Billing visit https://azure.microsoft.com/en-us/services/cost-management.

Azure Data Services with Azure Arc

Azure Data Services is the first major service provided by Azure Arc for on-premises servers. This was the top request of many organizations which want the management capabilities of Microsoft Azure, yet need to keep their data on-premises for data sovereignty. This makes Azure Data Services accessible to companies that must keep their customers’ data onsite, such as those working within regulated industries or those which do not have an Azure datacenter within their country.

In the initial release, both Azure SQL Database and Azure Database for PostgreSQL Hyperscale will be available for on-premises deployments. Organizations can now run and offer database as a service (DBaaS) to their tenants as a platform as a service (PaaS) offering. This makes it easier for users to deploy and manage cloud databases on their own infrastructure, without the overhead of setting up and maintaining a physical server or virtual machine themselves. Azure Data Services on Azure Arc still require an underlying Kubernetes cluster, but many Kubernetes management frameworks are supported by Microsoft Azure and Azure Arc.

All of the other Azure Arc benefits are included with the data services, such as automation, backup, monitoring, scaling, security, patching and cost management. Additionally, Azure Data Services can run on both connected and disconnected machines. The latest features and updates to the data services are automatically pushed down from Microsoft Azure to Azure Arc members so that the infrastructure is always current and consistent.

For more information about Azure Data Services with Azure Arc visit https://azure.microsoft.com/en-us/services/azure-arc/hybrid-data-services.
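
From an application’s perspective, a database deployed through Azure Arc is reached the same way as one hosted in Azure: an ordinary connection string pointing at wherever the instance runs. Here is a minimal sketch using the pyodbc package; the server address, database name and credentials are placeholders:

```python
# Connect to a SQL database regardless of where the managed instance
# actually runs (Azure public cloud or an on-premises Arc deployment).
# pip install pyodbc  (also requires the Microsoft ODBC Driver for SQL Server)
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql-onprem.example.internal,1433;"  # placeholder endpoint
    "DATABASE=inventory;"                       # placeholder database
    "UID=app_user;PWD=<secret>;"                # placeholder credentials
    "Encrypt=yes;TrustServerCertificate=no;"
)
cursor = conn.cursor()
cursor.execute("SELECT @@VERSION")
print(cursor.fetchone()[0])
```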

Cloud Availability with Azure Arc

One of the main advantages offered by Microsoft Azure is access to vast hardware capacity spread across multiple datacenters, which provides business continuity. This gives Azure customers numerous ways to increase service availability, retain more backups, and gain disaster recovery capabilities. With the introduction of Azure Arc, these features provide even greater integration between on-premises servers and Microsoft Azure.

Azure Availability & Resiliency with Azure Arc

With Azure Arc, organizations can leverage Azure’s availability and resiliency features for their on-premises servers. Virtual Machine Scale Sets allow automatic application scaling by rapidly deploying dozens (or thousands) of VMs to quickly increase the processing capabilities of a cloud application. Integrated load balancing will distribute network traffic, and redundancy is built into the infrastructure to eliminate single points of failure. VM Availability Sets give administrators the ability to select a group of related VMs and force them to distribute themselves across different physical servers. This is recommended for redundant servers or guest clusters where it is important to have each virtualized instance spread out so that the loss of a single host will not take down an entire service. Azure Availability Zones extend this concept across multiple datacenters by letting organizations deploy datacenter-wide protection schemes that distribute applications and their data across multiple sites. Azure’s automated updating solutions are availability-aware, so they will keep services online during a patching cycle, serially updating and rebooting a subset of hosts. Azure Arc helps hybrid cloud services take advantage of all of these Azure resiliency features.

For more information about Azure availability and resiliency visit https://azure.microsoft.com/en-us/features/resiliency.

Azure Backup & Restore with Azure Arc

Many organizations limit their backup plans because of storage constraints, since it can be costly to store large amounts of data that may never need to be accessed again. Azure Backup helps organizations by allowing their data to be backed up and stored on Microsoft Azure. This usually reduces costs, as users are only paying for the storage capacity they use. Additionally, storing backups offsite helps minimize data loss, as offsite backups provide resiliency to site-wide outages and can protect customers from ransomware. Azure Backup also offers compression, encryption and retention policies to help organizations in regulated industries. Azure Arc manages the backups and recovery of on-premises servers with Microsoft Azure, with the backups being stored in the customer’s own datacenter or in Microsoft Azure.

For more information about Azure Backup visit https://azure.microsoft.com/en-us/services/backup.

Azure Site Recovery & Geo-Replication with Azure Arc

One of the more popular hybrid cloud features enabled with Microsoft Azure is the ability to replicate data from an on-premises location to Microsoft Azure using Azure Site Recovery (ASR). This allows users to have a disaster recovery site without needing to operate a second datacenter. ASR is easy to deploy, configure and operate, and it can even test disaster recovery plans. Using Azure Arc it is possible to set up geo-replication to move data and services from a managed datacenter running Windows Server Hyper-V or VMware vCenter, or from the Amazon Web Services (AWS) public cloud. Destination datacenters can include other datacenters managed by Azure Arc, Microsoft Azure and Amazon AWS.

For more information about Azure Site Recovery visit https://azure.microsoft.com/en-us/services/site-recovery.

Cloud Management with Azure Arc

Azure Arc introduces some on-premises management benefits which were previously available only in Microsoft Azure. These help organizations administer legacy hardware and disconnected machines with Azure-consistent features using multiple management tools.

Management Tools with Azure Arc

One of the fundamental design concepts of Microsoft Azure is to have centralized management layers (“planes”) that support diverse hardware, data, and administrative tools. The fabric plane controls the hardware through a standard set of interfaces and APIs. The data plane allows unified management of structured and unstructured data. And the control plane offers centralized management through various interfaces, including the GUI-based Azure Portal, Azure PowerShell, and other APIs. Each of these layers interfaces with each other through a standard set of controls, so that the operational steps will be identical whether a user deploys a VM via the Azure Portal or via Azure PowerShell. Azure Arc can manage cloud resources with the following Azure Developer Tools:

At the time of this writing, the documentation for Azure Arc is not yet available, but some examples can be found in the quick start guides which are linked in the Getting Started with Azure Arc section.

Managing Legacy Hardware with Azure Arc

Azure Arc is hardware-agnostic, allowing Azure to manage a customer’s diverse or legacy hardware just like an Azure datacenter server. The hardware must meet certain requirements so that a virtualized Kubernetes cluster can be deployed on it, as Azure services run on this virtualized infrastructure. In the Public Preview, servers must be running Windows Server 2012 R2 (or newer) or Ubuntu 16.04 or 18.04. Over time, additional servers will be supported, with rumors of 32-bit (x86), Oracle and Linux hosts being supported as infrastructure servers.

Offline Management with Azure Arc

Azure Arc will even be able to manage servers that are not regularly connected to the Internet, as is common with the military, emergency services, and sea vessels. Azure Arc has a concept of “connected” and “disconnected” machines. Connected servers have an Azure Resource ID and are part of an Azure Resource group. If a server does not sync with Microsoft Azure every 5 minutes, it is considered disconnected, yet it can continue to run its local resources. Azure Arc allows these organizations to use the latest Azure services when they are connected, yet still use many of these features if the servers do not maintain an active connection, including Azure Data Services. Even some services which run Azure AI and are hosted in Microsoft Azure can work in disconnected environments while running on Azure Stack Edge.

Always Up-To-Date Tools with Azure Arc

One of the advantages of using Microsoft Azure is that all the services are kept current by Microsoft. The latest features, best practices, and AI learning are automatically available to all users in real-time as soon as they are released. When an admin logs into the Azure Portal through a web browser, they are immediately exposed to the latest technology to manage their distributed infrastructure. By ensuring that all users have the same management interface and APIs, Microsoft can guarantee consistency of behavior for all users across all hardware, including on-premises infrastructure when using Azure Arc. However, if the hardware is in a disconnected environment (such as on a sea vessel), there could be some configuration drift as older versions of Azure data services and Azure management tools may still be used until they are reconnected and synced.

Cloud Security & Compliance with Azure Arc

Public cloud services like Microsoft Azure are able to offer industry-leading security and compliance due to their scale and expertise. Microsoft employs more than 3,500 of the world’s leading security engineers who have been collaborating for decades to build the industry’s safest infrastructure. Through its billions of endpoints, Microsoft Azure leverages Azure AI to identify anomalies and detect threats before they become widespread. Azure Arc extends all of the security features offered in Microsoft Azure to on-premises infrastructure, including key vaults, monitoring, policies, security, threat protection, and update management.

Azure Key Vault with Azure Arc

When working in a distributed computing environment, managing credentials, passwords, and user access can become complicated. Azure Key Vault is a service that helps enhance data protection and compliance by securely protecting all keys and monitoring access. Azure Key Vault is supported by Azure Arc, allowing credentials for on-premises services and hybrid clouds to be centrally managed through Azure.

For more information about Azure Key Vault visit https://azure.microsoft.com/en-us/services/key-vault.
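
For instance, an application on an Arc connected machine could fetch its credentials from a central vault at startup instead of keeping them on local disk. Here is a minimal sketch using the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are placeholders:

```python
# Read a secret from Azure Key Vault.
# pip install azure-identity azure-keyvault-secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

vault_url = "https://my-vault.vault.azure.net"  # placeholder vault

# DefaultAzureCredential works with managed identities, environment
# variables, or an interactive login, so the same code can run in
# Azure and on on-premises machines.
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

secret = client.get_secret("onprem-db-password")  # placeholder secret name
print(secret.name, "retrieved; version:", secret.properties.version)
```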

Azure Monitor with Azure Arc

Azure Monitor is a service that collects and analyzes telemetry data from Azure infrastructure, networks, and applications. The logs from managed services are sent to Azure Monitor where they are aggregated and analyzed. If a problem is identified, such as an offline server, it can trigger alerts or use Azure Automation to launch recovery workflows. Azure Arc can now monitor on-premises servers, networks, virtualization infrastructure, and applications, just like they were running in Azure. It even leverages Azure AI and Azure Automation to make recommendations and fixes to hybrid cloud infrastructure.

For more information about Azure Monitor visit https://azure.microsoft.com/en-us/services/monitor.
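
Because connected machines report into a Log Analytics workspace, the same Kusto queries used for Azure VMs work against on-premises servers. The following sketch uses the azure-monitor-query package; the workspace ID is a placeholder, and the Heartbeat table is populated by the monitoring agents:

```python
# Query a Log Analytics workspace for machines that stopped heartbeating.
# pip install azure-identity azure-monitor-query
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "00000000-0000-0000-0000-000000000000"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Latest heartbeat per computer over the past day; stale entries
# suggest a disconnected or unhealthy machine.
query = """
Heartbeat
| summarize LastSeen = max(TimeGenerated) by Computer
| order by LastSeen asc
"""

response = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))
for table in response.tables:  # assumes the query succeeded fully
    for row in table.rows:
        print(row["Computer"], row["LastSeen"])
```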

Azure Policy with Azure Arc

Most enterprises have certain compliance requirements for their IT infrastructure, especially those organizations within regulated industries. Azure Policy uses Microsoft Azure to audit an environment and aggregate all the compliance data into a single location. Administrators can get alerted about misconfigurations or configuration drift and even trigger automated remediation using Azure Automation. Azure Policy can be used with Azure Arc to apply policies on all connected machines, providing the benefits of cloud compliance to on-premises infrastructure.

For more information about Azure Policy visit https://azure.microsoft.com/en-us/services/azure-policy.
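
Programmatically, assigning a policy to the resource group that contains Arc connected machines looks the same as assigning it to any other Azure scope. This is a hypothetical sketch using the azure-mgmt-resource package; the subscription, resource group, assignment name and policy definition ID are placeholders:

```python
# Assign a policy definition at resource-group scope.
# pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
scope = f"/subscriptions/{subscription_id}/resourceGroups/onprem-arc-rg"  # placeholder

client = PolicyClient(DefaultAzureCredential(), subscription_id)

# The definition ID below is a placeholder for whichever built-in or
# custom policy should apply to the machines in this scope.
assignment = client.policy_assignments.create(
    scope,
    "audit-arc-machines",  # placeholder assignment name
    {
        "policy_definition_id": "/providers/Microsoft.Authorization/"
                                "policyDefinitions/<built-in-definition-guid>",
        "display_name": "Audit Arc connected machines",
    },
)
print(assignment.name, assignment.policy_definition_id)
```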

Azure Security Center with Azure Arc

The Azure Security Center centralizes all security policies and protects the entire managed environment. When Security Center is enabled, the Azure monitoring agents will report data back from the servers, networks, virtual machines, databases, and applications. The Azure Security Center analytics engines will ingest the data and use AI to provide guidance. It will recommend a broad set of improvements to enhance security, such as closing unnecessary ports or encrypting disks. Perhaps most importantly it will scan all the managed servers and identify updates that are missing, and it can use Azure Automation and Azure Update Management to patch those vulnerabilities. Azure Arc extends these security features to connected machines and services to protect all registered resources.

For more information about Azure Security Center visit https://azure.microsoft.com/en-us/services/security-center.

Azure Advanced Threat Protection with Azure Arc

Azure Advanced Threat Protection (ATP) is one of the industry’s leading cloud security solutions, looking for anomalies and potential attacks with Azure AI. Azure ATP will look for suspicious computer or user activities and report any alerts in real time. Azure Arc lets organizations extend this cloud protection to their hybrid and on-premises infrastructure, offering leading threat protection across all of their cloud resources.

For more information about Azure Advanced Threat Protection visit https://azure.microsoft.com/en-us/features/azure-advanced-threat-protection.

Azure Update Management with Azure Arc

Microsoft Azure automates the process of applying patches, updates and security hotfixes to the cloud resources it manages. With Update Management, a series of updates can be scheduled and deployed on non-compliant servers using Azure Automation. Update management is aware of clusters and availability sets, ensuring that a distributed workload remains online while its infrastructure is patched by live migrating running VMs or containers between hosts. Azure will centrally manage updates, assessment reports, deployment results, and can create alerts for failures or other conditions. Organizations can use Azure Arc to automatically analyze and patch their on-premises and connected servers, virtual machines, and applications.

For more information about Azure Update Management visit https://docs.microsoft.com/en-us/azure/automation/automation-update-management.

Role-Based Access Control (RBAC) with Azure Arc

Controlling access to different resources is a critical function for any organization to enforce security and compliance. Microsoft Azure Active Directory (Azure AD) allows its customers to define granular access control for every user or user role based on different types of permissions (read, modify, delete, copy, sharing, etc.). There are also over 70 user roles provided by Azure, such as Global Administrator, Virtual Machine Contributor or Billing Administrator. Azure Arc lets businesses extend role-based access control (RBAC) managed by Azure to on-premises environments. This means that any groups, policies, settings, security principals and managed identities that were deployed by Azure AD can now be applied across all managed cloud resources. Azure AD also provides auditing, so it is easy to track any changes made by users or security principals across the hybrid cloud.

For more information about Role-Based Access Control visit https://docs.microsoft.com/en-us/azure/role-based-access-control/overview.
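
Because every Arc connected machine is an Azure resource with a resource ID, existing role assignments can be inspected at any scope with the standard authorization APIs. Here is a minimal sketch using the azure-mgmt-authorization package; the subscription and resource group names are placeholders:

```python
# List role assignments at a scope, e.g. the resource group that
# holds Arc connected machines.
# pip install azure-identity azure-mgmt-authorization
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
scope = f"/subscriptions/{subscription_id}/resourceGroups/onprem-arc-rg"  # placeholder

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

for assignment in client.role_assignments.list_for_scope(scope):
    # Each assignment binds a principal (user, group, or service
    # principal) to a role definition at this scope.
    print(assignment.principal_id, assignment.role_definition_id)
```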

DevOps and Application Management with Azure Arc

Over the past few years, containers have become more commonplace as they provide certain advantages over VMs, allowing the virtualized applications and services to be abstracted from their underlying virtualized infrastructure. This means that containerized applications can be uniformly deployed anywhere with any tools so that users do not have to worry about the hardware configuration. This technology has become popular amongst application developers, enabling them to manage their entire application development lifecycle without having a dependency on the IT department to set up the physical or virtual infrastructure. This development methodology is often called DevOps. One of the key design requirements with Azure Arc was to make it hardware agnostic, so with Azure Arc, developers can manage their containerized applications the same way whether they are running in Azure, on-premises or in a hybrid configuration.

Azure Kubernetes Service (AKS) & Kubernetes App Management with Azure Arc

Kubernetes is a management tool that allows developers to deploy, manage and update their containers. Azure Kubernetes Service (AKS) is Microsoft’s managed Kubernetes offering, and it can be integrated with Azure Arc. This means that AKS can be used to manage on-premises servers running containers. In addition to Azure Kubernetes Service, Azure Arc can be integrated with other Kubernetes management platforms, including Amazon EKS, Google Kubernetes Engine, and IBM Kubernetes Service.

For more information about Azure Container Services visit https://azure.microsoft.com/en-us/product-categories/containers and for Azure Kubernetes Services (AKS) visit https://azure.microsoft.com/en-us/services/kubernetes-service.
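
What makes this cross-platform management feasible is that all of these services expose the standard Kubernetes API. Here is a sketch using the official kubernetes Python client, iterating over clusters registered as contexts in a local kubeconfig; the context names are placeholders:

```python
# Survey deployments across several clusters (AKS, EKS, GKE, ...)
# registered as contexts in the local kubeconfig.
# pip install kubernetes
from kubernetes import client, config

# Placeholder context names; each points at a different cluster.
for context in ("aks-prod", "eks-us-east-1", "gke-europe"):
    config.load_kube_config(context=context)
    apps = client.AppsV1Api()
    deployments = apps.list_deployment_for_all_namespaces()
    print(f"{context}: {len(deployments.items)} deployments")
    for dep in deployments.items:
        print(f"  {dep.metadata.namespace}/{dep.metadata.name} "
              f"ready_replicas={dep.status.ready_replicas}")
```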

Other DevOps Tools with Azure Arc

For container management on Azure Arc developers can use any of the common Kubernetes management platforms, including Azure Kubernetes Service, Amazon EKS, Google Kubernetes Engine, and IBM Kubernetes Service. All standard deployment and management operations are supported on Azure Arc hardware enabling cross-cloud management.

More information about the non-Azure management tools is provided in the Third-Party Integration with Azure Arc section.

DevOps On-Premises with Azure Arc

Many developers prefer to work on their own hardware and some are required to develop applications in a private environment to keep their data secure. Azure Arc allows developers to build and deploy their applications anywhere utilizing Azure’s cloud-based AI, security and other cloud features while retaining their data, IP or other valuable assets within their own private cloud. Additionally, Azure Active Directory can use role-based access control (RBAC) and Azure Policies to manage developer access to sensitive company resources.

Elastic Scalability & Rapid Deployment with Azure Arc

Containerized applications are designed to start quickly when running on a highly available Kubernetes cluster. Because the app is abstracted from the underlying operating system, it can be rapidly deployed and scaled. These applications can quickly grow to an effectively unlimited capacity when deployed on Microsoft Azure. When using Azure Arc, the applications can be managed across public and private clouds. Applications will usually contain several container types that can be deployed in different locations based on their requirements. A common deployment configuration for a two-tiered application is to deploy the web frontend on Microsoft Azure for scalability and the database in a secure private cloud backend.
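
To illustrate how lightweight that scaling operation is, increasing a frontend’s capacity amounts to patching the deployment’s replica count and letting the scheduler place the new pods. A sketch using the kubernetes Python client; the deployment and namespace names are placeholders:

```python
# Scale a web frontend deployment to 10 replicas.
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context
apps = client.AppsV1Api()

# Patch only the replica count; the cluster converges to the new state.
apps.patch_namespaced_deployment_scale(
    name="web-frontend",     # placeholder deployment
    namespace="production",  # placeholder namespace
    body={"spec": {"replicas": 10}},
)
print("scale request submitted")
```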

Hybrid Cloud Integration with Azure Arc

Microsoft’s hybrid cloud initiatives over the past few years have included certifying on-premises software and hardware configurations known as Azure Stack. Azure Stack allows organizations to run Azure-like services on their own hardware in their datacenter. It allows organizations that may be restricted from using public cloud services to utilize the best parts of Azure within their own datacenter. Azure Stack is most commonly deployed by organizations that have requirements to keep their customers’ data in-house (or within their territory) for data sovereignty, making it popular with customers who could not adopt the Microsoft Azure public cloud. Azure Arc easily integrates with Azure Stack Hub, Azure Stack Edge, and all the Azure Stack HCI configurations, allowing these services to be managed from Azure.

For more information about Azure Stack visit https://azure.microsoft.com/en-us/overview/azure-stack.

Azure Stack Hub with Azure Arc

Azure Stack Hub (formerly Microsoft Azure Stack) offers organizations a way to run Azure services from their own datacenter, from a service provider’s site, or from within an isolated environment. This cloud platform allows users to deploy Windows VMs, Linux VMs and Kubernetes containers on hardware which they operate. This offering is popular with developers who want to run services locally, organizations which need to retain their customers’ data onsite, and groups which are regularly disconnected from the Internet, as is common with sea vessels or emergency response personnel. Azure Arc allows Azure Stack Hub nodes to run supported Azure services (like Azure Data Services) while being centrally managed and optimized via Azure. These applications can be distributed across public, private or hybrid clouds.

For more information about Azure Stack Hub visit https://docs.microsoft.com/en-us/azure-stack/user/?view=azs-1908.

Azure Stack Edge with Azure Arc

Azure Stack Edge (previously Azure Data Box Edge) is a virtual appliance which can run on any hardware in a datacenter, branch office, remote site or disconnected environment. It is designed to run edge computing workloads on Hyper-V VMs, VMware VMs, containers and Azure services. These edge servers are optimized to run IoT, AI and business workloads so that processing can happen onsite, rather than being sent across a network to a cloud datacenter for processing. When the Azure Stack Edge appliance is (re)connected to the network it transfers any data at high speed, and data transfers can be scheduled to run during off-hours. It supports machine learning capabilities through FPGA or GPU. Azure Arc can centrally manage Azure Stack Edge, its virtual appliances and physical hardware.

For more information about Azure Stack Edge visit https://azure.microsoft.com/en-us/services/databox/edge.

Azure Stack Hyperconverged Infrastructure (HCI) with Azure Arc

Azure Stack Hyperconverged Infrastructure (HCI) is a program which provides preconfigured hyperconverged hardware from validated OEM partners, optimized to run Azure Stack. Businesses which want to run Azure-like services on-premises can purchase or rent hardware that has been standardized to Microsoft’s requirements. VMs, containers, Azure services, AI, IoT and more can run consistently on the Microsoft Azure public cloud or on Azure Stack HCI hardware in a datacenter. Cloud services can be distributed across multiple datacenters or clouds and centrally managed using Azure Arc.

For more information about Azure Stack HCI visit https://azure.microsoft.com/en-us/overview/azure-stack/hci.

Managed Service Providers (MSPs) with Azure Arc

Azure Lighthouse Integration with Azure Arc

Azure Lighthouse is a technology designed for managed service providers (MSPs), ISVs or distributed organizations which need to centrally manage their tenants’ resources. Azure Lighthouse allows service providers and tenants to create a two-way trust to allow unified management of cloud resources. Tenants will grant specific permissions for approved user roles on particular cloud resources, so that they can offload the management to their service provider. Now service providers can add their tenants’ private cloud environments under Azure Arc management, so that they can take advantage of the new capabilities which Azure Arc provides.

For more information about Azure Lighthouse visit https://azure.microsoft.com/en-us/services/azure-lighthouse or on the Altaro MSP Dojo.

Third-Party Integration with Azure Arc

Within the Azure management layer (control plane) exists Azure Resource Manager (ARM). ARM provides a way to easily create, manage, monitor and delete any Azure resource. Every native and third-party Azure resource uses ARM to ensure that it can be centrally managed through Azure management tools. Azure Arc now allows non-Azure resources to be managed by Azure. This can include third-party clouds (Amazon Web Services, Google Cloud Platform), Windows and Linux VMs, VMs on non-Microsoft hypervisors (VMware vSphere, Google Compute Engine, Amazon EC2), and Kubernetes containers and clusters (including IBM Kubernetes Service, Google Kubernetes Engine and Amazon EKS). At the time of this writing, limited information is available about third-party integration, but it will be added over time.

Amazon Web Services (AWS) Integration with Azure Arc

Amazon Web Services (AWS) is Amazon’s public cloud platform. Some services from AWS can be managed by Azure Arc. This includes operating virtual machines running on the Amazon Elastic Compute Cloud (EC2) and containers running on Amazon Elastic Kubernetes Service (EKS). Azure Arc also lets an AWS site be used as a geo-replicated disaster recovery location. AWS billing can also be integrated with Azure Cost Management & Billing so that expenses from both cloud providers can be viewed in a single location.

Additional information will be added once it is made available.

Google Cloud Platform (GCP) Integration with Azure Arc

Google Cloud Platform (GCP) is Google’s public cloud platform. Some services from GCP can be managed by Azure Arc. This includes operating virtual machines running on Google Compute Engine (GCE) and containers running on Google Kubernetes Engine (GKE).

Additional information will be added once it is made available.

IBM Kubernetes Service Integration with Azure Arc

IBM Cloud is IBM’s public cloud platform. Some services from IBM Cloud can be managed by Azure Arc. This includes operating containers running on IBM Kubernetes Service (Kube).

Additional information will be added once it is made available.

Linux VM Integration with Azure Arc

In 2014 Microsoft’s CEO Satya Nadella declared, “Microsoft loves Linux”. Since then the company has embraced Linux integration, making Linux a first-class citizen in its ecosystem. Microsoft even contributes code to the Linux kernel so that it operates efficiently when running as a VM or container on Microsoft’s operating systems. Virtually all management features for Windows VMs are available to supported Linux distributions, and this extends to Azure Arc. Azure Arc admins can use Azure to centrally create, manage and optimize Linux VMs running on-premises, just like any standard Windows VM.

VMware Cloud Solution Integration with Azure Arc

VMware offers a popular virtualization platform and management studio (vSphere) which runs on VMware’s hypervisor. Microsoft has acknowledged that many customers running legacy on-premises hardware are using VMware, so it provides numerous integration points to Azure and Azure Arc. Organizations can even virtualize and deploy their entire VMware infrastructure on Azure, rather than in their own datacenter. Microsoft makes it easy to deploy, manage, monitor and migrate VMware systems, and with Azure Arc businesses can now centrally operate their on-premises VMware infrastructure too. While the full management functionality of VMware vSphere is not available through Azure Arc, most standard operations are supported.

For more information about VMware Management with Azure visit https://azure.microsoft.com/en-us/overview/azure-vmware.



McAfee launches security tool Mvision Cloud for Containers

Cybersecurity company McAfee on Tuesday announced McAfee Mvision Cloud for Containers, a product intended to help organizations ensure security and compliance of their cloud container workloads.

Mvision Cloud for Containers integrates container security with McAfee’s cloud access security broker (CASB) and cloud security posture management (CSPM) tools, according to the company.

“Data could … move between SaaS offerings, IaaS custom apps in various CSPs, containers and hybrid clouds. We want security to be consistent and predictable across the places data live and workloads are processed. Integrating CASB and CSPM allows McAfee to provide consistent configuration policies and DLP/malware scanning that does not restrict the flexibility of the cloud,” said John Dodds, a director of product management at McAfee.

According to Andras Cser, vice president and principal analyst for security and risk management at Forrester, when it comes to evaluating a product like Mvision, it’s worth looking at factors such as “price, cost of integration, level of integration between acquired components and coverage of the client’s applications.”

Mvision Cloud uses zero-trust application visibility and control capabilities developed by container security startup NanoSec for container-based deployments in the cloud. McAfee acquired NanoSec in September in a move to expand its container cloud security offerings.

Mvision Cloud for Containers builds on the existing McAfee Mvision Cloud platform, integrating cloud security posture management and vulnerability scanning for container workloads so that security policies can be implemented across different forms of cloud IaaS workloads, according to the company.

Other features of McAfee Mvision Cloud for Containers include:

  • Cloud security posture management: Ensures the container platforms run in accordance with Center for Internet Security and other compliance standards by integrating configuration audit checks to container workloads.
  • Container images vulnerability scanning: Identifies weak or exploitable elements in container images to reduce the application’s risk profile.
  • DevOps integration: Ensures compliance and secures container workloads; executes security audits and vulnerability scanning to identify risk and send security incidents and feedback to developers within the build process; and monitors and prevents configuration drift on production deployments of the container workloads.


DOJ takes action against Dridex malware group, Evil Corp

The U.S. and the U.K. announced criminal charges and sanctions against alleged members of the Russian threat group Evil Corp, which is responsible for the Dridex malware.

The U.S. Department of Justice indicted Maksim Yakubets, 32, of Russia on counts of computer hacking and bank fraud. The State Department offered up to $5 million for information leading to the arrest and/or conviction of Yakubets, who is the alleged leader of Evil Corp. Additionally, the DOJ indicted Igor Turashev, 38, in relation to the Dridex banking Trojan.

The Department of Treasury announced sanctions against Evil Corp, which has been active since 2009 and has been connected to the Zeus, Bugat and Dridex malware. According to the Treasury Department announcement, “Evil Corp has used the Dridex malware to infect computers and harvest login credentials from hundreds of banks and financial institutions in over 40 countries, causing more than $100 million in theft.”

Assistant Attorney General Brian Benczkowski of the Justice Department’s criminal division noted in the DOJ press release that the U.K. National Crime Agency (NCA) was “crucial” in efforts to identify Yakubets and other members of Evil Corp.

The DOJ unsealed two indictments — one filed on Nov. 12 in the Western District of Pennsylvania and one filed Nov. 14 in the District of Nebraska. The former indictment named both Yakubets and Turashev in multiple fraud attempts using Dridex malware beginning in Nov. 2011, including an attempted transfer of $999,000 from the Sharon City School District and an attempt to transfer nearly $2.2 million from Penneco Oil. In total, the indictment filed in Pennsylvania included 10 charges of conspiracy, fraud and intentional damage to a computer.

The indictment filed in Nebraska only named Yakubets and listed 21 businesses and local government offices targeted across the country, nine of which were financial institutions, and covered incidents dating back to 2009. 

According to the DOJ, Yakubets went by the handle “aqua” online. A case from the District of Nebraska charged a John Doe “also known as ‘aqua’” and resulted in the extradition of two Ukrainian nationals from the U.K. to the U.S. in 2014. Those Ukrainians had previously been convicted in the U.K. of laundering money for Evil Corp.

The Treasury Department said that its sanctions target “17 individuals and seven entities to include Evil Corp, its core cyber operators, multiple businesses associated with a group member, and financial facilitators utilized by the group.” The announcement went on to name Denis Gusev as a senior member of Evil Corp, as well as entities owned or controlled by Gusev, six other members of the group and eight known financial facilitators.

Previous attempts

These actions are not the first taken against Dridex malware threat actors. In October 2015, the DOJ indicted Andrey Ghinkul in connection with spreading the malware. Ghinkul was arrested in August 2015 in Cyprus and extradited to the U.S. in February 2016.

At the time, Brad Duncan, security researcher at Rackspace, noted that Dridex incidents had disappeared in September following Ghinkul’s arrest, but new instances of the malware began appearing again before the DOJ announced the indictment.

In October 2015, both the FBI and NCA set up sinkholes in efforts to stop the malware from connecting to command and control servers. But by January 2016, IBM security researchers confirmed a new version of Dridex malware was targeting banks in the U.K.

Earlier this year, Chronicle released the results of a five-year study into crimeware, which included looking at arrests made in connection with Zeus and Dridex malware, and found that law enforcement takedown attempts had only short-lived impacts if the masterminds behind such crimeware were not apprehended.


Microsoft extends Office 365 benefit to nonprofit volunteers

Microsoft on Thursday announced a new Office 365 benefit, offering enterprise-sized nonprofit customers free additional Office 365 F1 seats for their volunteers.

The new Office 365 benefit enables nonprofit customers who have Enterprise Agreements with Microsoft to receive 10 free Office 365 F1 seats for their volunteers per licensed Microsoft 365 E3 or E5 seat. Office 365 F1 includes applications for email, calendars, team collaboration, messaging, intranet, file storage and sharing. Nonprofits with 250 or more users in their organization are eligible for the Enterprise Agreement. The offer starts Jan. 1, according to the company.

Microsoft Cloud Solution Providers will be able to offer the Volunteer Use Benefit to customers directly via the Cloud Solution Provider Channel in spring 2020, according to the company.

The news comes after the company’s announcement in September offering nonprofits free Microsoft 365 Business for up to 10 users. Thursday’s announcement extended this Office 365 benefit to nonprofit volunteers.

This is not the first time Microsoft has donated or provided services for free. Some of its collaboration software programs, such as Exchange, OneDrive, SharePoint and Teams, are already available to qualified nonprofits. “But it does mark a significant expansion of access for nonprofits who already pay for Office 365. Keep in mind that Microsoft has long had steep discounts for students and educators, as well,” said Nicole France, principal analyst and vice president at Constellation Research.

The recent move is motivated by several factors, she said. “One is certainly ‘keeping up with the Joneses’ or Salesforces, as the case may be, in terms of publicizing and extending support for the nonprofit sector,” France said.

Another factor has to do with the way Microsoft wants to be perceived by current and potential employees, especially millennials, France said. “We know that this demographic group in particular — an increasingly important one, in terms of recruiting and retention — is strongly motivated by an employer’s mission in the world, not just its commercial business. I suspect this is a significant part of the rationale for giving the nonprofit sector some additional love and attention.”

Lastly, she said, the offering addresses the pressing need for nonprofits to provide appropriate tools to their large numbers of volunteers.


4 SD-WAN vendors integrate with AWS Transit Gateway

Several software-defined WAN vendors have announced integration with Amazon Web Services’ Transit Gateway. For SD-WAN users, the integrations promise simplified management of policies governing connectivity among private data centers, branch offices and AWS virtual networks.

Stitching together workloads across cloud and corporate networks is complex and challenging. AWS tackles the problem by making AWS Transit Gateway the central router of all traffic emanating from connected networks.

Cisco, Citrix Systems, Silver Peak and Aruba, a Hewlett Packard Enterprise Company, launched integrations with the gateway this week. The announcements came after AWS unveiled the AWS Transit Gateway at its re:Invent conference in Las Vegas.

SD-WAN vendors lining up quickly to support the latest AWS integration tool didn’t surprise analysts. “The ease and speed of integration with leading IaaS platforms are key competitive issues for SD-WAN for 2020,” said Lee Doyle, the principal analyst for Doyle Research.

By acting as the network hub, Transit Gateway reduces operational costs by simplifying network management, according to AWS. Before the new service, companies had to build individual connections between each network outside of AWS and each virtual network serving applications inside AWS.
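
The hub-and-spoke arithmetic explains the appeal: fully meshing N networks requires on the order of N(N-1)/2 pairwise links, while a hub needs only N attachments. As a rough illustration of the model, the minimal Python/boto3 sketch below creates a transit gateway and attaches one VPC to it; the region and all resource IDs are hypothetical placeholders, and a production setup would also involve route tables and the SD-WAN vendor’s own tooling rather than raw API calls.

```python
# Minimal sketch of the Transit Gateway hub-and-spoke model, using defaults.
# The region and all resource IDs below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the transit gateway that will act as the central router.
resp = ec2.create_transit_gateway(
    Description="Hub for branch, data center and VPC traffic",
    Options={
        "DefaultRouteTableAssociation": "enable",  # auto-associate attachments
        "DefaultRouteTablePropagation": "enable",  # auto-propagate routes
    },
)
tgw_id = resp["TransitGateway"]["TransitGatewayId"]

# Attach one VPC; each additional VPC or VPN needs only one more attachment,
# not a new link to every other network. In practice you must first wait for
# the gateway to reach the "available" state before attaching.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)
```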

The potential benefits of Transit Gateway made connecting to it a must-have for SD-WAN suppliers. However, tech buyers should pay close attention to how each vendor configures its integration.

“SD-WAN vendors have different ways of doing things, and that leads to some solutions being better than others,” Doyle said.

What the 4 vendors are offering

Cisco said its integration would let IT teams use the company’s vManage SD-WAN controller to administer connectivity from branch offices to AWS. As a result, engineers will be able to apply network segmentation and data security policies universally through the Transit Gateway.

Aruba will let customers monitor and manage connectivity either through the Transit Gateway or Aruba Central. The latter is a cloud-based console used to control an Aruba-powered wireless LAN.

Silver Peak is providing integration between the Unity EdgeConnect SD-WAN platform and Transit Gateway. The link will make the latter the central control point for connectivity.

Finally, Citrix’s Transit Gateway integration would let its SD-WAN orchestration service connect branch offices and data centers to AWS. The connections will be particularly helpful to organizations running Citrix’s virtual desktops and associated apps on AWS.


Microsoft Open Data Project adopts new data use agreement for datasets


Last summer we announced Microsoft Research Open Data—an Azure-based repository-as-a-service for sharing datasets—to encourage the reproducibility of research and make research data assets readily available in the cloud. Among other things, the project started a conversation between the community and Microsoft’s legal team about dataset licensing. Inspired by these conversations, our legal team developed a set of brand-new data use agreements and released them for public comment on GitHub earlier this year.

Today we’re excited to announce that Microsoft Research Open Data will be adopting these data use agreements for several datasets that we offer.

Diving a bit deeper into the new data use agreements

The Open Use of Data Agreement (O-UDA) is intended for use by an individual or organization that is able to distribute data for unrestricted uses, and for which there is no privacy or confidentiality concern. It is not appropriate for datasets that might include materials subject to privacy laws (such as the GDPR or HIPAA) or other unlicensed third-party materials. The O-UDA meets the open definition: it does not impose any restriction on the use or modification of data other than ensuring that attribution and limitation-of-liability information is passed downstream. In the research context, this implies that users of the data need to cite the corresponding publication with which the data is associated. This aids the findability and reusability of data, an important tenet of the FAIR guiding principles for scientific data management and stewardship.

We also recognize that in certain cases, datasets useful for AI and research analysis may not qualify as fully “open” under the O-UDA. For example, they may contain third-party copyrighted materials, such as text snippets or images, from publicly available sources. The law permits their use for research, so following the principle that research data should be “as open as possible, as closed as necessary,” we developed the Computational Use of Data Agreement (C-UDA) to make data available for research while respecting other interests. We will prefer the O-UDA where possible, but we see the C-UDA as a useful tool for ensuring that researchers continue to have access to important and relevant datasets.

Datasets that reflect the goals of our project

The following examples reference datasets that have adopted the Open Use of Data Agreement (O-UDA).

Location data for geo-privacy research

Microsoft researcher John Krumm and collaborators collected GPS data from 21 people who carried GPS receivers in the Seattle area. Participants agreed to have their data shared as long as certain geographic regions were deleted. This work covers key research on privacy preservation of GPS data, as evidenced in the corresponding paper, “Exploring End User Preferences for Location Obfuscation, Location-Based Services, and the Value of Location,” which was accepted at the Twelfth ACM International Conference on Ubiquitous Computing (UbiComp 2010). The paper has been cited 147 times, including by research that builds on this work to further the field of geo-privacy preservation for location-based services.
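
As a toy illustration of the kind of obfuscation studied in this line of work (not the method from the paper), the sketch below coarsens GPS fixes to a grid and drops fixes inside regions a participant asked to have deleted; all coordinates and region bounds are made-up examples.

```python
# A toy illustration of location obfuscation, not the method from the paper:
# snap GPS fixes to a coarse grid and drop fixes inside regions a participant
# asked to have deleted. All coordinates below are made-up examples.
from typing import List, Tuple

Point = Tuple[float, float]              # (latitude, longitude)
Box = Tuple[float, float, float, float]  # (min_lat, min_lon, max_lat, max_lon)

def obfuscate(track: List[Point], deleted: List[Box],
              grid: float = 0.01) -> List[Point]:
    """Round each fix to a roughly 1 km grid; remove fixes in deleted regions."""
    out = []
    for lat, lon in track:
        if any(b[0] <= lat <= b[2] and b[1] <= lon <= b[3] for b in deleted):
            continue  # participant asked for this region to be removed
        out.append((round(round(lat / grid) * grid, 4),
                    round(round(lon / grid) * grid, 4)))
    return out

# One fix falls inside the (hypothetical) deleted region and is dropped;
# the other is snapped to the grid.
print(obfuscate([(47.6205, -122.3493), (47.6100, -122.2000)],
                deleted=[(47.60, -122.21, 47.62, -122.19)]))
```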

Hand gestures data for computer vision

Another example dataset is that of labeled hand images and video clips collected by researchers Eyal Krupka, Kfir Karmon, and others. The research addresses an important computer vision and machine learning problem: developing a hand-gesture-based interface language. The data was recorded using depth cameras and includes labels covering joints and fingertips. The two datasets included are FingersData, which contains 3,500 labeled depth frames of various hand poses, and GestureClips, which contains 140 gesture clips (100 with labeled hand gestures and 40 with non-gesture activity). The research associated with this dataset is available in the paper “Toward Realistic Hands Gesture Interface: Keeping it Simple for Developers and Machines,” published in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems.

Question-Answer data for machine reading comprehension

Finally, the FigureQA dataset, generated by researchers Samira Ebrahimi Kahou, Adam Atkinson, Adam Trischler, Yoshua Bengio and collaborators, introduces a visual reasoning task specific to graphical plots and figures. The dataset has 180,000 figures with 1.3 million question-answer pairs in the training set. More details are available in the paper “FigureQA: An Annotated Figure Dataset for Visual Reasoning” and the corresponding Microsoft Research Blog post. The dataset is pivotal to developing more powerful visual question answering and reasoning models, which could improve the accuracy of AI systems that make decisions based on charts and graphs.
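
For readers who want to explore the data, a minimal sketch of iterating over the question-answer annotations might look like the following; the file path and field names (“qa_pairs”, “image_index”, “question_string”, “answer”) are assumptions to verify against the dataset’s own documentation.

```python
# Minimal sketch of reading FigureQA question-answer annotations. The path and
# the field names ("qa_pairs", "image_index", "question_string", "answer") are
# assumptions; check them against the dataset's documentation.
import json

with open("train1/qa_pairs.json") as f:   # hypothetical location of one split
    pairs = json.load(f)["qa_pairs"]

print(f"{len(pairs)} question-answer pairs")
for qa in pairs[:3]:
    # Each question references a rendered figure by index; answers are yes/no.
    print(qa["image_index"], qa["question_string"], "->", qa["answer"])
```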

The data agreements are a part of our larger goals

The Microsoft Research Open Data project was conceived from the start to reflect Microsoft Research’s commitment to fostering open science and research, and to achieve this without compromising the ethics of collecting and sharing data. Our goal is to make it easier for researchers to maintain the provenance of data while having the ability to reference and build upon it.

The addition of the new data agreements to Microsoft Research Open Data’s feature set is an exciting step in furthering our mission.

Acknowledgements: This work would not have been possible without the substantial team effort by Dave Green, Justin Colannino, Gretchen Deo, Sarah Kim, Emily McReynolds, Mario Madden, Emily Schlesinger, Elaine Peterson, Leila Stevenson, Dave Baskin, and Sergio Loscialo.

Author: Microsoft News Center