
Azure ExpressRoute updates – New partnerships, monitoring and simplification

Azure ExpressRoute allows enterprise customers to privately and directly connect to Microsoft’s cloud services, providing a more predictable networking experience than traditional internet connections. ExpressRoute is available in 42 peering locations globally and is supported by a large ecosystem of more than 100 connectivity providers. Leading customers use ExpressRoute to connect their on-premises networks to Azure, as a vital part of managing and running their mission-critical applications and services.

Cisco to build Azure ExpressRoute practice

As we continue to grow the ExpressRoute experience in Azure, we’ve found our enterprise customers benefit from understanding networking issues that occur in their internal networks with hybrid architectures. These issues can impact their mission-critical workloads running in the cloud.

To help address on-premises issues, which often require deep technical networking expertise, we continue to partner closely with Cisco to provide a better customer networking experience. Working together, we can solve the most challenging networking issues encountered by enterprise customers using Azure ExpressRoute.

Today, Cisco announced an extended partnership with Microsoft to build a new network practice providing Cisco Solution Support for Azure ExpressRoute. We are fully committed to working with Cisco and other partners with deep networking experience to build and expand on their networking practices and help accelerate our customers’ journey to Azure.

Cisco Solution Support provides customers with additional centralized options for support and guidance for Azure ExpressRoute, targeting the customer’s on-premises end of the network.

New monitoring options for ExpressRoute

To provide more visibility into ExpressRoute network traffic, Network Performance Monitor (NPM) for ExpressRoute will be generally available in six regions in mid-February, following a successful preview announced at Microsoft Ignite 2017. NPM enables customers to continuously monitor their ExpressRoute circuits and alert on several key networking metrics, including availability, latency and throughput, in addition to providing a graphical view of the network topology.

NPM for ExpressRoute can easily be configured through the Azure portal to quickly start monitoring your connections.

We will continue to enhance the footprint, features and functionality of NPM for ExpressRoute to provide richer monitoring capabilities for ExpressRoute.


Figure 1: Network Performance Monitor and Endpoint monitoring simplify ExpressRoute monitoring

Endpoint monitoring for ExpressRoute enables customers to monitor connectivity not only to PaaS services such as Azure Storage but also SaaS services such as Office 365 over ExpressRoute. Customers can continuously measure and alert on the latency, jitter, packet loss and topology of their circuits from any site to PaaS and SaaS services. A new preview of Endpoint Monitoring for ExpressRoute will be available in mid-February.

Simplifying ExpressRoute peering

To further simplify management and configuration of ExpressRoute, we have merged public and Microsoft peerings. Azure PaaS services such as Azure Storage and Azure SQL are now available on Microsoft peering, along with Microsoft SaaS services (Dynamics 365 and Office 365). Access to your Azure virtual networks remains on private peering.


Figure 2: ExpressRoute with Microsoft peering and private peering

ExpressRoute uses BGP to advertise Microsoft prefixes to your internal network. Route filters let you select the specific Office 365 or Dynamics 365 services (prefixes) accessed via ExpressRoute. You can also select Azure services by region (e.g., Azure US West, Azure Europe North, Azure East Asia). Previously, this capability was available only on ExpressRoute Premium. We will be enabling Microsoft peering configuration for standard ExpressRoute circuits in mid-February.
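To make the selection mechanism concrete, here is a minimal, purely conceptual Python sketch of what a route filter does: Microsoft advertises each prefix tagged with a BGP community value, and only prefixes whose communities you have allowed are propagated to your network. The prefixes and community values below are hypothetical placeholders, and this is not a call to any Azure API.

```python
# Conceptual illustration only (not an Azure API call): Microsoft advertises each
# prefix tagged with a BGP community value, and a route filter propagates only the
# prefixes whose communities you have allowed. All values below are hypothetical.

ADVERTISED_PREFIXES = [
    {"prefix": "203.0.113.0/24",  "community": "12076:5010"},   # e.g. an Office 365 service
    {"prefix": "198.51.100.0/24", "community": "12076:5040"},   # e.g. a Dynamics 365 service
    {"prefix": "192.0.2.0/24",    "community": "12076:51004"},  # e.g. an Azure region
]

# Communities selected in the route filter rules you configure.
ALLOWED_COMMUNITIES = {"12076:5010", "12076:51004"}

def apply_route_filter(prefixes, allowed):
    """Keep only the prefixes whose BGP community is on the allow list."""
    return [p for p in prefixes if p["community"] in allowed]

for route in apply_route_filter(ADVERTISED_PREFIXES, ALLOWED_COMMUNITIES):
    print(route["prefix"], route["community"])
```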


New ExpressRoute locations

ExpressRoute is always configured as a redundant pair of virtual connections across two physical routers. This highly available connection enables us to offer an enterprise-grade SLA. We recommend that customers connect to Microsoft in multiple ExpressRoute locations to meet their Business Continuity and Disaster Recovery (BCDR) requirements. Previously this required customers to have ExpressRoute circuits in two different cities. In select locations we will provide a second ExpressRoute site in a city that already has an ExpressRoute site. A second peering location is now available in Singapore. We will add more ExpressRoute locations within existing cities based on customer demand. We’ll announce more sites in the coming months.

Compliance Manager tool aims to ease security audit process

Relying on Microsoft to run the underlying environment also means customers are at Microsoft’s mercy for its answers on regulatory compliance audits. To address this situation and others, Microsoft developed a Compliance Manager tool that provides a real-time risk analysis of the different cloud workloads.

Over the last year, there has been an uptick in security measures in the enterprise. Two compliance regulations that come up frequently are the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR).

For HIPAA, introduced in 1996, the rise in hospital audits by the Office for Civil Rights and data breaches in recent years has many enterprises re-evaluating their security practices around patient data. GDPR is the compliance requirement that takes effect May 25, 2018, for organizations that handle the data of European Union citizens.

Most organizations that deal with HIPAA, GDPR or any other regulatory compliance know the difficulties associated with tracking results from audits, questionnaires, surveys and other standard operating procedures. The amount of information required to satisfy requests for compliance checklists and security assessments can overwhelm many Exchange administrators.

Regardless of the industry, the IT staff must address regulatory compliance audits; otherwise, the company can face financial and legal penalties. Microsoft released its Compliance Manager tool in November to assist IT in these efforts.

Compliance Manager tool offers compliance overview

Compliance Manager is a SaaS application located in the Service Trust Portal that features a dashboard summary of an organization’s data protection, compliance status and documentation details related to GDPR, HIPAA and other requirements.

The Compliance Manager tool provides an automated assessment of Microsoft workloads such as Office 365, Dynamics 365 and some Azure services. The utility suggests ways to boost compliance and data protection in the environment.

Compliance audits often require gathering the same information. Exchange administrators can save some time by using the Compliance Manager tool, which acts as a central repository of audit details and documentation. Admins can maintain this documentation over time and ensure they meet the compliance processes mandated by their teams.

The Compliance Manager tool is still in preview mode; Microsoft said it plans to have all the compliance templates set prior to May 2018, but anyone with an Office 365 subscription can sign up to test it.

For on-premises workloads, the Compliance Manager tool provides the requirements that need to be validated and evaluated by the administrators. Microsoft has not indicated if it will extend the automated assessment feature to any on-premises tools.

Compliance Manager assists administrators with compliance requirements across the different Microsoft workloads with full document management features and task management.

Compliance Manager assessments
The dashboard in the Compliance Manager tool gives a summary of the controls fulfilled by the customer and by Microsoft to meet a standard or regulation.

Compliance Manager breaks down compliance for a standard or regulation into assessments. Each assessment consists of controls mapped to a standard that are shared between Microsoft and the tenant. The dashboard shows which controls a customer and Microsoft have met to comply with a regulation or standard.

Administrators can use the Compliance Manager portal to assign controls to team members based on specific compliance requirements. Microsoft calls this task management feature action items; it allocates different controls to individuals within the organization. This helps organize the tasks each IT worker must complete, such as the data and email retention work associated with GDPR that falls to Exchange administrators. The platform lets administrators set the priority of each action item and the individual responsible for it.
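As a rough mental model of the concepts described above, an assessment can be thought of as a set of controls, each of which can be assigned to an owner as an action item with a priority. The sketch below is illustrative only; the class and field names are hypothetical, not a Microsoft API.

```python
# Illustrative sketch only: a toy model of assessments, controls and action items.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Control:
    control_id: str                    # e.g. a GDPR article or ISO 27001 control number
    description: str
    owner: str                         # "Microsoft" or "Customer"
    assigned_to: Optional[str] = None  # the person responsible for this action item
    priority: str = "Medium"
    implemented: bool = False

@dataclass
class Assessment:
    standard: str                      # e.g. "GDPR" or "ISO 27001"
    workload: str                      # e.g. "Office 365"
    controls: List[Control] = field(default_factory=list)

    def completion(self) -> float:
        """Fraction of controls implemented, as summarized on the dashboard."""
        if not self.controls:
            return 0.0
        return sum(c.implemented for c in self.controls) / len(self.controls)
```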

There are a few other features in the Compliance Manager tool worth noting:

  • A flexible platform that supports multiple regulations. In the initial preview release of the Compliance Manager tool, the application only supports GDPR, ISO 27001 and ISO 27018. Microsoft said it will add support for HIPAA and other regulatory standards, such as the National Institute of Standards and Technology Special Publication 800-53. Having one tool that covers the range of regulatory compliance requirements makes it a very attractive option for IT and Exchange administrators.
  • Coverage on multiple platforms. After Microsoft introduced Office 365, a number of Exchange Online administrators began to manage more than just Exchange workloads. It’s the responsibility of the IT department to ensure the interdependent workloads associated with Exchange Online meet compliance requirements. Microsoft includes assessments of Dynamics 365, Azure and the full Office 365 suite in the Compliance Manager tool to give IT full visibility into all the workloads under one compliance platform.

Compliance Manager tool shows promise

Microsoft has certainly delivered a good snapshot of what most compliance officers and administrators would like in its preview version of Compliance Manager. However, the tool only addresses three existing compliance requirements, when many in IT will want to see coverage extend to include the Payment Card Industry Data Security Standard, the Sarbanes-Oxley Act, HIPAA, Food and Drug Administration 21 Code of Federal Regulations part 11 and others. 

While there are a number of mature compliance and auditing tools in the market that offer more certifications and regulatory coverage, Compliance Manager eliminates the daunting task of producing detailed assessments for each compliance requirement. Some of this manual work includes interviewing Microsoft technical resources, gathering legal and written statements about certain security configurations and, in some cases, hiring third-party auditors to validate the findings.

Microsoft will need to cover the rest of the compliance spectrum to encourage administrators to embrace this platform. But the platform is easy to use and addresses many of the concerns organizations have with the upcoming GDPR.

Kick off 2018 with online safety tips from Microsoft’s Council for Digital Good

To get everyone off on the right digital foot in 2018, Microsoft’s teen Council for Digital Good offers 15 pieces of guidance, designed to make online interactions safer and healthier by emphasizing resilience, mindfulness and digital civility.

We assembled this impressive group of U.S.-based youth as part of a pilot program, launched in 2017. The 15-member council serves as a sounding board for Microsoft’s youth-focused, online safety policy work. The council met for a two-day summit in August, and will convene again later this year. Last summer, each teen drafted a written manifesto for life online, and then returned home to take on three more assignments: (1) an artistic representation of their individual manifestos, (2) a consolidated written manifesto from the entire cohort, and (3) a visual representation of the cohort manifesto.

We thought the start of the new year was a fitting time to share the council’s inspiring words.

The 15-point cohort manifesto

Over conference calls, email and social media, each teen selected the piece of guidance from his or her individual manifesto deemed most instructive and compelling. “Our (process) worked almost exactly like a democracy,” says Bronte, a 17-year-old council member from Ohio, who submitted the cohort manifesto on behalf of the council. “We would pitch ideas about what manifesto ideas we had, go into detail, and then vote on which we liked best. In the end, we all had a good time working together. We learned a lot about ourselves, and got to know more about a fellow teen from a couple of states away.”

Here are the individual pieces of guidance from each council member, which we’ve grouped into three broad categories: “skills, advice and perspective.” You can find the full cohort manifesto and view some additional context from each council member at this link.

Skills – Some important competencies for life online 

  • Build and repair resilience to ensure young people can bounce back from unpleasant online experiences.
  • Pause and think before sending or posting.
  • Your online profile represents YOU, so be sure to present yourself in the best possible and truthful light.
  • Be a skeptic and question the actions and motivations of others.
  • Instill morals and teach children ethics and etiquette online; it will serve them well online and in adulthood.

 Advice – Additional points that highlight the need for “skills” and “perspective”  

  • Wait until you’re 13 and meet the age requirements to use social media.
  • Take off the mask and be online who you are in real life.
  • Disagree respectfully rather than by lashing out.
  • Your voice and report [to trusted authorities] matter, so take advantage of tech companies’ resources for reporting illegal or harmful content and conduct.
  • Use the power of the internet to empower others.

 Perspective – Thoughts for maintaining a healthy online outlook   

  • Your screen is 2D while real life is 3D – What’s online can fall flat, so always consider context and other factors.
  • Don’t be a prisoner to your cellphone by letting online interactions control your life.
  • Posts affect you and your family, so remember, once you hit send, there’s no taking them back.
  • No, you’re not getting a free iPhone – If a deal sounds too good to be true, it probably is.
  • Focus on the life you live and don’t let technology dominate daily activities.

The teens have offered some great counsel to help everyone lead safer, more productive and more fun-filled lives online. Leading up to international Safer Internet Day on Feb. 6, we’ll be showcasing the teens’ individual “Twitter moments” – the creative manifesto from the full council – where they each discuss their recommended point of guidance. Look for these on social media over the coming weeks.

A vision and mission for the council

In addition to the cohort manifesto, the teens went beyond their specific assignment and drafted an additional vision document, detailing how they see their individual roles on the council, their mission as a team and the impact they want to have.

“We need a ‘youth-inspired revolution’ as there is no silver bullet solution,” the cohort writes. “This is a global issue that requires a global response by everyone: parents, educators, technology companies and government. As the inaugural cohort, the council has devoted our time to advocating for issues around healthy online behaviors.”

The paper goes on to say how the teens will use social media and reach out to youth in their communities to raise awareness of online safety issues and grow support for meaningful programs and initiatives. “This will, in turn, force the hand of school administrations, community leaders, managers or anyone in an authoritative position, to ensure preventative measures are taken. We aim to see a generation of more empathetic young people … After our [term] is up, future council members will build on what we have achieved to create a lasting global movement. From the entirety of the council, to the entire world, everyone deserves and has a right to a safe experience online.” (Read the full council vision document here.)

Looking ahead in 2018

At Microsoft, we characterize these efforts as fostering “digital civility” by promoting safer and healthier online interactions for everyone. We’re driven to grow a kinder, more empathetic and respectful online world, and in this quest, we know we’ve chosen some outstanding partners in this remarkable group of teens.

We look forward to our next in-person event with the council this summer. We’re planning a more public event to discuss some important online safety issues, and we’re asking the teens to share their views with lawmakers, policymakers and other influential figures as we convene in our nation’s capital.

In the meantime, we’re readying our next round of digital civility research, which will be released on Safer Internet Day 2018 on Feb. 6. Much like the findings we announced in 2017, we’ve polled both teens and adults about their exposure to a range of online risks and forms of abuse. This year, we’re releasing research from 23 countries, up from 14 last year. We’ve also added a few more risks to the study, and some of the results are rather surprising.

Until then, you can follow the Council for Digital Good on our Facebook page and via Twitter using #CouncilforDigitalGood. To learn more about online safety generally, visit our website and resources page; “like” us on Facebook and follow us on Twitter.


Azure Backup service adds layer of data protection

With data threats on the rise, it’s more important than ever to have a solid backup strategy for company data and workloads. Microsoft’s Azure Backup service has matured into a product worth considering due to its centralized management and ease of use.

Whether it’s ransomware or other kinds of malware, the potential for data corruption is always lurking. That means that IT admins need a way to streamline backup procedures with the added protection and high availability made possible by the cloud.

Azure Backup protects on-premises workloads such as SharePoint, SQL Server, Exchange, file servers, client machines and VMs, as well as cloud resources such as infrastructure-as-a-service VMs, consolidating them into one recovery vault with solid data protection and restore capabilities. Administrators can monitor and start backup and recovery activities from a single Azure-based portal. After the initial setup, this arrangement lightens the burden on IT because offsite backups require minimal time and effort to maintain.

How Azure Backup works

The Azure Backup service stores data in what Microsoft calls a recovery vault, which is the central storage locker for the service whether the backup targets are in Azure or on premises.


The administrator needs to create the recovery vault before the Azure Backup service can be used. From the Azure console, select All services, type in Recovery Services and select Recovery Services vaults from the menu. Click Add, give it a name, associate it with an Azure subscription, choose a resource group and location, and click Create.

From there, to back up on-premises Windows Server machines, open the vault and click the Backup button. Azure will prompt for certain information: whether the workload is on premises or in the cloud and what to back up — files and folders, VMs, SQL Server, Exchange, SharePoint instances, system state information, and data to kick off a bare-metal recovery. When this is complete, click the Prepare Infrastructure link.
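For administrators who prefer scripting over the portal, the same vault-creation step can be automated. The sketch below shells out to the Azure CLI’s `az backup vault create` command from Python; it assumes the CLI is installed and logged in, that the resource group already exists, and the names and location shown are hypothetical examples.

```python
# A minimal scripting sketch of the vault-creation step, shelling out to the Azure CLI.
# Assumes the "az" CLI is installed, you are logged in ("az login"), and the resource
# group already exists. The names and location below are hypothetical examples.
import subprocess

def create_recovery_vault(resource_group: str, vault_name: str, location: str) -> None:
    """Create a Recovery Services vault, mirroring the portal steps described above."""
    subprocess.run(
        ["az", "backup", "vault", "create",
         "--resource-group", resource_group,
         "--name", vault_name,
         "--location", location],
        check=True,
    )

create_recovery_vault("rg-backup-demo", "contoso-recovery-vault", "eastus")
```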


Configure backup for a Windows machine

The Microsoft Azure Recovery Services Agent (MARS) handles on-premises backups. Administrators download the MARS agent from the Prepare Infrastructure link, which also supplies the recovery vault credentials, and install it on the machines to protect. During setup, the agent uses those recovery vault credentials to link the on-premises machine to the Azure subscription and its recovery vault.

Azure Backup pricing

Microsoft determines Azure Backup pricing based on two components: the number of protected VMs or other instances — Microsoft charges for each discrete item to back up — and the amount of backup data stored within the service. The monthly pricing is:

  • for instances up to 50 GB, each instance is $5 per month, plus storage consumed;
  • for instances more than 50 GB, but under 500 GB, each instance is $10, plus storage consumed; and
  • for instances more than 500 GB, each instance is $10 per nearest 500 GB increment, plus storage consumed.

Microsoft bases its storage prices on block blob storage rates, which vary based on the Azure region. While it’s less expensive to use locally redundant blobs than geo-redundant blobs, local blobs are less fault-tolerant. Restore operations are free; Azure does not charge for outbound traffic from Azure to the local network.
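A small worked example helps make the tiers concrete. The sketch below computes only the per-instance software fee from the tiers listed above; storage is billed separately, and the “nearest 500 GB increment” wording is interpreted here as rounding up. Treat it as a rough estimate rather than a billing calculator.

```python
import math

def monthly_instance_fee(instance_gb: float) -> float:
    """Per-instance monthly Azure Backup fee, excluding the storage consumed."""
    if instance_gb <= 50:
        return 5.0
    if instance_gb <= 500:
        return 10.0
    # Over 500 GB: $10 per 500 GB increment (interpreted here as rounding up).
    return 10.0 * math.ceil(instance_gb / 500)

# A 40 GB client, a 300 GB file server and a 1,200 GB (1.2 TB) server:
print(monthly_instance_fee(40))     # 5.0
print(monthly_instance_fee(300))    # 10.0
print(monthly_instance_fee(1200))   # 30.0 (three 500 GB increments), plus storage in each case
```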

Pros and cons of the Azure Backup service

The service has several features that are beneficial to the enterprise:

  • There is support to back up on-premises VMware VMs. Even though Azure is a Microsoft cloud service, the Azure Backup product will take VMware VMs as they are and back them up. It’s possible to install the agent inside the VM on the Windows Server workload, but it’s neater and cleaner to just back up the VM.
  • Administrators manage all backups from one console regardless of the target location. Microsoft continually refines the management features in the portal, which is very simple to use.
  • Azure manages storage needs and automatically adjusts as required. This avoids the challenges and capacity limits associated with on-premises backup tapes and hard drives.

The Azure Backup service isn’t perfect, however.

  • It requires some effort to understand pricing. Organizations must factor in what it protects and how much storage those instances will consume.
  • The Azure Backup service supports Linux, but it requires the use of a customized copy of System Center Data Protection Manager (DPM), which is more laborious compared to the simplicity and ease of MARS.
  • Backing up Exchange, SharePoint and SQL workloads requires the DPM version that supports those products. Microsoft includes it with the service costs, so there’s no separate licensing fee, but it still requires more work to deploy and understand.

The Azure Backup service is one of the more compelling administrative offerings from Microsoft. I would not recommend it as a company’s sole backup product — local backups are still very important, and even more so if time to restore is a crucial metric for the enterprise — but Azure Backup is a worthy addition to a layered backup strategy.

Reddit Partners With Microsoft to Deliver Intelligent Search Results and Business Intelligence Tools

Today, we’re thrilled to announce a partnership two years in the making, harnessing the power of Microsoft’s artificial intelligence technology to deliver the best possible search experiences for Bing users and analytical tools for brands and marketers.

As our co-founder Alexis Ohanian shared earlier today in his presentation at Microsoft’s AI conference, we’re kicking off the partnership by unveiling two new products that apply Microsoft’s AI tools to Reddit’s vast data:

  • Bing Intelligent Search: first-ever search experiences returning Reddit results for AMAs, Questions, and Communities; and
  • The Power BI suite of solution templates for brand management and targeting on Reddit: a comprehensive set of business intelligence solutions enabling brands, marketers, and budget owners to quickly analyze their Reddit footprint and determine how, where, and with whom to engage in the Reddit community.

The Content You Love, Now Easier to Find

Reddit is home to the most engaging and authentic conversations online with more than 2.5 million comments per day across 138,000 active communities. Now, Bing is making it easier than ever to surface that content through smarter search results.

As part of the Intelligent Search features launching today, Bing users will notice three key search experiences that return the most relevant content possible from Reddit’s vast community:

Desktop view of Bing’s cross-platform Reddit community search experience

If you type in “reddit” and the name of a Reddit community, like “aww” or “data is beautiful,” the relevant subreddits will show up in a dedicated results section. Better still, Bing will surface a sneak peek of the community with the top conversations of the day.

Mobile view of Bing’s cross-platform Reddit AMA search experience

If you type in the name of someone who’s done an AMA—let’s say, Dr. Jane Goodall—their past or upcoming AMAs will surface along with additional information about Jane. Bing will give snapshots of completed AMAs and, for users just searching for “Reddit AMAs” in general, a carousel of popular AMAs.

Mobile view of Bing’s cross-platform Reddit Q&A search experience

Last but certainly not least, Bing users whose queries are best answered with relevant Reddit conversations will see those conversations at the top of the page, easily getting perspectives from millions of Reddit users.

Best of all, Bing has launched its suite of Reddit-tailored results across platforms, so whether you’re on your phone wondering which free online certificates will look good on your resume or on your desktop looking for a community devoted to baby elephant gifs, you’ll have the most relevant Reddit content right at your fingertips.

As this initial set of experiences rolls out, we will work with Bing to collect feedback and continue refining and adding features as we understand how people are using this experience.

Better Brand Insights from Reddit

But wait: there’s more!

In addition to launching an all-new consumer search experience for Reddit content on Bing, our partnership with Microsoft also leverages our relationship with Socialgist to bring Reddit’s data and API to Power BI, Microsoft’s business analytics suite, which uses AI and analytics to help brands and marketers analyze data and share insights.

As we developed our partnership with Microsoft, we found that Power BI offered incredibly robust and easy-to-consume business intelligence tools to turn Reddit data into clear, useful insights. Power BI’s data analysis offers flexibility to customize and even build your own modules. It also enables dynamic querying, which allows users to easily re-run and change queries in real time and customize data visualizations through multiple variables.

With the general availability of the Power BI solution templates suite for brand management and targeting on Reddit, brands and marketers can now monitor, target, and compare their Reddit footprint across communities in a flexible, convenient format. Brands can use this solution to see which communities are talking about them the most, uncover the sentiment of those conversations, and receive detailed breakdowns of their mentions and sentiment over time. Learn more about the Power BI Solution Templates here.

We hope today’s announcement is just the first step in exploring the opportunities to bring relevant content, communities, and insights from our users to brands and consumers alike. In the meantime, check out Bing’s improved search experience for yourself and learn more about Microsoft Power BI here.

‘Aha, now I get it!’ Microsoft is building technology to put numbers in perspective

When people in the United States ask Microsoft’s search engine Bing how big Syria is, they learn the country is 71,498 square miles and about equal to the size of Florida. When they ask Bing how many calories are in a serving of ice cream, they learn that a scoop contains 137 calories, which is equal to about 11 minutes of running.

These two-part answers supplied by Bing are early, real-world examples of a technology being developed inside Microsoft’s research labs to help us make sense of the jumble of numbers we increasingly encounter in the digital world.

“We want to reduce the number of times that people read a number and can’t make sense of it. And we want to do that by providing some context, or an analogy, or perspective, that puts it in more familiar terms usually related to their everyday experience,” said Jake Hofman, a senior researcher in Microsoft’s New York research lab.

The need for a new way to understand numbers stems from the overwhelming abundance of data now available to help us make decisions about everything from federal budgets to personal health and environmental conservation, noted Dan Goldstein, a principal researcher in Microsoft’s New York research lab.

“The solution is a relatively low-tech one. Using perspective sentences is very simple and they help a lot,” he said. “What we’re finding is creating them is a difficult challenge because it requires not only understanding the proper numbers to compare the numbers to, but also understanding what people are familiar with, what kinds of comparisons people like, what kinds of things people can easily imagine.”

On the road to AI

The examples on Bing today are only available for a few specific subjects and required human input to develop. Ultimately, the Microsoft researchers aim to build a service that automatically generates perspectives for any number and communicates them with the ease of a skilled storyteller or teacher. This service would be able to pass a test for general artificial intelligence posed in 1950 by the British computer theorist Alan Turing.

“You would be very sure you were talking to a machine if it says 248,572 square miles as opposed to roughly the size of Texas when you asked it how big France was,” said Goldstein. “To pass the Turing test, you have to talk like a human; someone who can explain something in a way that is personalized to the audience.”

The road to this generalized, automated technology that takes raw numbers from sources such as email, social media feeds and search results and puts them in a personalized context is filled with hurdles. To clear them requires a deep understanding of the nuance and complexity of what makes humans human.

Microsoft’s New York research lab, where Hofman and Goldstein are based, is well suited to clear this hurdle, noted David Pennock, a principal researcher and the lab’s assistant managing director. The lab brings together social scientists and computer scientists to study not just computers, but people and how people behave with computers.

“There’s an extra piece that is important for AI, which is taking the result of the complex algorithm that does all its magic and then actually putting it in a presentable form for people,” said Pennock. “If you want to run a data-driven company, yes you want all the great data; yes, you want to run all the right experiments; and yes, you want to make decisions based on your data. But ultimately, you need it in a form that is presentable to a person who in the end makes the decision.”

Numbers in the news

Hofman and Goldstein started down this road on October 30, 2012. The researchers remember the day because it fell the day after Superstorm Sandy slammed the East Coast. They fought snarled traffic to reach an off-site meeting where they had a brainstorming session on new research directions.

“We proposed the idea of trying to make numbers in the news make sense to the average person,” said Hofman. “Everyone nodded and said, ‘Yeah, that sounds like a good idea.’ We had no idea how good of an idea it was, or wasn’t, or how hard of a problem it was to solve.”

To begin, the researchers recruited people to participate in an online experiment designed to quantify the value of perspectives for the comprehension of unfamiliar numbers. Some participants generated perspective sentences for numbers taken from news articles and others took a series of randomized tests to determine if the perspectives improved recall, estimation and error detection.

For example, a news article noted that “Americans own almost 300 million firearms.” That fact alone might be difficult to estimate or believe if never seen before, and recall even if seen in the past. The researchers found comprehension of U.S. gun ownership improved with the perspective that “300 million firearms is about one firearm for every person in the United States.”

The finding that perspective sentences help people understand numbers in the news prompted the researchers to begin teasing apart why perspectives work. Does merely the repetition of numbers increase memory? Do perspectives add fodder for our brains to noodle over and associate with, leading to more stuff to pull on when it comes time for recall? Do perspectives stake mental flags?

What’s more, are some perspectives better than others? Take the area of Pakistan, for example, which is 307,373 square miles. What comparative rank or measure best helps people understand how big – how much land – 307,373 square miles is? Perhaps, how long it would take to drive across? Or how big it is compared to U.S. states? If comparing to states, which state? Is twice the size of California more helpful than five times larger than Georgia?

“How do you figure out which of those is better? How do you do that in a principled way?” said Chris Riederer, who interned with Hofman and Goldstein while pursuing his Ph.D. at Columbia University and co-authored a paper that describes this phase of the research. “Essentially, what we did is we ran a big survey.”

Study participants compared country sizes and populations to the sizes and populations of various U.S. states. The results show that familiar states combined with simple multipliers, even if less precise, work best. For example, people in the U.S. grasp the area of Pakistan more easily when it is expressed as roughly twice the size of California than as the technically more accurate five times the size of Georgia.
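As an illustration of that trade-off, here is a toy Python sketch that expresses an unfamiliar area as a simple multiple of a familiar U.S. state. The state areas are approximate, and the scoring rule, which balances how round the multiplier is against how familiar the state is, is an assumption made for illustration, not the researchers’ actual model.

```python
# Approximate areas in square miles, with a hypothetical 0-1 "familiarity" weight.
REFERENCES = {
    "California": (163_696, 1.0),
    "Texas":      (268_596, 1.0),
    "Florida":    (65_758, 0.8),
    "Georgia":    (59_425, 0.5),
}

def perspective(value_sq_mi: float) -> str:
    """Pick the reference state giving the roundest multiplier, favoring familiar states."""
    best = None
    for name, (area, familiarity) in REFERENCES.items():
        ratio = value_sq_mi / area
        multiple = round(ratio)
        if multiple < 1:
            continue
        # Penalize both imprecise rounding and unfamiliar references.
        score = abs(ratio - multiple) / ratio + 0.2 * (1.0 - familiarity)
        if best is None or score < best[0]:
            best = (score, multiple, name)
    _, multiple, name = best
    phrase = "roughly the size of" if multiple == 1 else f"roughly {multiple} times the size of"
    return f"{value_sq_mi:,.0f} square miles is {phrase} {name}"

# Pakistan: about 1.9x California, so the simpler "2 times California" wins over "5 times Georgia".
print(perspective(307_373))
```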

These findings were used to generate the country-area perspectives live on Bing today. Ask the search engine, “How big is Pakistan?” and you’ll learn the square-mile fact along with the pre-computed comparison to California.

Bing and beyond

Bing’s question and answer team is working on additional perspectives to increase comprehension of everything from gas mileage to planet sizes. Bing’s food and drink team deployed perspectives that express calories in terms of minutes of running, protein and sodium in percent of the daily recommendation, grams of sugar in teaspoons of sugar and milligrams of caffeine in cups of coffee.
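These food-and-drink perspectives amount to simple unit conversions. The sketch below uses rough, commonly cited conversion factors as assumptions (they are not Bing’s internal values): running burns on the order of 12.5 calories per minute, a teaspoon of sugar is about 4.2 grams, and an 8-ounce cup of coffee has roughly 95 milligrams of caffeine.

```python
# Back-of-the-envelope versions of the food-and-drink perspectives described above.
# Conversion factors are rough approximations, not Bing's values.
CALORIES_PER_MINUTE_RUNNING = 12.5
GRAMS_SUGAR_PER_TEASPOON = 4.2
MG_CAFFEINE_PER_CUP = 95.0

def calories_as_running_minutes(calories: float) -> float:
    return calories / CALORIES_PER_MINUTE_RUNNING

def sugar_as_teaspoons(grams: float) -> float:
    return grams / GRAMS_SUGAR_PER_TEASPOON

def caffeine_as_cups_of_coffee(milligrams: float) -> float:
    return milligrams / MG_CAFFEINE_PER_CUP

print(round(calories_as_running_minutes(137)))    # ~11 minutes, matching the ice cream example
print(round(sugar_as_teaspoons(39), 1))           # 39 g of sugar is roughly 9.3 teaspoons
print(round(caffeine_as_cups_of_coffee(200), 1))  # 200 mg of caffeine is roughly 2.1 cups of coffee
```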

The decision on how to express each perspective – calories in minutes running versus walking, for example – involves brainstorming over email between the Bing and research teams as well as analysis of data from search logs and surveys, explained Christina Ntouniaoglou, a program manager for Bing’s food and drink team.

“I was thinking it is walking. Why would it be running? There are people who cannot really run. But the survey proved that people actually like the running part, so we went with that,” she said.

The next challenge, said Hofman, is to build a system that automatically creates perspectives so that people can more easily use all the data we have access to today to make informed decisions.

“Computers have lots of facts in lots of databases, but they don’t really know how to rank those facts as more or less useful, or comprehensible, to humans,” he said. “That is the last remaining hurdle – big hurdle – that we need to clear in this project.”

Hofman and Goldstein are applying the latest advances in machine learning, a branch of artificial intelligence, and data analysis to clear this hurdle. Their eyes are fixed on the goal of a generalized service that operates as a plug-in to browsers, email programs and text editors that automatically generates relevant, personalized perspectives for any numbers the users encounter or write.

“If we were infinitely wise and infinitely good at calculating, it wouldn’t really matter how numbers are expressed, it would all be the same to us. But the fact is, some things really cause people to go ‘Aha, now I get it,’” said Goldstein. “This is new territory; looking at how to communicate numbers in a way that gives people insight and memory and comprehension.”

The half decade Hofman has spent on the research project, he said, has already planted perspectives in his brain.

“I am always in the background thinking, ‘Am I presenting this in the most comprehensible way?’”


John Roach writes about Microsoft research and innovation. Follow him on Twitter.

Microsoft announces private preview, partnerships for AI-powered health bot project

Today, we’re pleased to announce the private preview of a new AI-powered project from Microsoft’s Healthcare NExT initiative, which is designed to enable our healthcare partners to easily create intelligent and compliant healthcare virtual assistants and chatbots. These bots are powered by cognitive services and enriched with authoritative medical content, allowing our partners to empower their customers with self-service access to health information, with the goal of improving outcomes and reducing costs. So, if you’re using a health bot built by one of our partners as part of our project, you can interact in a personal way, typing or talking in natural language and receiving information to help answer your health-related questions.

Our partners are working with us to build out bots that address a wide range of healthcare-specific questions and use cases. They include Aurora Health Care, with 15 hospitals, more than 150 clinics and 70 pharmacies throughout eastern Wisconsin and northern Illinois; Premera Blue Cross, the largest health plan in the Pacific Northwest; and UPMC, one of the largest integrated health care delivery networks in the United States. For instance, insurers can build bots that give their customers an easy way to look up the status of a claim and ask questions about benefits and services. Providers, meanwhile, can build bots that triage patient issues with a symptom checker, help patients find appropriate care, and look up nearby doctors.

At Aurora Health Care, patients can use the “Aurora Digital Concierge” to determine what type of care they might need and when they might need it. Patients interact with the bot in natural language – answering a set of questions about their symptoms – and then the bot suggests what could be possible causes and what type of doctor they might want to see and when. They can also schedule an appointment with just a few clicks. This is an example of how AI can have direct impact on people’s everyday lives, helping patients find the most relevant care and helping doctors focus on the highest-priority cases.

“Aurora Health Care is focused on delivering a seamless experience for our consumers and the health bot allows us to introduce technology to make that happen. The use of AI allows us to leverage technology to meet consumers where they are; online, mobile, chat, text, and to help them navigate the complexity of healthcare,” said Jamey Shiels, Vice President Digital Experience, Aurora Health Care.

At Microsoft, we believe there is an enormous opportunity to use intelligent bots to make healthcare more efficient and accessible to every individual and organization. Our goal is to amplify human ingenuity with intelligent technology, and we’re doing that in healthcare by infusing AI into solutions that can help patients, providers, and payers. 

We are incubating the health bot project as part of Healthcare NExT, a new initiative at Microsoft to dramatically transform healthcare by deeply integrating greenfield research and health technology product development, in partnership with the healthcare industry’s leading players. Through these collaborations, our goal is to enable a new wave of innovation and impact in healthcare using Microsoft’s deep AI expertise and global-scale cloud.

Today, for instance, it can be particularly difficult for our healthcare partners to build bots that address the stringent compliance and regulatory requirements of the healthcare industry, and to integrate complex medical vocabularies. Our health bot project is designed to make this simple by providing an easy-to-use visual editor that partners can use to build and extend their bots, an array of healthcare-specific configuration options, out-of-the-box symptom checker content, as well as easy integration with partner systems and with our set of cognitive services.

We are introducing a private preview program that will allow new partners to participate in the project; partners will be able to sign up on our website. The program includes built-in Health Insurance Portability and Accountability Act (HIPAA) compliance – a prerequisite for usage by any covered entity. It also includes access to the visual editor tools that partners can easily use to customize and extend bot scenarios, documentation and code samples published on Microsoft Docs, and pre-built integration with the Health Navigator symptom checker.

We’re extremely excited for the potential of our project to help people get better care and navigate the healthcare process more efficiently. Following the private preview, we will have more information to share for general availability.

For more information on the health bot project, please visit: https://www.microsoft.com/en-us/research/project/health-bot/

Debugging data: Microsoft researchers look at ways to train AI systems to reflect the real world – The AI Blog

Hanna Wallach is a senior researcher in Microsoft’s New York City research lab. Photo by John Brecher.

Artificial intelligence is already helping people do things like type faster texts and take better pictures, and it’s increasingly being used to make even bigger decisions, such as who gets a new job and who goes to jail. That’s prompting researchers across Microsoft and throughout the machine learning community to ensure that the data used to develop AI systems reflect the real world, are safeguarded against unintended bias and handled in ways that are transparent and respectful of privacy and security.

Data is the food that fuels machine learning. It’s the representation of the world that is used to train machine learning models, explained Hanna Wallach, a senior researcher in Microsoft’s New York research lab. Wallach is a program co-chair of the Annual Conference on Neural Information Processing Systems from Dec. 4 to Dec. 9 in Long Beach, California. The conference, better known as “NIPS,” is expected to draw thousands of computer scientists from industry and academia to discuss machine learning – the branch of AI that focuses on systems that learn from data.

“We often talk about datasets as if they are these well-defined things with clear boundaries, but the reality is that as machine learning becomes more prevalent in society, datasets are increasingly taken from real-world scenarios, such as social processes, that don’t have clear boundaries,” said Wallach, who together with the other program co-chairs introduced a new subject area at NIPS on fairness, accountability and transparency. “When you are constructing or choosing a dataset, you have to ask, ‘Is this dataset representative of the population that I am trying to model?’”

Kate Crawford, a principal researcher at Microsoft’s New York research lab, calls it “the trouble with bias,” and it’s the central focus of an invited talk she will be giving at NIPS.

“The people who are collecting the datasets decide that, ‘Oh this represents what men and women do, or this represents all human actions or human faces.’ These are types of decisions that are made when we create what are called datasets,” she said. “What is interesting about training datasets is that they will always bear the marks of history, that history will be human, and it will always have the same kind of frailties and biases that humans have.”

Researchers are also looking at the separate but related issue of whether there is enough diversity among AI researchers. Research has shown that more diverse teams choose more diverse problems to work on and produce more innovative solutions. Two events co-located with NIPS will address this issue: the 12th Women in Machine Learning Workshop, where Wallach, who co-founded Women in Machine Learning, will give an invited talk on the merger of machine learning with the social sciences, and the Black in AI workshop, which was co-founded by Timnit Gebru, a post-doctoral researcher at Microsoft’s New York lab.

“In some types of scientific disciplines, it doesn’t matter who finds the truth, there is just a particular truth to be found. AI is not exactly like that,” said Gebru. “We define what kinds of problems we want to solve as researchers. If we don’t have diversity in our set of researchers, we are at risk of solving a narrow set of problems that a few homogeneous groups of people think are important, and we are at risk of not addressing the problems that are faced by many people in the world.”

Timnit Gebru is a post-doctoral researcher at Microsoft’s New York City research lab. Photo by Peter DaSilva.

Machine learning core

At its core, NIPS is an academic conference with hundreds of papers that describe the development of machine learning models and the data used to train them.

Microsoft researchers authored or co-authored 43 accepted conference papers. They describe everything from the latest advances in retrieving data stored in synthetic DNA to a method for repeatedly collecting telemetry data from user devices without compromising user privacy.

Nearly every paper presented at NIPS over the past three decades considers data in some way, noted Wallach. “The difference in recent years, though,” she added, “is that machine learning no longer exists in a purely academic context, where people use synthetic or standard datasets. Rather, it’s something that affects all kinds of aspects of our lives.”

The application of machine-learning models to real-world problems and challenges is, in turn, bringing into focus issues of fairness, accountability and transparency.

“People are becoming more aware of the influence that algorithms have on their lives, determining everything from what news they read to what products they buy to whether or not they get a loan. It’s natural that as people become more aware, they grow more concerned about what these algorithms are actually doing and where they get their data,” said Jenn Wortman Vaughan, a senior researcher at Microsoft’s New York lab.

The trouble with bias

Data is not something that exists in the world as an object that everyone can see and recognize, explained Crawford. Rather, data is made. When scientists first began to catalog the history of the natural world, they recognized types of information as data, she noted. Today, scientists also see data as a construct of human history.

Crawford’s invited talk at NIPS will highlight examples of machine learning bias such as news organization ProPublica’s investigation that exposed bias against African-Americans in an algorithm used by courts and law enforcement to predict the tendency of convicted criminals to reoffend, and then discuss how to address such bias.

“We can’t simply boost a signal or tweak a convolutional neural network to resolve this issue,” she said. “We need to have a deeper sense of what is the history of structural inequity and bias in these systems.”

One method to address bias, according to Crawford, is to take what she calls a social system analysis approach to the conception, design, deployment and regulation of AI systems to think through all the possible effects of AI systems. She recently described the approach in a commentary for the journal Nature.

Crawford noted that this isn’t a challenge that computer scientists will solve alone. She is also a co-founder of the AI Now Institute, a first-of-its-kind interdisciplinary research institute based at New York University that was launched in November to bring together social scientists, computer scientists, lawyers, economists and engineers to study the social implications of AI, machine learning and algorithmic decision making.

Jenn Wortman Vaughan is a senior researcher at Microsoft’s New York City research lab. Photo by John Brecher.

Interpretable machine learning

One way to address concerns about AI and machine learning is to prioritize transparency by making AI systems easier for humans to interpret. At NIPS, Vaughan, one of the New York lab’s researchers, will give a talk describing a large-scale experiment that she and colleagues are running to learn what factors make machine learning models interpretable and understandable for non-machine learning experts.

“The idea here is to add more transparency to algorithmic predictions so that decision makers understand why a particular prediction is made,” said Vaughan.

For example, does the number of features or inputs to a model impact a person’s ability to catch instances where the model makes a mistake? Do people trust a model more when they can see how a model makes its prediction as opposed to when the model is a black box?

The research, said Vaughan, is a first step toward the development of “tools aimed at helping decision makers understand the data used to train their models and the inherent uncertainty in their models’ predictions.”

Patrice Simard, a distinguished engineer at Microsoft’s research lab in Redmond, Washington, and a co-organizer of a NIPS symposium on interpretable machine learning, said the field should take a cue from computer programming, where programmers have learned the art of decomposing problems into smaller problems with simple, understandable steps. “But in machine learning, we are completely behind. We don’t have the infrastructure,” he said.

To catch up, Simard advocates a shift to what he calls machine teaching – giving machines features to look for when solving a problem, rather than looking for patterns in mountains of data. Instead of training a machine learning model for car buying with millions of images of cars labeled as good or bad, teach a model about features such as fuel economy and crash-test safety, he explained.

The teaching strategy is deliberate, he added, and results in an interpretable hierarchy of concepts used to train machine learning models.
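To illustrate the contrast Simard draws, here is a toy sketch of the “teaching” approach for the car-buying example, under the assumption that teaching means hand-picking meaningful features such as fuel economy and crash-test rating rather than labeling millions of raw images. The weights and threshold are made up for illustration and are not a real car-buying model.

```python
# Toy illustration of teacher-chosen, interpretable features instead of raw image data.
# Weights and threshold are illustrative assumptions only.

def good_buy_score(mpg: float, crash_test_stars: int, price_usd: float) -> float:
    """Interpretable score built from teacher-chosen features; higher is better."""
    fuel_economy = mpg / 50                                   # normalized fuel economy
    safety = crash_test_stars / 5                             # normalized crash-test rating
    affordability = 1 - min(price_usd, 60_000) / 60_000       # cheaper is better, capped
    return 0.4 * fuel_economy + 0.4 * safety + 0.2 * affordability

def is_good_buy(mpg: float, crash_test_stars: int, price_usd: float) -> bool:
    return good_buy_score(mpg, crash_test_stars, price_usd) > 0.6

print(is_good_buy(mpg=45, crash_test_stars=5, price_usd=28_000))  # True
print(is_good_buy(mpg=18, crash_test_stars=3, price_usd=55_000))  # False
```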

Researcher diversity

One step to safeguard against unintended bias creeping into AI systems is to encourage diversity in the field, noted Gebru, the co-organizer of the Black in AI workshop co-located with NIPS. “You want to make sure that the knowledge that people have of AI training is distributed around the world and across genders and ethnicities,” she said.

The importance of researcher diversity struck Wallach, the NIPS program co-chair, at her fourth NIPS conference in 2005. For the first time, she was sharing a hotel room with three roommates, all of them women. One of them was Vaughan, and the two of them, along with one of their roommates, co-founded the Women in Machine Learning group, which is now in its 12th year and has held a workshop co-located with NIPS since 2008. This year, more than 650 women are expected to attend.

Wallach will give an invited talk at the Women in Machine Learning Workshop about how she applies machine learning in the context of social science to measure unobservable theoretical constructs such as community membership or topics of discussion.

“Whenever you are working with data that is situated within society contexts,” she said, “necessarily it is important to think about questions of ethics, fairness, accountability, transparency and privacy.”


John Roach writes about Microsoft research and innovation. Follow him on Twitter.