Microsoft News

Get the Best Deal of the Season for Xbox Game Pass and Forza Games – Xbox Wire

With Xbox Game Pass, you can experience the ultimate value and freedom to play over 100 great games, including new Xbox One games from Microsoft Studios the day they release.

Microsoft is excited to announce a special offer that unites Forza and Xbox Game Pass fans. Starting today, get the best deal of the season for Xbox Game Pass and Forza games, just in time to hone your skills for the Forza Horizon 4 launch on October 2. For a limited time, get a year of Xbox Game Pass ($120 value), Forza Horizon 3, and Forza Motorsport 7 to keep – all for just $99. This offer is open to new as well as existing Xbox Game Pass members, starting September 13 through September 30, so get it today!

You can take advantage of this offer on Xbox.com or from your Xbox One console. You can begin playing games with Xbox Game Pass immediately and will receive codes to download Forza Horizon 3 and Forza Motorsport 7 via Xbox Message Center, likely within 7-10 days, but no later than October 21, 2018.

If you have a knack for racing or have just always been interested in giving the Forza series a try, this is the offer for you. Not only will you receive access to Forza Horizon 4 the day it launches on October 2, but you can also check out the most recent titles from the Forza Horizon and Motorsport series with this limited-time offer.

Forza Motorsport 7 lets you experience the thrill of motorsport at its limit with the most comprehensive, beautiful, and authentic racing game ever made. Forza Horizon 3 puts you in charge of the Horizon Festival where you can customize everything, hire and fire your friends, and explore Australia in over 350 of the world’s greatest cars. Make your Horizon the ultimate celebration of cars, music, and freedom of the open road. How you get there is up to you!

From recent blockbusters to critically acclaimed indie titles, Xbox Game Pass lets you discover and download games you’ve always wanted to play or revisit favorites that you’ve been missing. With new games added every month, and the option to cancel anytime, Xbox Game Pass is your ticket to endless play.

Stay tuned to Xbox Wire for more news on Xbox Game Pass and all things Forza Motorsport. For the latest in Xbox Game Pass news, follow us on Twitter and Instagram. Until next month, game on!

Network Business Systems and Microsoft announce agreement to deliver broadband internet to rural communities in Illinois, Iowa and South Dakota – Stories

The partnership will benefit hundreds of thousands of unserved and underserved people

REDMOND, Wash. — Sept. 13, 2018 — On Thursday, Network Business Systems Inc., an Illinois-based wireless internet provider, and Microsoft Corp. announced a new agreement to deliver broadband internet access to rural communities in Illinois, Iowa and South Dakota, including approximately 126,700 people who are currently unserved.

This partnership addresses a critical need, as approximately 36 percent of people living in rural Illinois, 22 percent in rural Iowa and 25 percent in rural South Dakota lack access to broadband internet. In today’s digital economy, broadband internet access is a necessity, enabling people and small businesses to take advantage of advancements in technology, including education, healthcare and precision agriculture, and access a range of cloud-based services to run their businesses and improve their lives.

The partnership with Network Business Systems is part of the Microsoft Airband Initiative, which aims to extend broadband access to 2 million unserved people in rural America by July 4, 2022. Network Business Systems will construct and deploy wireless internet access networks using a mix of technologies including TV white spaces — vacant spectrum that can travel over long distances and rough terrain, including the heavy foliage that is common in the Midwestern landscape.

“Everyone deserves to have access to broadband no matter where they live because access to broadband is access to digital opportunity,” said Shelley McKinley, Microsoft’s head of Technology and Corporate Responsibility. “Our partnership with Network Business Systems will help ensure that hundreds of thousands of people in Illinois, Iowa and South Dakota can participate in the 21st century economy.”

“Bringing broadband internet to underserved areas is more important than ever, especially as industries including education, healthcare and business are depending more on internet access,” said Kari Hofmann, general manager of Network Business Systems. “We are very glad that Microsoft is investing the money in championing the further use of TV white spaces.”

Across the U.S., 19.4 million people in rural areas lack access to broadband internet. The Microsoft Airband Initiative is focused on bringing broadband coverage to people living in rural America through commercial partnerships and investment in digital skills training for people in the newly connected communities. Proceeds from Airband connectivity projects will be reinvested into the program to expand broadband to more rural areas.

About Network Business Systems Inc.

Network Business Systems Inc. has been providing rural broadband services for over 18 years. As a technology consulting company, NBS identified a long-term need for rural internet providers that could grow with rural America’s demand while handling the financial challenges that go along with providing rural broadband. NBS continues to provide high-speed internet networks by partnering with local governments and agricultural companies. By partnering with local companies, we are able to keep costs down and provide robust internet connections at the lowest price possible to the consumer, while paying a living wage to our employees. NBS is a rural broadband provider to residential, small business, and enterprise-sized customers that need carrier-grade connections with SLAs that are just as reliable as a fiber connection in urban cities.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, +1 (425) 638-7777, rrt@we-worldwide.com

Network Business Systems Inquiries, Kari Hofmann, general manager of Network Business Systems, +1 (309) 944-8823, ext. 101, kari@nbson.com. For more information on NBS or connectivity, visit www.nbson.com or call 888.944.8823.

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

Data guru living with ALS modernizes industries by typing with his eyes – Stories

The couple remodeled their two-story townhouse near Guatemala City so he had everything he needed on the first floor and didn’t have to navigate stairs. Otto learned to use a trackball mouse with his foot to type with an on-screen keyboard. But it was cumbersome, and he needed Pamela nearby to move the cursor from one corner of his two 32-inch screens to another as he navigated Excel spreadsheets and Power BI dashboards.

A tracheotomy was put in his throat to help him breathe, taking away his limited speech and increasing his isolation. But when Knoke, who spends two hours a day reading blogs and researching, saw his friend Juan Alvarado’s post about the new Eye Control feature in Windows 10, he let loose with his version of a shout and immediately ordered the Tobii Eye Tracker hardware to use with the software.

A man sits in a wheelchair while four women and two men crouch around him, touching his shoulders and arms and smiling.
Otto Knoke with his wife, daughters and sons-in-law. Photo provided by Pamela Knoke.

Alvarado, who met Knoke as a database consultant working on the ATM system Knoke had implemented, hadn’t known about Knoke’s condition until he suddenly saw him in a wheelchair one day. And fittingly, Eye Control itself began with a wheelchair.

Microsoft employees, inspired by former pro football player Steve Gleason, who had lost the use of his limbs to ALS, outfitted a wheelchair with electronic gadgets to help him drive with his eyes during the company’s first Hackathon, in 2014. The project was so popular that a new Microsoft Research team was formed to explore the potential of eye-tracking technology to help people with disabilities, leading to last year’s release of Eye Control for Windows 10.

Knoke said it was “a joy” to learn how to type with his eyes, getting the feel of having sensors track his eye movements as he navigated around the screen and rested his gaze on the elements he wanted to click. Using Eye Control and the on-screen keyboard, he now can type 12 words a minute and creates spreadsheets, Power BI dashboards and even PowerPoint presentations. Combined with his foot-operated mouse, his productivity has doubled. He plans to expand his services to the U.S., where he spent six years studying and working in the 1970s. He no longer relies on his wife’s voice, because Eye Control offers a text-to-speech function as well.

“It was frustrating trying to be understood,” Knoke said in the email interview. “After a few days of using Eye Control I became so independent that I did not need someone to interact with clients when there were questions or I needed to explain something. We have a remote session to the client’s computer, and we open Notepad and interact with each other that way.”

His wife and his nurse had learned to understand the sounds he was able to make, even with the tracheotomy restricting his vocal cords. But now he can communicate with his three grown daughters, his friends and all his customers.

A man lies in a reclining chair, while a younger woman sits in a chair next to him. Both are smiling and looking at a computer screen with a Spanish phrase typed on it.
Using a foot-operated mouse, Eye Control for Windows 10 and the text-to-speech function, Otto Knoke is able to communicate with his family — including his daughter, seen here — as well as with clients.

“Now when our children visit, he can be not just nodding at what they say, but he can be inside the conversation, too,” Pamela Knoke said. “He always has a big smile on his face, because he’s got his independence back.”

He’s also started texting jokes to friends again.

“It’s kind of like it brought my friend back, and it’s amazing,” Alvarado said. “Otto told me that for him, it was like eye tracking meant his arms can move again.”

Being able to text message with Eye Control has helped his business as well.

Grupo Tir, a real-estate development and telecommunications business in Guatemala, hired Knoke for several projects, including streamlining its sales team’s tracking of travel expenses with Power BI.

“Working with Otto has been amazing,” said Grupo Tir Chief Financial Officer Cristina Martinez. “We can’t really meet with him, so we usually work with texts, and it’s like a normal conversation.

“He really has no limitations, and he always is looking for new ways to improve and to help companies.”

Data Center Scale Computing and Artificial Intelligence with Matei Zaharia, Inventor of Apache Spark

Matei Zaharia, Chief Technologist at Databricks & Assistant Professor of Computer Science at Stanford University, in conversation with Joseph Sirosh, Chief Technology Officer of Artificial Intelligence in Microsoft’s Worldwide Commercial Business


At Microsoft, we are privileged to work with individuals whose ideas are blazing a trail, transforming entire businesses through the power of the cloud, big data and artificial intelligence. Our new “Pioneers in AI” series features insights from such pathbreakers. Join us as we dive into these innovators’ ideas and the solutions they are bringing to market. See how your own organization and customers can benefit from their solutions and insights.

Our first guest in the series, Matei Zaharia, started the Apache Spark project during his PhD at the University of California, Berkeley, in 2009. His research was recognized through the 2014 ACM Doctoral Dissertation Award for the best PhD dissertation in Computer Science. He is a co-founder of Databricks, which offers a Unified Analytics Platform powered by Apache Spark. Databricks’ mission is to accelerate innovation by unifying data science, engineering and business. Microsoft has partnered with Databricks to bring you Azure Databricks, a fast, easy, and collaborative Apache Spark-based analytics platform optimized for Azure. Azure Databricks offers one-click setup, streamlined workflows and an interactive workspace that enables collaboration between data scientists, data engineers, and business analysts to generate great value from data faster.

So, let’s jump right in and see what Matei has to say about Spark, machine learning, and interesting AI applications that he’s encountered lately.

Video and podcast versions of this session are available at the links below. The podcast is also available from your Spotify app and via Stitcher. Alternatively, just continue reading the text version of their conversation below.

Joseph Sirosh: Matei, could you tell us a little bit about how you got started with Spark and this new revolution in analytics you are driving?

Matei Zaharia: Back in 2007, I started doing my PhD at UC Berkeley, and I was very interested in data center scale computing. We just saw at the time that there was an open source MapReduce implementation in Apache Hadoop, so I started early on by looking at that. Actually, the first project was profiling Hadoop workloads to identify some bottlenecks, and as part of that, we made some improvements to the Hadoop job scheduler. That actually went into Hadoop, and I started working with some of the early users of it, especially Facebook and Yahoo. And what we saw across all of these is that this type of large data center scale computing was very powerful and there were a lot of interesting applications you could do with it, but the MapReduce programming model alone wasn’t really sufficient – especially for machine learning, which is something everyone wanted to do and where it wasn’t a good fit, but also for interactive queries and streaming and other workloads.

So, after seeing this for a while, the first project we built was the Apache Mesos cluster manager, to let you run other types of computations next to Hadoop. And then we said, you know, we should try to build our own computation engine which ended up becoming Apache Spark.

JS: What was unique about Spark?

MZ: I think there were a few interesting things about it. One of them was that it tried to be a general or unified programming model that can support many types of computations. So, before the Spark project, people wanted to do these different computations on large clusters, and they were designing specialized engines to do particular things – graph processing, SQL, custom code, ETL (which would be MapReduce) – and they were all separate projects and engines. So in Spark we kind of stepped back and looked at these and said, is there any way we can come up with a common abstraction that can handle these workloads? We ended up with something that was a pretty small change to MapReduce – MapReduce plus fast data sharing, which is the in-memory RDDs in Spark – and just hooking these up into a graph of computations turned out to be enough to get really good performance for all the workloads, matching the specialized engines, and also much better performance if your workload combines a bunch of steps. So that is one of the things.

I think the other thing which was important is, having a unified engine, we could also have a very composable API where a lot of the things you want to use would become libraries. So now there are hundreds, maybe thousands, of third-party packages that you can use with Apache Spark – they just plug into it, and you can combine them into a workflow. Again, none of the earlier engines had focused on establishing a platform and an ecosystem, but that’s what’s really valuable to users and developers: being able to pick and choose libraries and combine them.

JS: Machine Learning is not just one single thing, it involves so many steps. Now Spark provides a simple way to compose all of these through libraries in a Spark pipeline and build an entire machine learning workflow and application. Is that why Spark is uniquely good at machine learning?

MZ: I think it’s a couple of reasons. One reason is much of machine learning is preparing and understanding the data, both the input data and also actually the predictions and the behavior of the model, and Spark really excels at that ad hoc data processing using code – you can use SQL, you can use Python, you can use DataFrames, and it just makes those operations easy, and, of course, all the operations you do also scale to large datasets, which is, of course, important because you want to train machine learning on lots of data.

Beyond that, it does support iterative in-memory computation, so many algorithms run pretty well inside it, and because of this support for composition and this API where you can plug in libraries, there are also quite a few libraries you can plug in that call external compute engines that are optimized to do different types of numerical computation.

JS: So why didn’t some of these newer deep learning toolsets get built on top of Spark? Why were they all separate?

MZ: That’s a good question. I think a lot of the reason is probably just because people, you know, just started with a different programming language. A lot of these were started with C++, for example, and of course, they need to run on the GPU using CUDA which is much easier to do from C++ than from Java. But one thing we’re seeing is really good connectors between Spark and these tools. So, for example, TensorFlow has a built-in Spark connector that can be used to get data from Spark and convert it to TFRecords. It also actually connects to HDFS and different sorts of big data file systems. At the same time, in the Spark community, there are packages like deep learning pipelines from Databricks and quite a few other packages as well that let you setup a workflow of steps that include these deep learning engines and Spark processing steps.

“None of the earlier engines [prior to Apache Spark] had focused on establishing a platform and an ecosystem.”

JS: If you were rebuilding these deep learning tools and frameworks, would you recommend that people build it on top of Spark? (i.e. instead of the current approach, of having a tool, but they have an approach of doing distributed computing across GPUs on their own.)

MZ: It’s a good question. I think initially it was easier to write GPU code directly, to use CUDA and C++ and so on. And over time, actually, the community has been adding features to Spark that will make it easier to do that in there. So, there have definitely been a lot of proposals and design work to make GPUs a first-class resource. There’s also this effort called Project Hydrogen, which is to change the scheduler to support these MPI-like batch jobs. So hopefully it will become a good platform to do that, internally. I think one of the main benefits of that, again for users, is that they can program in one programming language, they can learn just one way to deploy and manage clusters, and it can do deep learning and the data preprocessing and analytics after that.

JS: That’s great. So, Spark – and Databricks as commercialized Spark – seems to be capable of doing many things in one place. But what is it not good at? Can you share some areas where people should not be stretching Spark?

MZ: Definitely. One of the things it doesn’t do, by design, is transactional workloads where you have fine-grained updates. So, even though it might seem like you can store a lot of data in memory and then update it and serve it, it is not really designed for that. It is designed for computations that have a large amount of data in each step – it could be streaming large continuous streams, or it could be batch – but it is not for these point queries.

And I would say the other thing it does not do is it doesn’t have a built-in persistent storage system. It is designed so it’s just a compute engine, and you can connect it to different types of storage. That actually makes a lot of sense, especially in the cloud, with separating compute and storage and scaling them independently. But it is different from, you know, something like a database where the storage and compute are co-designed to live together.

JS: That makes sense. What do you think of frameworks like Ray for machine learning?

MZ: There are a lot of new frameworks coming out for machine learning, and it’s exciting to see the innovation there, both in the programming models, the interfaces, and how to work with them. I think Ray has been focused on reinforcement learning, where one of the main things you have to do is spawn a lot of little independent tasks, so it’s a bit different from a big data framework like Spark, where you’re doing one computation on lots of data – these are separate computations that will take different amounts of time. As far as I know, users are starting to use it and getting good traction with it. So, it will be interesting to see how these things come about.

I think the thing I’m most interested in, both for Databricks products and for Apache Spark, is just enabling it to be a platform where you can combine the best algorithms, libraries and frameworks and so on, because that’s what seems to be very valuable to end users, is they can orchestrate a workflow and just program it as easily as writing a single machine application where you just import a bunch of libraries.

JS: Now, stepping back, what do you see as the most exciting applications that are happening in AI today?

MZ: Yeah, it depends on how recent. I mean, in the past five years, deep learning is definitely the thing that has changed a lot of what we can do, and, in particular, it has made it much easier to work with unstructured data – so images, text, and so on. So that is pretty exciting.

I think, honestly, for like wide consumption of AI, the cloud computing AI services make it significantly easier. So, I mean, when you’re doing machine learning AI projects, it’s really important to be able to iterate quickly because it’s all about, you know, about experimenting, about finding whether something will work, failing fast if a particular idea doesn’t work. And I think the cloud makes it much easier.

JS: Cloud AI is super exciting, I completely agree. Now, at Stanford, being a professor, you must see a lot of really exciting pieces of work that are going on, both at Stanford and at startups nearby. What are some examples?

MZ: Yeah, there are a lot of different things. One of the things that is really useful for end users is all the work on transfer learning, and in general all the work that lets you get good results with AI using smaller training datasets. There are other approaches, like weak supervision, that do that as well. And the reason that’s important is that for web-scale problems you have a lot of labeled data – so for something like web search you can solve it – but for many scientific or business problems you don’t have that. So, how can you learn from a large dataset that’s not quite in your domain, like the web, and then apply it to something like, say, medical images, where only a few hundred patients have a certain condition, so you can’t get a zillion images? That’s where I’ve seen a lot of exciting stuff.

But yeah, there’s everything from new hardware for machine learning where you throw away the constraints that the computation has to be precise and deterministic, to new applications, to things like, for example security of AI, adversarial examples, verifiability, I think they are all pretty interesting things you can do.

JS: What are some of the most interesting applications you have seen of AI?

MZ: So many different applications to start with. First of all, we’ve seen consumer devices that bring AI into every home, or every phone, or every PC – these have taken off very quickly and it’s something that a large fraction of customers use, so that’s pretty cool to see.

In the business space, probably some of the more exciting things are actually dealing with image data, where, using deep learning and transfer learning, you can actually start to reliably build classifiers for different types of domain data. So, whether it’s maps, understanding satellite images, or even something as simple as people uploading images of a car to a website and you try to give feedback on that so it’s easier to describe it, a lot of these are starting to happen. So, it’s kind of a new class of data, visual data – we couldn’t do that much with it automatically before, and now you can get both like little features and big products that use it.

JS: So what do you see as the future of Databricks itself? What are some of the innovations you are driving?

MZ: Databricks, for people not familiar – we offer, basically, a Unified Analytics Platform, where you can work with big data, mostly through Apache Spark, and collaborate on it in an organization. So you can have different people developing, say, notebooks to perform computations, you can have people developing production jobs, you can connect these together into workflows, and so on.

So, we’re doing a lot of things to further expand on that vision. One of the things that we announced recently is what we call the machine learning runtime, where we have preinstalled versions of popular machine learning libraries like XGBoost or TensorFlow or Horovod on your Databricks cluster, so you can set those up as easily as you could set up an Apache Spark cluster in the past. And then another product that we featured a lot at our Spark Summit conference this year is Databricks Delta, which is basically a transactional data management layer on top of cloud object stores that lets us do things like indexing and reliable exactly-once stream processing at very massive scale. And that’s a problem that all our users have, because all our users have to set up a reliable data ingest pipeline.

JS: Who are some of the most exciting customers of Databricks and what are they doing?

MZ: There are a lot of really interesting customers doing pretty cool things. So, at our conference this year, for example, one of the really cool presentations we saw was from Apple. So, Apple’s internal information security group – this is the group that does network monitoring, basically gets hundreds of terabytes of network events per day to process, to detect intrusions and information security problems. They spoke about using Databricks Delta and streaming with Apache Spark to handle all of that – so it’s one of the largest applications people have talked about publicly, and it’s very cool because the whole goal there – it’s kind of an arms race between the security team and attackers – so you really want to be able to design new rules, new measurements and add new data sources quickly. And so, the ease of programming and the ease of collaborating with this team of dozens of people was super important.

We also have some really exciting health and life sciences applications, so some of these are actually starting to discover new drugs that companies can actually productionize to tackle new diseases, and this is all based on large scale genomics and statistical studies.

And there are a lot of more fun applications as well. Like actually the largest video game in the world, League of Legends, they use Databricks and Apache Spark to detect players that are misbehaving or to recommend items to people or things like that. These are all things that were featured at the conference.

JS: If you had one piece of advice for developers and customers using Spark or Databricks, or guidance on what they should learn, what would it be?

MZ: It’s a good question. There are a lot of high-quality training materials online, so I would say definitely look at some of those for your use case and see what other people are doing in that area. The Spark Summit conference is also a good way to see videos and talks and we make all of those available for free, the goal of that is to help and grow the developer community. So, look for someone who is doing similar things and be inspired by that and kinda see what the best practices are around that, because you might see a lot of different options for how to get started and it can be hard to see what the right path is.

JS: One last question – in recent years there’s been a lot of fear, uncertainty and doubt about AI, and a lot of popular press. Now – how real are they, and what do you think people should be thinking?

MZ: That’s a good question. My personal view is – this sort of evil artificial general intelligence stuff – we are very far away from it. And basically, if you don’t believe that, I would say just try doing machine learning tutorials and see how these models break down – you get a sense for how difficult that is.

But there are some real challenges that will come from AI, so I think one of them is the same challenge as with all technology which is, automation – how quickly does it happen. Ultimately, after automation, people usually end up being better off, but it can definitely affect some industries in a pretty bad way and if there is no time for people to transition out, that can be a problem.

I think the other interesting problem, which there is always a discussion about, is basically access to data, privacy, managing the data, algorithmic discrimination – I think we are still figuring out how to handle that. Companies are doing their best, but there are also many unknowns as to how these techniques will behave. That’s why we’ll see better best practices or regulations and things like that.

JS: Well, thank you Matei, it’s simply amazing to see the innovations you have driven, and looking forward to more to come.

MZ: Thanks for having me.

“When you’re doing machine learning AI projects, it’s really important to be able to iterate quickly because it’s all about experimenting, about finding whether something will work, failing fast if a particular idea doesn’t work. And I think the cloud makes it much easier.”

We hope you enjoyed this blog post. This being our first episode in the series, we are eager to hear your feedback, so please share your thoughts and ideas below.

The AI / ML Blog Team

Resources

Azure preparedness for Hurricane Florence

As Hurricane Florence continues its journey to the mainland, our thoughts are with those in its path. Please stay safe. We’re actively monitoring Azure infrastructure in the region. We at Microsoft have taken all precautions to protect our customers and our people.

Our datacenters (US East, US East 2, and US Gov Virginia) have been reviewed internally and externally to ensure that we are prepared for this weather event. Our onsite teams are prepared to switch to generators if utility power is unavailable or unreliable. All our emergency operating procedures have been reviewed by our team members across the datacenters, and we are ensuring that our personnel have all necessary supplies throughout the event.

As a best practice, all customers should review their disaster recovery plans, and all mission-critical applications should take advantage of geo-replication.

Rest assured that Microsoft is focused on the readiness and safety of our teams, as well as our customers’ business interests that rely on our datacenters. 

You can reach our handle @AzureSupport on Twitter; we are online 24/7. Any business impact to customers will be communicated through Azure Service Health in the Azure portal.

If there is any change to the situation, we will keep customers informed of Microsoft’s actions through this announcement.

For guidance on disaster recovery best practices, see the references below.

Playing to the crowd and other social media mandates with Dr. Nancy Baym – Microsoft Research

Dr. Nancy Baym, Principal Researcher from Microsoft Research

Episode 41, September 12, 2018

Dr. Nancy Baym is a communication scholar, a Principal Researcher in MSR’s Cambridge, Massachusetts, lab, and something of a cyberculture maven. She’s spent nearly three decades studying how people use communication technologies in their everyday relationships and written several books on the subject. The big take away? Communication technologies may have changed drastically over the years, but human communication itself? Not so much.

Today, Dr. Baym shares her insights on a host of topics ranging from the arduous maintenance requirements of social media, to the dialectic tension between connection and privacy, to the funhouse mirror nature of emerging technologies. She also talks about her new book, Playing to the Crowd: Musicians, Audiences and the Intimate Work of Connection, which explores how the internet transformed – for better and worse – the relationship between artists and their fans.

Related:


TRANSCRIPT

Nancy Baym: It’s not just that it’s work, it’s that it’s work that never, ever ends. Because your phone is in your pocket, right? So, you’re sitting at home on a Sunday morning, having a cup of coffee and even if you don’t do it, there’s always the possibility of, “Oh, I could Tweet this out to my followers right now. I could turn this into an Instagram story.” So, the possibility of converting even your most private, intimate moments into fodder for your work life is always there, now.

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: Dr. Nancy Baym is a communication scholar, a Principal Researcher in MSR’s Cambridge, Massachusetts, lab, and something of a cyberculture maven. She’s spent nearly three decades studying how people use communication technologies in their everyday relationships and written several books on the subject. The big take away? Communication technologies may have changed drastically over the years, but human communication itself? Not so much.

Today, Dr. Baym shares her insights on a host of topics ranging from the arduous maintenance requirements of social media, to the dialectic tension between connection and privacy, to the funhouse mirror nature of emerging technologies. She also talks about her new book, Playing to the Crowd: Musicians, Audiences and the Intimate Work of Connection, which explores how the internet transformed – for better and worse – the relationship between artists and their fans. That and much more on this episode of the Microsoft Research Podcast.

Host: Nancy Baym, welcome to the podcast.

Nancy Baym: Nice to be here.

Host: So, you’re a principal researcher at the MSR lab in Cambridge, Massachusetts, not to be confused with the one in Cambridge, England. Give our listeners an overview of the work that goes on in New England and of your work in particular. What are the big issues you’re looking at? Why is the work important? Basically, what gets you up in the morning?

Nancy Baym: So, the lab in New England is one of Microsoft’s smaller research labs. We’re very interdisciplinary, so we have people in my basic area, which is social media and social issues around technology from humanistic and social scientific perspectives. And we have that alongside people working on machine learning and artificial intelligence, people working on economics, people working on cryptography, people working on math and complexity theory, people doing algorithmic game theory, and then we also have a bioinformatics and medicine component to this program also. So, we’re really interested in getting people from very different perspectives together and listening to each other and seeing what kinds of new ideas get sparked when you get people from radically different disciplines together in the same environment and you give them long periods of time to get to know one another and get exposed to the kinds of work that they do. So, that’s the lab as a whole. My group is… we call ourselves the Social Media Collective, which is a, sort of, informal name for it. It’s not an official title but it’s sort of an affectionate one. There are three core people here in our New England lab, which would be me, Mary Gray and Tarleton Gillespie, and then we have a postdoc and we have, in the summer, PhD interns, we have a research assistant, and we’re all interested in questions around how people use technologies, the kinds of work that people do through technologies, the kinds of work that technologies create for people, and the ways that that affects them, their identities, their relationships, their communities, societies as a whole.

Host: You know, as you talk about the types of researchers that you have there, I wonder, is New England unique among the labs at Microsoft?

Nancy Baym: I think we are, in that we are more interdisciplinary than many of them. I mean our Redmond lab, obviously, has got people from a huge range of disciplines, but it’s also got a huge number of people, whereas we’re a much smaller group. We’re on one floor of a building and there are, you know, anywhere from twenty to fifty of us, depending on how many visitors are in the lab and how many interns are around or what not, but that’s still a really small fraction of the Redmond group. So, I think anybody in a particular field finds themselves with many fewer colleagues from their own field relative to their colleagues as a whole in this lab. Whereas, I think most of our labs are dominated much more by people from computer science. Obviously, computer science is well-represented here, but we have a number of other fields as well. So, I think that foregrounding of interdisciplinarity is unique to this lab.

Host: That’s great. So, the social science research in the context of social computing and social media, it’s an interesting take on research in general at Microsoft, which is a high-tech company. How do you think the work that you do informs the broader work of Microsoft Research and Microsoft in general?

Nancy Baym: I would like to think that the kinds of work that I do, and that my colleagues are doing, are helping the company, and technology companies in general, think in more sophisticated ways about the ways that the technologies that we create get taken up and get used and with what consequences. I think that people who build technologies, they really want to help people do things. And they’re focused on that mission. And it can be difficult to think about, what are all the ways that that might get taken up besides the way that I imagine it will get taken up, besides the purpose that I’m designing it for? So, in some sense, I think part of our group is here to say, here’s some unexpected things you might not be thinking about. Here’s some consequences, or in the case of my own work, I’d like to think about the ways that technologies are often pushing people toward more connection and more time with others and more engagement and more sharing and more openness. And yet, people have very strong needs for privacy and for distance and for boundaries and what would it mean, for example, to think about how we could design technologies that helped people draw boundaries more efficiently rather than technologies that were pushing them toward openness all the time?

Host: I love that. And I’m going to circle back, in a bit, to some of those issues of designing for dialectic and some of the issues around unintended consequences. But first, I want to talk about a couple books you wrote. Before we talk about your newest book, I want to spend a little time talking about another book you wrote called Personal Connections in the Digital Age. And in it, you challenge conventional wisdom that tends to blame new technologies for what we might call old problems. Talk a little bit about Personal Connections in the Digital Age.

Nancy Baym: That book came out of a course that I had been teaching for, oh gosh, fifteen, sixteen, seventeen years, something like that, about communication and the internet, and one of the things that tends to come up is just what you’re talking about. This idea that people tend to receive new technologies as though this is the first time these things have ever been disrupted. So, part of what that book tries to do is to show how the way that people think and talk about the internet has these very long histories in how people think and talk about other communication technologies that have come before. So, for example, when the telephone was invented, there was a lot of concern that the telephone was going to lead to social disengagement, particularly among women, who would spend all the time talking on the phone and would stop voting. Um… (laughter) which doesn’t sound all that different from some contemporary ways that people talk about phones! Only now it’s the cell phones that are going to cause all that trouble. It’s that, but it’s also questions around things like, how do we present ourselves online? How do we come to understand who other people are online? How does language change when it’s used online? How do we build relationships with other people? How do we maintain relationships with people who we may have met offline? And also, how do communities and social networks form and get maintained through these communication technologies? So, it’s a really broad sweep. I think of that book as sort of the “one stop shop” for everything you need to know about personal connections in the digital age. If you just want to dive in and have a nice little compact introduction to the topic.

Host: Right. There are other researchers looking into these kinds of things as well. And is your work sort of dovetailing with those findings in that area of personal relationships online?

Nancy Baym: Yeah, yeah. There’s quite a bit of work in that field. And I would say that, for the most part, the body of work which I review pretty comprehensively in Personal Connections in the Digital Age tends to show this much more nuanced, balanced, “for every good thing that happens, something bad happens,” and for all of the sort of mythologies about “it’s destroying children” or “you can’t trust people you meet online,” or “people aren’t their real selves” or even the idea that there’s something called “real life,” which is separate from what happens on the internet, the empirical evidence from research tends to show that, in fact, online interaction is really deeply interwoven with all of our other forms of communication.

Host: I think you used the word “moral panic” which happens when a new technology hits the scene, and we’re all convinced that it’s going to ruin “kids today.” They won’t have manners or boundaries or privacy or self-control, and it’s all technology’s fault. So that’s cool that you have a kind of answer to that in that book. Let’s talk about your new book which is super fascinating: Playing to the Crowd: Musicians, Audiences and the Intimate Work of Connection. Tell us how this book came about and what was your motivation for writing it?

Nancy Baym: So, this book is the result of many years of work, but it came to fruition because I had done some early work about online fan community, particularly soap opera fans, and how they formed community in the early 1990s. And then, at some point, I got really interested in what music fans were doing online and so I started a blog where I was posting about music fans and other kinds of fans and the kinds of audience activities that people were doing online and how that was sort of messing with relationships between cultural producers and audiences. And that led to my being invited to speak at music industry events. And what I was seeing there was a lot of people with expertise saying things like, “The problem is, of course, that people are not buying music anymore, so the solution to this problem is to use social media to connect with your audience because if you can connect with them, and you can engage them, then you can monetize them.” And then I was seeing the musicians ask questions, and the kinds of questions that they were asking seemed very out-of-step with the kind of advice that they were being given. So, they would be asking questions like, do I have to use all of the sites? How do I know which ones to use? So, I got really interested in this question of, sort of, what, from the point of view of these people who were being told that their livelihood depends on creating some kind of new social relationship using these media with audiences, what is this call to connect and engage really about? What does it feel like to live with that? What are the issues it raises? Where did it come from?
And then this turned into a much larger-scoped project thinking about musicians as a very specific case, but one with tremendous resonance for the ways that so many workers in a huge variety of fields now, including research, feel compelled to maintain some kind of visible, public persona that engages with and courts an audience so that when our next paper comes out, or our next record drops, or our next film is released or our next podcast comes out, the audience is already there and interested and curious and ready for it.

Host: Well let me interject with a question based on what you said earlier. How does that necessarily translate into monetization? I can see it translating into relationship and, you know, followership, but is there any evidence to support the you know…?

Nancy Baym: It’s magic, Gretchen, magic!

Host: OK. I thought so! I knew it!

Nancy Baym: You know, I work with economists and I keep saying, “Guys, let’s look at this. This is such a great research problem.” Is it true, right? Because you will certainly hear from people who work at labels or work in management who will say, “We see that our artists who engage more do better.” But in terms of any large scale “what works for which artists when?” and “does it really work across samples?” – the million-dollar question that you just asked is, does it actually work? And I don’t know that we know the answer to that question. For some individuals, some of the time, yes. For the masses, reliably, we don’t know.

Host: Well and the other thing is, being told that you need to have this social media presence. It’s work, you know?

Nancy Baym: That’s exactly the point of the book, yeah. And it’s not just that it’s work, it’s that it’s work that never, ever ends. Because your phone is in your pocket, right? So, you’re sitting at home on a Sunday morning, having a cup of coffee, and even if you don’t do it, there’s always the possibility of, “Oh, I could tweet this out to my followers right now. I could turn this into an Instagram story.” So, the, the possibility of converting even your most private, intimate moments into fodder for your work life is always there, now. And the promise is, “Oh, if you get a presence, then magic will happen.” But first of all, it’s a lot of work to even create the presence and then to maintain it, you have to sell your personality now. Not just your stuff. You have to be about who you are now and make that identity accessible and engaging and what not. And yet it’s not totally clear that that’s, in fact, what audiences want. Or if it is what audiences want, which audiences and for which kinds of products?

(music plays)

Host: Well, let’s get back to the book a little bit. In one chapter, there’s a subsection called How Music Fans Came to Rule the Internet. So, Nancy, how did music fans come to rule the internet?

Nancy Baym: So, the argument that I make in that chapter is that from the earliest, earliest days of the internet, music fans, and fans in general, were not just using the internet for their fandom, but were people who were also actively involved in creating the internet and creating social computing. So, I don’t want to say that music fans are the only people who were doing this, because they weren’t, but, from the very beginnings of online interaction, in like 1970, you already had the very people who are inventing the concept of a mailing list, at the same time saying, “Hey, we could use one of these to exchange Grateful Dead tickets, ‘cause I have some extra ones and I know there’s some other people in this building who might want them.” So, you have people at Stanford’s Artificial Intelligence laboratory in the very beginning of the 1970s saying, “Hey, we could use this enormous amount of computing power that we’ve got to digitize The Grateful Dead lyrics.” You have community computing projects like Community Memory being launched in the Bay Area putting their first terminal in a record store as a means of bringing together community. And then, from those early, early moments throughout, you see over and over and over again, music fans creating different forms of online community that then end up driving the way that the internet develops, peer-to-peer file sharing being one really clear example of a case where music fans helped to develop a technology to serve their needs, and by virtue of the success of that technology, ended up changing not just the internet, but industries that were organized around distributing cultural materials.

Host: One of the reviewers of Playing to the Crowd, and these reviews tend to be glowing, right? But he said, “It’ll change the way we think about music, technology and people.” So, even if it didn’t change everything about the way we think about music technology and people, what kinds of sort of “ah-ha findings” might people expect to find in the book?

Nancy Baym: I think one of the big ah-has is the extent to which music is a form of communication which has become co-opted, in so many ways, by commercial markets, and alongside that, the ways in which personal relationships and personal communication, have also become co-opted by commercial markets. Think about the ways that communication platforms monetize our everyday, friendly interaction through advertising. And the way that these parallel movements of music and relational communication from purely social activities to social activities that are permeated by commercial markets raises dialectic tensions that people then have to deal with as they’re continually navigating moving between people and events and circumstances and moments in a world that is so infused by technology and where our relationships are infused by technology.

Host: So, you’ve used the word “dialectic” in the context of computer interface design, and talked about the importance of designing for dialectic. Talk about what you mean by that and what kinds of questions arise for a developer or a designer with that mind set?

Nancy Baym: So, “dialectic” is one of the most important theoretical concepts to me when I think about people’s communication and people’s relationships in this project, but, in general, it’s a concept that I come back to over and over and over, and the idea is that we always have competing impulses that are both valid, and which we have to find balance between. So, a very common dialectic in interpersonal relationships is the desire to, on the one hand, be connected to others, and on the other, to be autonomous from others. So, we have that push and pull between “I want us to be part of each other’s lives all the time, and also leave me alone to make my own decisions.” (laughter) So that dialectic tension is not that one is right and one is wrong. It’s that there are, as some of the theorists I cite on this argue, probably infinite dialectic tensions between “I want this” and “I also want that,” its opposite, right? And so, if we think about social interaction, instead of it being some sort of linear model where we start at point A with somebody and we move onto B and then C and then D, if we think of it instead as, even as we’re moving from A to B to C, that’s a tightrope. But at any given moment we can be toppling into one side or the other if we’re not balancing them carefully. So, if we think about a lot of the communication technologies that are available to us right now, they are founded, often quite explicitly, on a model of openness and connection and sharing. So, those are really, really valuable positions. But they’re also ends of dialectics that have opposite ends that are also very valid. So, all of these ways in which we’re pushed to be more open, more connected, to share more things, they are actually always in conflict within us with desires to be protective of other people or protective of ourselves, to have some distance from other people, to have autonomy.
And to be able to have boundaries that separate us from others, as well as boundaries that connect us to one another. So, my question for designers is, how could we design in ways that make it easier for people to adjust those balances? In a way, you could sort of think about it as, what if we made the tightrope, you know, thicker so that it were easier for people to balance on, and you didn’t need to be so good at it, to make it work moment-to-moment?

Host: You know, everything you’ve just said makes me think of, you know, say, someone who wants to get involved in entertainment, in some way, and one of the plums of that is being famous, right? And then you find…

Nancy Baym: Until they are.

Host: …Until you are… that you don’t have control over all the attention you get and so that dialectic of “I want people to notice me/I want people to leave me alone” becomes wildly exacerbated there. But I think, you know, we all see “over-sharers,” as my daughter calls them, on social media. It’s like, keep looking at me all the time. It’s like, too much information. Have some privacy in your life…

Nancy Baym: Well you know, but that’s a great case, because I would say too much information is not actually a property of information, or of the person sending that information, it’s a property of the person receiving that information. Because, in fact, for some, it’s not going to be too much information. For some, it’s going to be exactly the right amount of information. So, I think of the example, of, from my point of view, a number of people who are parents of young children post much too much information on social networks. In particular, I’m really, really turned off by hearing about the details of their trivial illnesses that they’re going through at any given moment. You know, I mean if they got a real illness, of course I want to hear about it, but if you know, they got a fever this week and they’re just feeling a little sick, I don’t really need daily updates on their temperature, for instance. Um… on the other hand, I look at that, and I say, “Oh, too much information.” But then I say, “I’m not the audience for that.” They’ve got 500-600 friends. They probably put that there for grandma and the cousins who actually really do care. And I’m just not the audience. So, it’s not that that’s too much information. It’s that that information wasn’t meant for me. And instead of blaming them for having posted it, maybe I should just look away and move on to the next item in my feed. That’s ok, too. I’m sure that some of the things that I share strike some people as too much information but then, I’ll tell you what, some of the things that I post that I think of as too much information, those are often the ones that people will later, in other contexts, say, “Oh my gosh, it meant so much to me that you posted about… whatever.” So, you know, we can’t just make these judgements about the content of what other people are producing without understanding the contexts in which it’s being received, and by whom.

Host: That is such a great reminder to us to have grace.

Nancy Baym: Grace for other people, that too, yeah.

Host: You’ve been watching, studying and writing about cyberculture for a long time. Going back a ways, what did you see, or even foresee, when you started doing this research and what if anything has surprised you along the way?

Nancy Baym: Well, it’s a funny thing. I mean, when I started doing this research, it was 1991. And the landscape has changed so much since then, so that the kinds of things that I could get away with being an insightful scholar for saying in 1991 are practically laughable now, because people just didn’t understand, at that time, that these technologies were actually going to be really socially useful. That people were going to use these technologies to present themselves to others, to form relationships, to build communities, that they were going to change the way audiences engaged, that they were going to change politics, that they were going to change so many practices of everyday life. And I think that those of us who were involved in cyberculture early, whether it was as researchers or just participants, could see that what was happening there was going to become something bigger than it was in those early days.

(music plays)

Host: I ask all of the researchers that come on the podcast some version of the question, “Is there anything that keeps you up at night?” To some degree, I think your work addresses that. You know, what ought we to be kept up at night about, and how, how ought we to address it? Is there anything that keeps you up at night, or anything that should keep us up at night that we should be thinking about critically as we’re in this landscape now?

Nancy Baym: Oh gosh, do any of us sleep anymore at all? (laughter) I mean I think what keeps me up nights is thinking, is it still ok to study the personal and the ordinary when it feels like we’re in such extraordinary, tumultuous and frightening times, uh, nationally and globally? And I guess what I keep coming back to, when I’m lying awake at 4 in the morning saying, “Oh, maybe I just need to start studying social movements and give up on this whole interpersonal stuff.” And then I say to myself, “Wait a minute. The reason that we’re having so much trouble right now, at its heart, is that people are not having grace in their relations with one another,” to go back to your phrase. That what we really, really need right now more than anything is to be reconnected to our capacity for human connection with others. And so, in that sense, then, I kind of put myself to sleep by saying, “OK, there’s nothing more important than actual human connection and respect for one another.” And so that’s what I’m trying to foster in my work. So, I’m just going to call that my part and write a check for some of those other causes I can’t contribute to directly.

Host: I, I love that answer. And that actually leads beautifully into another question which is that your social science work at MSR is unique at industrial research labs. And I would call Microsoft, still, an industrial, you know, situation.

Nancy Baym: Definitely.

Host: So, you get to study unique and challenging research problems.

Nancy Baym: I have the best job in the world.

Host: No, I do, but you got a good one. Because I get to talk to people like you. But what do you think compels a company like Microsoft, perhaps somewhat uniquely, to encourage researchers like you to study and publish the things you do? What’s in it for them?

Nancy Baym: My lab director, Jennifer Chayes, talks about it as being like a portfolio which I think is, is a great way to think about it. So, you have this cast of researchers in your portfolio and each of them is following their own path to satisfying their curiosity and by having some of those people in that portfolio who really understand people, who really understand the way that technologies play out in ordinary people’s everyday lives and lived experiences, there may be moments where that’s exactly the stock you need at that moment. That’s the one that’s inflating and that’s the expertise that you need. So, given that we’re such a huge company, and that we have so many researchers studying so many topics, and that computing is completely infused with the social world now… I mean, if we think about the fact that we’ve shifted to so much cloud and that clouds are inherently social in the sense that it’s not on your private device, you have to trust others to store your data, and so many things are now shared that used to be individualized in computing. So, if computing is infused with the social, then it just doesn’t even really make sense for a tech company to not have researchers who understand the social, and who are studying the social, and who are on hand with that kind of expertise.

Host: As we close, Nancy, what advice would you give to aspiring researchers, maybe talking to your 25-year-old self, who might be interested in entering this field now, which is radically different from where it was when you started looking at it. What, what would you say to people that might be interested in this?

Nancy Baym: I would say, remember that there is well over a hundred years of social theory out there right now, and the fact that we have new communication technologies does not mean that people have started from scratch in their communication, and that we need to start from scratch in making sense of it. I think it’s more important than ever, when we’re thinking about new communication technologies, to understand communication behavior and the way that communication works, because that has not fundamentally transformed. The media through which we use it have, but the way communication works to build identity, community, relationships, that has not fundamentally, magically, become something different. The same kind of interpersonal dynamics are still at play in many of these things. I think of the internet and communication technologies as being like funhouse mirrors. Where some phenomena get made huge and others get made small, so there’s a lot of distortion that goes on. But nothing entirely new is reflected that never existed before. So, it’s really important to understand the precedents for what you’re seeing, both in terms of theory and similar phenomena that might have occurred in earlier incarnations, in order to be able to really understand what you’re seeing in terms of both what is new, but also what’s not new. Because otherwise, what I see a lot in young scholarship is, “Look at this amazing thing people are doing in this platform with this thingy.” And it is really interesting, but it also actually looks a whole lot like what people were doing on this other platform in 1992, which also kind of looks a lot like what people were doing with ‘zines in the 1920s. And if we want to make arguments about what’s new and what’s changing because of these things, it’s so important that we understand what’s not new and what these things are not changing.

(music plays)

Host: Nancy Baym, it’s been an absolute delight talking to you today. I’m so glad you took time to talk to us.

Nancy Baym: Alrighty, bye.

To learn more about Dr. Nancy Baym, and how social science scholars are helping real people understand and navigate the digital world, visit Microsoft.com/research.

How PhotoDNA for Video is being used to fight online child exploitation – On the Issues

PhotoDNA has also enabled content providers to remove millions of illegal photographs from the internet; helped convict child sexual predators; and, in some cases, helped law enforcement rescue potential victims before they were physically harmed.

In the meantime, though, the volume of child sexual exploitation material being shared in videos instead of still images has ballooned. The number of suspected videos reported to the CyberTipline managed by the National Center for Missing and Exploited Children (NCMEC) in the United States increased tenfold from 312,000 in 2015 to 3.5 million in 2017. As required by federal law, Microsoft reports all instances of known child sexual abuse material to NCMEC.

Microsoft has long been committed to protecting its customers from illegal content on its products and services, and applying technology the company already created to combating this growth in illegal videos was a logical next step.

“Child exploitation video content is a crime scene. After exploring the development of new technology and testing other tools, we determined that the existing, widely used PhotoDNA technology could also be used to effectively address video,” says Courtney Gregoire, Assistant General Counsel with Microsoft’s Digital Crimes Unit. “We don’t want this illegal content shared on our products and services. And we want to put the PhotoDNA tool in as many hands as possible to help stop the re-victimization of children that occurs every time a video appears again online.”

A recent survey of survivors of child sexual abuse from the Canadian Centre for Child Protection found that the online sharing of images and videos documenting crimes committed against them intensified feelings of shame, humiliation, vulnerability and powerlessness. As one survivor was quoted in the report: “The abuse stops and at some point also the fear for abuse; the fear for the material never ends.”

Graphic showing how PhotoDNA for Video creates hashes from video frames and compares them to known images

The original PhotoDNA helps put a stop to this online recirculation by creating a “hash” or digital signature of an image: converting it into a black-and-white format, dividing it into squares and quantifying the shading in each square. It does not employ facial recognition technology, nor can it identify a person or object in the image. It compares an image’s hash against a database of images that watchdog organizations and companies have already identified as illegal. The Internet Watch Foundation (IWF), which has been compiling a reference database of PhotoDNA signatures, now has 300,000 hashes of known child sexual exploitation materials.
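The general idea behind those steps (convert to grayscale, divide into squares, quantify the shading, then compare hashes) can be illustrated with a toy perceptual hash. PhotoDNA itself is proprietary and far more robust; everything in this sketch, including the function names and the 4x4 grid size, is a simplified assumption for illustration only.

```python
def perceptual_hash(pixels, grid=4):
    """Toy perceptual hash: average intensity per grid cell, thresholded to bits.

    `pixels` is a square 2D list of grayscale values (0-255). Real PhotoDNA
    uses a more sophisticated signature; this only mirrors the general idea.
    """
    n = len(pixels)
    cell = n // grid
    means = []
    for gy in range(grid):
        for gx in range(grid):
            block = [pixels[y][x]
                     for y in range(gy * cell, (gy + 1) * cell)
                     for x in range(gx * cell, (gx + 1) * cell)]
            means.append(sum(block) / len(block))
    overall = sum(means) / len(means)
    # One bit per cell: is the cell brighter than the image-wide average?
    return tuple(m > overall for m in means)

def hamming(h1, h2):
    """Count of differing bits; a small distance indicates a near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))
```

Because each bit records only whether a cell is brighter than the image average, uniform edits such as brightening leave the hash unchanged, which is what lets a hash comparison catch altered copies rather than only exact duplicates.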

PhotoDNA for Video breaks down a video into key frames and essentially creates hashes for those screenshots. In the same way that PhotoDNA can match an image that has been altered to avoid detection, PhotoDNA for Video can find child sexual exploitation content that’s been edited or spliced into a video that might otherwise appear harmless.

“When people embed illegal videos in other videos or try to hide them in other ways, PhotoDNA for Video can still find it. It only takes a hash from a single frame to create a match,” says Katrina Lyon-Smith, senior technical program manager who has implemented the use of PhotoDNA for Video on Microsoft’s own services.
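The single-frame matching described here can be sketched simply: hash each key frame, and flag the whole video if any one frame's hash falls within a small distance of a known hash. The function name and threshold below are illustrative assumptions, not the real PhotoDNA for Video implementation.

```python
def video_matches(frame_hashes, known_hashes, max_distance=2):
    """Flag a video if ANY key-frame hash is within `max_distance` bits of a
    known hash. Needing only one matching frame is what lets detection
    survive illegal content being spliced into otherwise harmless footage."""
    def hamming(h1, h2):
        return sum(a != b for a, b in zip(h1, h2))
    return any(hamming(f, k) <= max_distance
               for f in frame_hashes for k in known_hashes)
```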

Beyond our four walls: How Microsoft is accelerating sustainability progress – Microsoft on the Issues

Our planet is changing — sea levels are rising, weather is becoming more extreme and our natural resources are being depleted faster than the earth’s ecosystems can restore them. These changes pose serious threats to the future of all life on our tiny blue dot, and they challenge us to find new solutions, work together and leverage the diversity of human potential to help right the course.

The good news is that progress is being made across the globe, and non-state actors, from cities to companies to individual citizens, are setting bold commitments and accelerating their work on climate change. But it’s also clear that we all must raise our ambitions, couple that with action and work more swiftly than ever.

At Microsoft, we fully understand and embrace this challenge. That is why, this week, at the Global Climate Action Summit, Microsoft is sharing our vision for a sustainable future — one where everyone everywhere is experiencing and deploying the power of technology to help address climate change and build a more resilient future. We are optimistic about what progress can be made because we are already seeing results of this technology-first enablement approach.

Today, we are unveiling five new tools, partnerships and pilot-project results that are already reducing emissions in manufacturing, advancing environmental research and showing immense potential to disrupt the building and energy sectors for a lower-emission future.

These include:

  • A new, open-source tool to find, use and incentivize lower-carbon building materials: To create low-carbon buildings, we need to choose low-carbon building materials. But right now, choosing these materials is challenging because the data is not readily available and what we do have lacks transparency to ensure it’s accurate. We are the first large corporate user of a new tool to track the carbon emissions of raw building materials, introduced by Skanska and supported by the University of Washington Carbon Leadership Forum, Interface and C-Change Labs, called the Embodied Carbon Calculator for Construction (EC3). We’ll use this in our new campus remodel. Our early estimates are that a low-carbon building in Seattle has approximately half the carbon emissions of an average building, so this could have a substantial impact on reducing carbon emissions in our remodel and eventually the entire built environment. We’re proud to not only be piloting it, but that this open-source tool is also running on Microsoft Azure.
  • The results of a “factory of the future” and solar-panel deployment at one of our largest suppliers in China: We partnered with our supplier’s management team to develop and install an energy-smart building solution running on Microsoft Azure to monitor and address issues as they emerge, saving energy and money. Additionally, Microsoft funded a solar panel installation, which generated more than 250,000 kilowatt-hours of electricity in the past fiscal year. This integrated solution is estimated to reduce emissions by approximately 3 million pounds a year.

  • The successful pilot of a grid-interactive energy storage battery: Solving storage is a critical piece of transforming the energy sector. That is why we’re excited to share the results of a new pilot in Virginia, in partnership with Eaton and PJM Interconnection. We took a battery that typically sits in our datacenter as a backup system and hooked it up to the grid to receive signals about when to take in power, when to store it and when to discharge to support the reliability of the system and integration of renewable energy. With thousands of batteries as part of our backup power systems at our datacenters, this pilot has the potential to rapidly scale storage solutions, allowing datacenters to smooth out the unpredictability of wind and solar.

  • New grantees and results from our AI for Earth program: Since we first introduced this grant, training and innovation program last year, we’ve experienced 200 percent growth. We are now supporting 137 grantees in more than 40 countries around the world, as well as doubling the number of larger featured projects we support. We’ve seen early results, too: our work now lets many people outside the grant program benefit, processing more than 10 trillion pixels in ten minutes for less than $50.
  • New LinkedIn online training module for sustainability, the Sustainable Learning Path: LinkedIn is providing new training courses to enable people everywhere to learn and gain job skills to participate in the clean energy economy and low-carbon future. The Sustainable Learning Path offers six hours of expert-created content; initial courses include an overview of sustainability strategies and introductions to LEED credentials and sustainable design. All six courses are unlocked until the end of October, in celebration of the Global Climate Action Summit, and can be accessed here.

While these are just the first proof points of the potential of technology to accelerate the pace of change beyond our four walls, they build on decades of sustainability progress within our operations. These include operating 100 percent carbon neutral since 2012, purchasing more than 1 gigawatt of renewable energy on three continents, committing to reduce our operational carbon footprint by 75 percent by 2030, and a host of other initiatives. As meaningful as this operational progress is, we know it’s not enough. As a global technology company, we have a responsibility and a tremendous opportunity to help change the course of our planet.

As we look to the future, we’ll realize this opportunity in a few ways. We will use our operations as a test bed for innovation and share new insights about what works. We will work with our customers and suppliers to drive efficiencies that lead to tangible carbon reductions. We will continue to increase access to cloud and AI tools, especially among climate researchers and conservation groups, and work together to develop new tools that can be deployed by others in the field.

We are not naïve. Technology is not a panacea. Time and resources are short, and the task immense. But we refuse to believe that it is insurmountable or too late to build a better future, and we are convinced that technology can play a pivotal role in enabling that progress.

That optimism is borne out of our experience, lessons learned and the drive to create a better future that is core to Microsoft. At GCAS, I will be joined by 10 Microsoft and LinkedIn sustainability leaders, who will be sharing more details about this approach and the news outlined at panel sessions throughout the week, showcasing some of our technology solutions at events we are hosting and supporting the effort with more than 50 employees volunteering their time at GCAS. We are also proud to be an official sponsor of GCAS.

You can find our Microsoft delegation at the following events during the summit, as well as many others throughout the week. And we encourage you to follow us @Microsoft_Green for a full view of our conference activities and engagements, and the official hashtags #GCAS2018 and #StepUp2018 for news of the event.

Find Microsoft at the Global Climate Action Summit — event highlights

September 11, 8:00 a.m. PT: Sustainable Food Services Panel (LinkedIn hosting)

September 12, 9:00 a.m. PT: We Are Still In Forum

September 12, 2:00 p.m. PT: “Energy, Transportation & Innovation – a Conversation with U.S. Climate Alliance Governors & Business Leaders” (Microsoft hosting)

  • Speaker: Shelley McKinley, General Manager for Technology and Civic Responsibility at Microsoft
  • Watch the livestream: https://aka.ms/CEO_Governors_Live and use #USCAxGCAS to submit questions on Twitter during the event

September 13, 9:00 a.m. PT: World Economic Forum: 4th IR for Earth

  • Speaker: Lucas Joppa, Chief Environmental Officer, Microsoft

September 13, 1:30 p.m. PT: GCAS Breakout Session – “What We Eat and How It’s Grown: Food Systems and Climate”

September 13, 3:00 p.m. PT: Meeting the Paris Goal: Strategies for Carbon Neutrality (Microsoft hosting)

  • Speaker: Elizabeth Willmott, Carbon Program Lead

September 13, 6 p.m. to 8 p.m. PT: We Are Still In Reception at Microsoft

September 14, 8:30 a.m. PT: Clean Energy in Emerging Markets (Microsoft hosting)

September 14, 11:00 a.m. PT: Climate Action Career Fair (LinkedIn hosting)

  • Speaker: Lucas Joppa, Chief Environmental Officer


Why Would Prosthetic Arms Need to See or Connect to Cloud AI?

Based on “Connected Arms”, a keynote talk at the O’Reilly AI Conference delivered by Joseph Sirosh, CTO for AI at Microsoft. Content reposted from O’Reilly Media.

There are over 1 million new amputees every year, i.e. one every 30 seconds – a truly shocking statistic.

The World Health Organization estimates that between 30 to 100 million people around the world are living with limb loss today. Unfortunately, only 5-15% of this population has access to prosthetic devices.

Although prostheses have been around since ancient times, their successful use has been severely limited for millennia by several factors, with cost being the major one. Although it is possible to get sophisticated bionic arms today, the cost of such devices runs into tens of thousands of dollars. These devices are just not widely available today. What’s more, having these devices interface satisfactorily with the human body has been a massive issue, partly due to the challenges of working with the human nervous system. Such devices generally need to be tailored to work with each individual’s nervous system, a process that often requires expensive surgery.

Is it possible for a new generation of technologies to finally help us break through these long-standing barriers?

Can prosthetic devices learn to adapt to us, as opposed to the other way around?

A Personalized Prosthetic Arm for $100?

In his talk, Joseph describes how, by combining:

  • Low-cost off-the-shelf electronics,
  • 3D-printing, and
  • Cloud AI, for intelligent, learned, personalized behavior,

it is now becoming possible to deliver prosthetic arms at a price point of around $100.

Joseph takes the smartARM as an example of such a breakthrough device. A prototype built by two undergraduate students from Canada who recently won first prize in Microsoft’s Imagine Cup, the smartARM is 3D-printed, has a camera in the palm of its hand and is connected to the cloud. The magic is in the cloud, where a computer vision service recognizes the objects seen by the camera. Deep learning algorithms then generate the precise finger movements needed to grasp the object near the arm. Essentially, the cloud vision service classifies the object and generates the right grip or action, such as a pincer action to pick up a bunch of keys on a ring, or a palmar action to pick up a wineglass. The grip itself is a learned behavior which can be trained and customized.

The user of the prosthetic arm triggers the grip (or its release) by flexing any muscle of their choice on their body, for instance, their upper arm muscle. A myoelectric sensor located in a band that is strapped over that muscle detects the signal and triggers the grip or its release.

Simple, Adaptable Architecture

The architecture of this grip classification solution is shown below. The input to the Raspberry Pi on the smartARM comes from the camera and the muscle sensor. These inputs are sent to the Azure Custom Vision Service, an API in the cloud which has been trained on grip classifications and is able to output the appropriate grip. This grip is sent back to an Arduino board in the smartARM, which then triggers the servo motors that realize the grip in the physical world as soon as the smartARM gets the signal to do so from the muscle sensor.
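As a sketch of that flow, one control-loop iteration classifies the current camera frame and actuates the servos only when the muscle sensor fires. All names, grip labels and servo angles below are hypothetical; on the real device the `classify` callable would be a request to the Azure Custom Vision Service, and the angles would be sent to the Arduino rather than returned.

```python
# Hypothetical grip labels mapped to five finger-servo angles (degrees).
GRIP_SERVO_ANGLES = {
    "pincer": [140, 140, 30, 30, 30],    # thumb + index close, others stay open
    "palmar": [120, 120, 120, 120, 120], # all fingers wrap the object
}

def grip_to_servo_angles(grip_label):
    """Map a grip label from the cloud classifier to servo angles
    (open hand as the fallback for unrecognized labels)."""
    return GRIP_SERVO_ANGLES.get(grip_label, [0, 0, 0, 0, 0])

def control_step(frame, muscle_triggered, classify):
    """One loop iteration: classify the camera frame, but actuate only
    when the myoelectric sensor reports that the user has flexed."""
    grip = classify(frame)   # stand-in for the cloud vision call
    if not muscle_triggered:
        return None          # hold position until the user triggers
    return grip_to_servo_angles(grip)
```

Separating classification from actuation in this way also mirrors the adaptability described next: swapping the muscle-sensor trigger for a speech trigger only changes how `muscle_triggered` is produced.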

This is an adaptable architecture. It can be customized to the kinds of movements you want this arm to generate. For instance, the specific individual using this prosthetic can customize the grips for the objects in their daily life that they care the most about. The muscle-sensor-based trigger could be replaced with a speech trigger, if so desired.

Summary

AI is empowering a new generation of developers to explore all sorts of novel ideas and mashups. Through his talk on “Connected Arms”, Joseph shows us how the future of prosthetic devices can be transformed by the power of the cloud and AI. Imagine a world in which all future assistive devices are empowered with AI in this fashion. Devices would adapt to individuals, rather than the other way around, becoming more affordable, intelligent, cloud-powered and personalized.

Cloud AI is letting us build unexpected things that we would scarcely have imagined.

Like an arm that can see.

The AI / ML Blog Team