
AI, Azure and the future of healthcare with Dr. Peter Lee – Microsoft Research


Episode 109 | March 4, 2020

Over the past decade, the healthcare industry has undergone a series of technological changes in an effort to modernize it and bring it into the digital world, but the call for innovation persists. One person answering that call is Dr. Peter Lee, Corporate Vice President of Microsoft Healthcare, a new organization dedicated to accelerating healthcare innovation through AI and cloud computing.

Today, Dr. Lee talks about how MSR’s advances in healthcare technology are impacting the business of Microsoft Healthcare. He also explains how promising innovations like precision medicine, conversational chatbots and Azure’s API for data interoperability may make healthcare better and more efficient in the future.



Transcript

Peter Lee: In tech industry terms, you know, if the last decade was about digitizing healthcare, the next decade is about making all that digital data good for something, and that good for something is going to depend on data flowing where it needs to flow at the right time.

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: Over the past decade, the healthcare industry has undergone a series of technological changes in an effort to modernize it and bring it into the digital world, but the call for innovation persists. One person answering that call is Dr. Peter Lee, Corporate Vice President of Microsoft Healthcare, a new organization dedicated to accelerating healthcare innovation through AI and cloud computing.

Today, Dr. Lee talks about how MSR’s advances in healthcare technology are impacting the business of Microsoft Healthcare. He also explains how promising innovations like precision medicine, conversational chatbots and Azure’s API for data interoperability may make healthcare better and more efficient in the future. That and much more on this episode of the Microsoft Research Podcast.

(music plays)

Host: Peter Lee, welcome to the podcast!

Peter Lee: Thank you. It’s great to be here.

Host: So you’re a Microsoft Corporate Vice President and head of a relatively new organization here called Microsoft Healthcare. Let’s start by situating that within the larger scope of Microsoft Research and Microsoft writ large. What is Microsoft Healthcare, why was it formed, and what do you hope to do with it?

Peter Lee: It’s such a great question because when we were first asked to take this on, it was confusing to me! Healthcare is such a gigantic business in Microsoft. You know, the number that really gets me is, Microsoft has commercial contracts with almost 169,000 healthcare organizations around the world.

Host: Wow.

Peter Lee: I mean, it’s just massive. Basically, anything from a one-nurse clinic in Nairobi, Kenya, to Kaiser Permanente or United Healthcare, and everything in between. And so it was confusing to try to understand, what is Satya Nadella thinking to ask a “research-y” organization to take this on? But, you know, the future of healthcare is so vibrant and dynamic right now, and so dependent on AI, on cloud computing, on big data, that I think he was really wanting us to think about that future.

Host: Let’s situate you.

Peter Lee: Okay.

Host: You cross a lot of boundaries, from pure to applied research, computer science to medicine. You’ve been head of Carnegie Mellon University’s computer science department, but you were also an office director at DARPA, which is the poster child for applied research. You’re an ACM Fellow and on the board of directors of the Allen Institute for AI, but you’re also a member of the National Academy of Medicine, fairly newly minted, as I understand?

Peter Lee: Right, just this year.

Host: And on the board of Kaiser Permanente’s School of Medicine. So, I’d ask you what gets you up in the morning, but it seems like you never go to bed! So instead, describe what you do for a living, Peter! How do you choose what hat to wear in the morning, and what does a typical day in your life look like?

Peter Lee: Well, you know, this was never my plan. I just love research, and thinking hard about problems, being around other smart people and thinking hard about problems, getting real depth of understanding. That’s what gets me up. But I think the world today, what’s so exciting about it for anyone with the research gene, is that research, in a variety of areas, has become so important to practical, everyday life. It’s become important to Microsoft’s business. Not just Microsoft, but all of our competitors. And so I just feel like I’m in a lucky position, as are a lot of my colleagues. I don’t think any of us started with that idea. We just wanted to do research and now we’re finding ourselves sort of in the middle of things.

Host: Right. Well, talk a little bit more about computer science and medicine. How have you moved from one to the other, and how do you kind of envision yourself in this arena?

Peter Lee: Well, my joke here is, these were changes that, actually, Satya Nadella forced me to make! And it’s a little bit of a joke because I was actually honored that he would think of me this way, but it was also painful because I was in a comfort zone just doing my own research, leading research teams, and then, you know, Satya Nadella becomes the CEO, Harry Shum comes on board to drive innovation, and I get asked to think about new ways to take research ideas and get them out into the world. And then, three years after that, I get asked to think about the same thing for healthcare. And each one of those, to my mind, is an example of this concept that Satya Nadella likes to talk about, “growth mindset.” I joke that growth mindset is actually a euphemism because each time you’re asked to make these changes, you just get this feeling of dread. You might have a minute where you’re feeling honored that someone would ask you something, but then…

Host: Oh, no! I’ve got to do it now!

Peter Lee: …and boy, I was, you know, on a roll in what I was doing before, and you do spend some time feeling sorry for yourself… but when you work through those moments, you find that you do have those periods in your life where you grow a lot. And my immersion with so many great people in healthcare over the last three or four years has been one of those big growth periods. And to be recognized, then, let’s say, by the National Academies is sort of validation of that.

Host: All right, so rewind just a little bit and talk about that space you were in just before you got into the healthcare situation. You were doing Microsoft Research. Where, on the spectrum from pure, like your Carnegie Mellon roots, to applied, like your DARPA roots, did that land? There’s an organization called NeXT here, I think, yeah?

Peter Lee: That’s right. You know, when I was in academia, academia really knows how to do research.

Host: Yeah.

Peter Lee: And they really put the creatives, the graduate students and the faculty, at the top of the pyramid, socially, in the university. It’s just a great setup. And it’s organized into departments, which are each named after a research area or a discipline and within the departments there are groups of people organized by sub-discipline or area, and so it’s an organizing principle that’s tried and true. When I went to DARPA, it was completely different. The departments aren’t organized by research area, they’re organized by mission, some easily assessable goal or objective. You can always answer the question, have we accomplished it yet or not?

Host: Right.

Peter Lee: And so research at DARPA is organized around those missions and that was a big learning experience for me. It’s not like saying we’re going to do computer vision research. We’ll be doing that for the next fifty years. It’s, can we eliminate the language barrier for all internet-connected people? That’s a mission. You can answer the question, you know, how close are we?

Host: Right.

Peter Lee: And so the mix between those two modes of research, from academia to DARPA, is something that I took with me when I joined Microsoft Research and, you know, Microsoft Research has some mix, but I thought the balance could be slightly different. And then, when Satya Nadella became the CEO and Harry Shum took over our division, they challenged me to go bigger on that idea and that’s how NeXT started. NeXT tried to organize itself by missions and it tried to take passionate people and brilliant ideas and grow them into new lines of business, new engineering capabilities for Microsoft, and along the way, create new CVPs and TFs for our company. There’s a tension here because one of the things that’s so important for great research is stability. And so when you organize things like you do in academia, and in large parts of Microsoft Research, you get that stability by having groups of people devoted to an area. We have, for example, say, computer networking research groups that are best in the world.

Host: Right.

Peter Lee: And they’ve been stable for a long time and, you know, they just create more and more knowledge and depth, and that stability is just so important. You feel like you can take big risks when you have that stability. When you are mission-oriented, like in NeXT, these missions are coming and going all the time. So that has to be managed carefully, but the other benefit of that, management-wise, is more people get a chance to step up and express their leadership. So it’s not that either model is superior to the other, but it’s good to have both. And when you’re in a company with all the resources that Microsoft has, we really should have both.

Host: Well, let’s zoom out and talk, somewhat generally, about the promise of AI because that’s where we’re going to land on some of the more specific things we’ll talk about in a bit, but Microsoft has several initiatives under a larger umbrella called AI for Good and the aim is to bring the power of AI to societal-scale problems in things like agriculture, broadband accessibility, education, environment and, of course, medicine. So AI for Health is one of these initiatives, but it’s not the same thing as Microsoft Healthcare, right?

Peter Lee: Well, the whole AI for Good program is so exciting and I’m just so proud to be in a company that makes this kind of commitment. You can think of it as a philanthropic grants program and it is, in fact, in all of these areas, providing funding and technical support to really worthy teams, passionate people, really trying to bring AI to bear for the greater good.

Host: Mm-hmm.

Peter Lee: But it’s also the case that we devote our own research resources to these things. So it’s not just giving out grants, but it’s actually getting into collaborations. What’s interesting about AI for Health is that it’s the first pillar in the AI for Good program that actually overlaps with a business at Microsoft, and that’s Microsoft Healthcare. One way that I think about it is, it’s an outlet for researchers to think about, what could AI do to advance medicine? When you talk to a lot of researchers in computer science departments, or across Microsoft research labs, increasingly you’ll see more and more of them getting interested in healthcare and medicine, and the first things that they tend to think about, if they’re new to the field, are diagnostic and therapeutic applications. Can we come up with something that will detect ovarian cancer earlier? Can we come up with new imaging techniques that will help radiologists do a better job? Those sorts of diagnostic and therapeutic applications, I think, are incredibly important for the world, but they are not Microsoft businesses. So the AI for Health program can provide an outlet for those types of research passions. And then, as a secondary element, there are also four billion people on this planet today who have no reasonable access to healthcare. AI and technology have to be part of the solution to creating that more equitable access, and so that’s another element that, again, doesn’t directly touch Microsoft’s business today in Microsoft Healthcare, but it is so important, and we have a lot to offer, so AI for Health is just, I think, an incredibly visionary and wonderful program for that.

Host: Well, let’s zoom back out… um, no, let’s zoom back in. I’ve lost track of the camera. I don’t know where it is! Let’s talk about the idea of precision medicine, or precision healthcare, and the dream of improving those diagnostic and therapeutic interventions with AI. Tell us what precision medicine is and how that plays out and how are the two rather culturally diverse fields of computer science and medicine coming together to solve for X here?

Peter Lee: Yeah, I think one of the things that is sometimes underappreciated is, over the past ten to twenty years, there’s been a massive digitization of healthcare and medicine. After the 2008 economic collapse, in 2009, there was the ARRA… there was a piece of legislation attached to that called the HITECH Act, and HITECH actually required healthcare organizations to digitize health records. And so over the past ten years, we’ve gone from something like 15% of health records being in digital form to, today, over 98% of health records being in digital form. And along with that, medical devices that measure you have gone digital, our ability to sequence and analyze your genome, your proteome, has gone digital, and now the question is, what can we do with all the digital information? And on top of that, we have social information.

Host: Yeah.

Peter Lee: People are carrying mobile devices, people talk to computers at home, people go to their Walgreens to get their flu shots.

Host: Yeah.

Peter Lee: And all of this is in digital form and so the question is, can we take all of that digital data and use it to provide highly personalized and precisely targeted diagnostics and therapeutics to people?

Host: Mm-hmm.

Peter Lee: Can we get a holistic, kind of, 360-degree view of you, specifically, of what’s going on with you right now, and what might go on over the next several years, and target your wellness? Can we advance from sick care, which is really what we have today…

Host: Right.

Peter Lee: …to healthcare.

Host: When a big tech company like Microsoft throws its hat in the healthcare ring and publicly says that it has the goal of “transforming how healthcare is experienced and delivered,” I immediately think of the word disruption, but you’ve said healthcare isn’t something you disrupt. What do you mean by that, and if disruption isn’t the goal, what is?

Peter Lee: Right. You know, healthcare is not a normal business. Worldwide, it’s actually a $7.5 trillion business. And for Microsoft, it’s incredibly important because, as we were discussing, it’s gone digital, and increasingly, that digital data, and the services and AI and computation to make good use of the data, is moving to the cloud. So it has to be something that we pay very close attention to and we have a business priority to support that.

Host: Right.

Peter Lee: But, you know, it’s not a normal business in many, many different senses. As patients, people don’t shop, at least not on price, for their healthcare. They might go on a website to look at ratings of primary care physicians, but certainly, if you’re in a car accident, you’re unconscious. You’re not shopping.

Host: No.

Peter Lee: You’re just looking for the best possible care. And similarly, there’s a massive shift for healthcare providers away from what’s called fee-for-service, and toward something called value-based care, where doctors and clinics are reimbursed based on the quality of the outcomes. What you’re trying to do is create success for those people and organizations that, let’s face it, have devoted their lives to helping people be healthier. And so it really is almost the purest expression of Microsoft’s mission of empowerment. It’s not, how do we create a disruption that allows us to make more money, but instead, you know, how do we empower people and organizations to deliver better – and receive better – healthcare? Today in the US, a primary care doctor spends almost twice as much time entering clinical documentation as they do actually taking care of patients. Some of the doctors we work with here at Microsoft call this “pajama time,” because you spend your day working with patients and then, at home, when you crawl into bed, you have to finish up your documentation. That’s a big source of burnout.

Host: Oh, yeah.

Peter Lee: And so, what can we do, using speech recognition technologies, natural language processing, diarization, to enable that clinical note-taking to be dramatically reduced? You know, how would that help doctors pay more attention to their patients? There is something called revenue-cycle management, and it’s sometimes viewed as a kind of evil way to maximize revenues in a clinic or hospital system, but it is also a place where you can really try to eliminate waste. Today, in the US market, most estimates say that about a trillion dollars every year simply goes to waste in the US healthcare system. And so these are sort of data analysis problems, in this highly complex system, that really require the kind of AI and machine learning that we develop.

Host: And those are the kinds of disruptions we’d like to see, right?

Peter Lee: That’s right. Yeah.

Host: We’ll call them successes, as you did.

Peter Lee: Well, and they are disruptions though, they’re disruptions that help today’s working doctors and nurses. They help today’s hospital administrators.

(music plays)

Host: Let’s talk about several innovations that you’ve actually made to help support the healthcare industry’s transformation. Last year – a year ago now – at the HIMSS conference, you talked about tools that would improve communication, the healthcare experience, and interoperability and data sharing in the cloud. Tell us about these innovations. What did you envision then, and now, a year later, how are they working out?

Peter Lee: Yeah. Maybe the one I like to start with is about interoperability. I sometimes have joked that it’s the least sexy topic, but it’s the one that is, I think, the most important to us. In tech industry terms, you know, if the last decade was about digitizing healthcare, the next decade is about making all that digital data good for something and that good for something is going to depend on data flowing where it needs to flow…

Host: Right.

Peter Lee: …at the right time. And doing that in a way that protects people’s privacy because health data is very, very personal. And so a fundamental issue there is interoperability. Today, while we have all this digital data, it’s really locked into thousands of different incompatible data formats. It doesn’t get exposed through modern APIs or microservices. It’s oftentimes siloed for business reasons, and so unlocking that is important. One way that we look at it here at Microsoft is, we are seeing a rising tidal wave of healthcare organizations starting to move to the cloud. Probably ten years from now, almost all healthcare organizations will be in the cloud. And so, with that historic shift that will happen only once, ever, in human history, what can we do today to ensure that we end up in a better place ten years from now than we are now? And interoperability is one of the keys there. And that’s something that’s been recognized by multiple governments. The US government, through the Centers for Medicare and Medicaid Services, has proposed new regulations that require the use of specific interoperable data standards and API frameworks. And I’m very proud that Microsoft has participated in helping endorse and guide the specific technical choices in those new rules.

Host: So what is the API that Microsoft has?

Peter Lee: So the data standard that we’ve put a lot of effort behind is something called FHIR. F-H-I-R, Fast Healthcare Interoperability Resources. And for anyone that’s used to working in the web, you can look at FHIR and you’ll see something very familiar. It’s a modern data standard, it’s extensible, because medical science is advancing all the time, and it’s highly susceptible to analysis through machine learning.

Host: Okay.

Peter Lee: And so it’s utterly modern and standardized, and I think FHIR can be a lingua franca for all healthcare data everywhere. And so, for Microsoft, we’ve integrated FHIR as a first-class data type in our cloud, in Azure.

Host: Oh, okay.

Peter Lee: We’ve enabled FHIR in Office. So the Teams application, for example, can connect to health data for doctors and nurses. And there’s integration going on into Dynamics. And so it’s a way to convert everything that we do here at Microsoft into great healthcare-capable tools. And once you have FHIR in the cloud, then you also, suddenly, unlock all of the AI tools that we have to enable all that precision medicine down the line.

Host: That’s such a Biblical reference right then! The cloud and the FHIR.

Peter Lee: You know, there are – there’s an endless supply of bad puns around FHIR. So thank you for contributing to that.

Host: Well, it makes me think about the Fyre Festival, which was spelled F-Y-R-E, which was just the biggest debacle in festival history!

Peter Lee: I should say, by the way, another thing that everyone connected to Microsoft should be proud of is, we have really been one of the chief architects for this new future. One of the most important people in the FHIR development community is Josh Mandel, who works with us here at Microsoft Healthcare, and he has the title Chief Architect, but it’s not Chief Architect for Microsoft, it’s Chief Architect for the cloud.

Host: Oh, my gosh.

Peter Lee: So he spends time talking to the folks at Google, at AWS, at Salesforce and so on.

Host: Right.

Peter Lee: Because we’re trying to bring the entire cloud ecosystem along to this new future.
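To make the interoperability discussion concrete, here is a minimal sketch of what a query against a FHIR server’s standard RESTful search API can look like in Python. The base URL below is a hypothetical placeholder, and a real deployment (for example, one provisioned through the Azure API for FHIR) would also require an OAuth 2.0 access token, which is omitted here for brevity.

```python
# Minimal sketch: querying a FHIR server's standard RESTful search API.
# The base URL is hypothetical; a real Azure API for FHIR endpoint would
# also require an Authorization header with a bearer token.
import requests

FHIR_BASE = "https://example-fhir-server.azurehealthcareapis.com"  # hypothetical endpoint

def find_patients_by_family_name(family_name: str):
    """Search for Patient resources whose family name matches."""
    response = requests.get(
        f"{FHIR_BASE}/Patient",
        params={"family": family_name},            # standard FHIR search parameter
        headers={"Accept": "application/fhir+json"},
    )
    response.raise_for_status()
    bundle = response.json()  # FHIR search results come back as a Bundle resource
    return [entry["resource"] for entry in bundle.get("entry", [])]

if __name__ == "__main__":
    for patient in find_patients_by_family_name("Smith"):
        print(patient.get("id"), patient.get("name"))
```

Because every conformant server exposes the same resource types and search parameters, the same request works unchanged whether the data sits in Azure, another cloud, or an on-premises system, which is the “lingua franca” point Lee is making.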

Host: Tell me a little bit about what role bots might play in this arena?

Peter Lee: Bots are really interesting because, how many listeners have received a lab test result and have no idea what it means? How many people have received some weird piece of paper or bill in the mail from their insurance company? It’s not just medical advice, you know, where you have a scratch in your throat and you’re worried about what you should do. That’s important too, but the idea of bots in healthcare really spans all these other things. One of the most touching, a project led by Hadas Bitran and her team, has been in the area of clinical trials. So there’s a website called clinicaltrials.gov and it contains a registry describing every registered clinical trial going on. So now, if you are desperate for more experimental care, or you’re a doctor treating someone and you’re desperate for this, you know, how do you find, out of thousands of documents, and they’re complicated…

Host: Right.

Peter Lee: …technical, medical, science things.

Host: Jargon-y.

Peter Lee: Yeah, and it’s difficult. If you go to clinicaltrials.gov and type ‘breast cancer’ into the search box, you get hundreds of results. So the cool project that Hadas and her team led was to use machine reading from Microsoft Research, out of Hoifung Poon’s team, to read all of those clinical trial documents and create a knowledge graph, and then use that knowledge graph to drive a conversational chatbot so that you can engage in a conversation. So you can say, you know, “I have breast cancer. I’m looking for a clinical trial,” and the chatbot will start to ask you questions in order to narrow down, eventually, to the one or two or three clinical trials that might be just right for you. And so this is something that we just think has a lot of potential.
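As a rough illustration of the narrowing idea Lee describes, here is a minimal Python sketch. The trial records, attributes, and questions below are hypothetical toy data, not output of the actual system, which builds its knowledge graph by machine-reading clinicaltrials.gov documents.

```python
# Toy "knowledge graph": each trial is described by a few extracted attributes.
# All records and attribute names here are hypothetical, for illustration only.
TRIALS = [
    {"id": "NCT-A", "condition": "breast cancer",  "stage": "early",      "min_age": 18},
    {"id": "NCT-B", "condition": "breast cancer",  "stage": "metastatic", "min_age": 18},
    {"id": "NCT-C", "condition": "breast cancer",  "stage": "early",      "min_age": 50},
    {"id": "NCT-D", "condition": "ovarian cancer", "stage": "early",      "min_age": 18},
]

# Questions the bot can ask, keyed by the attribute each one discriminates on.
QUESTIONS = {
    "condition": "What condition are you seeking a trial for?",
    "stage": "Is the disease early-stage or metastatic?",
    "min_age": "How old is the patient?",
}

def matching_trials(answers, trials=TRIALS):
    """Keep only the trials consistent with the answers gathered so far."""
    result = []
    for t in trials:
        if "condition" in answers and t["condition"] != answers["condition"]:
            continue
        if "stage" in answers and t["stage"] != answers["stage"]:
            continue
        if "min_age" in answers and answers["min_age"] < t["min_age"]:
            continue
        result.append(t)
    return result

def next_question(answers, trials=TRIALS):
    """Ask about an attribute that still varies across the remaining candidates."""
    remaining = matching_trials(answers, trials)
    for attr, question in QUESTIONS.items():
        if attr not in answers and len({t[attr] for t in remaining}) > 1:
            return question
    return None

if __name__ == "__main__":
    answers = {"condition": "breast cancer"}
    print(next_question(answers))                       # asks about disease stage
    answers["stage"] = "metastatic"
    print([t["id"] for t in matching_trials(answers)])  # -> ['NCT-B']
```

The same pattern scales once the attributes come from machine reading rather than a hand-written table: the bot keeps asking about whichever extracted attribute best splits the remaining candidate trials.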

Host: Yeah.

Peter Lee: And business-wise, there are more mundane, but also important, things. Just call centers. Boy, those nurses are busy. What would happen if we had a bot that would triage and tee up some of those things and really give superpowers to those call center nurses? And so it’s that type of thing that I think is very exciting about conversational tech in general. And of course, Microsoft Research and NeXT should be really proud of pioneering a lot of this bot technology.

Host: Right. So if I employed a bot to narrow down the clinical trials, could I get myself into one? Is that what you’re explaining here?

Peter Lee: Yeah, in fact, the idea here is that this would help, tremendously, the connection between prospective patients and clinical trials. It’s so important because, of the pharmaceutical companies and clinics that are setting up clinical trials, more than 50% of them fail to recruit enough participants. They just never get off the ground because they don’t get enough. The recruitment problem is so difficult.

Host: Wow.

Peter Lee: And so this is something that can really help on both ends.

Host: I didn’t even think about it from the other angle. Like, getting people in. I always just assumed, well, a clinical trial, no biggie.

Peter Lee: It’s such a sad thing that most clinical trials fail. And fail because of the recruitment problem.

Host: Huh. Well, let’s talk a little bit more about some of the really interesting projects that are going on across the labs here at Microsoft Research. So what are some of the projects, and who are some of the people, working to improve healthcare through technology research?

Peter Lee: Yeah. I think pretty much every MSR lab is doing interesting things. There’s some wonderful work going on in the Cambridge UK lab, in Chris Bishop’s lab there, in a group being led by Aditya Nori. One of the things there has been a set of projects in collaboration with Novartis, really looking at new ideas about AI-powered molecule design for cellular therapies, as well as very precise dosing of therapies for things like macular degeneration, and so these are, sort of, bringing the very best machine learning and AI researchers shoulder-to-shoulder with the best researchers and scientists at Novartis to really kind of innovate and invent the future. In the MSR India lab, Sriram Rajamani’s team has been standing up a really impressive set of technologies and projects that have to do with global access to healthcare, and this is something that I think is just incredibly, incredibly important. You know, we really could enable, through more intelligent medical devices for example, less highly trained technicians and clinicians to deliver healthcare at a distance. The other thing that is very exciting to me there is just looking at data. You know, how do we normalize data from lots of different sources?

Host: Right.

Peter Lee: And then MSR Asia in Beijing, they’ve increasingly been redirecting some of the amazing advances that that lab is famous for in computer vision to the medical imaging space. And there are just amazing possibilities in taking images that might not be high resolution enough for a precise diagnosis and using AI to, kind of, magically improve the resolution. And so just across the board, as you go from lab to lab, you just see some really inspiring work going on.

Host: Yeah, some of the researchers have been on the podcast. Antonio Criminisi with InnerEye, umm… we haven’t had Ethan Jackson from Premonition yet.

Peter Lee: No, Premonition… Well, Antonio Criminisi and the work that he led on InnerEye, you know, we actually went all the way to an FDA 510(k) approval on the tumor segmentations…

Host: Wow.

Peter Lee: …and the components of that now are going into our cloud. Really amazing stuff.

Host: Yeah.

Peter Lee: And then Premonition, this is one of these things that is, in the age of coronavirus…

Host: Right?

Peter Lee: …is very topical.

Host: I was just going to refer to that, but I thought maybe I shouldn’t…

Peter Lee: The thing that is so important is, we talked of precision medicine before…

Host: Yeah.

Peter Lee: …but there is also an emerging science of precision population health. And in fact, the National Academy of Medicine just recently codified that as an official part of medical research and it’s bringing some of the same sort of precision medicine ideas, but to population health applications and studies. And so when you look at Premonition, and the ability to look at a whole community and get a genetically precise diagnosis of what is going on in that community, it is something that could really be a game-changer, especially in an era where we are seeing more challenging infectious disease outbreaks.

Host: I think a lot of people would say, can we speed that one up a little? I want you to talk for a minute about the broader tech and healthcare ecosystem and what it takes to be a leader, both thought and otherwise, in the field. So you’ve noted that we’re in the middle of a big transformation that’s only going to happen once in history and because of that, you have a question that you ask yourself and everyone who reports to you. So what’s the question that you ask, and how does the answer impact Microsoft’s position as a leader?

Peter Lee: Right. You know, healthcare, in most parts of the world, is really facing some big challenges. It’s at a financial breaking point in almost all developed countries. The spread of access to the latest good medical practice has been slowing in the developing world, and as you, kind of, look at, you know, how to break out of these cycles, increasingly, people turn to technology. And the kind of shining beacon of hope is this mountain of digital data that’s being produced every single day, and so how can we convert that into what’s called the triple aim of better outcomes, lower costs and better experiences? So then, when you come to Microsoft, you have to wonder, well, if we’re going to try to make a contribution, how do you do it? When Satya Nadella asked us to take this on, we told ourselves a joke that he was throwing us into the middle of the Pacific Ocean and asking us to find land, because it’s such a big, complex space, you know, where do you go? And we had more jokes about this, because you start swimming for a while and you start meeting lots of other people who are just as lost, and you actually feel a little ashamed to feel good about seeing other people drowning. But fundamentally, it doesn’t help you figure out what to work on, and so we started to ask ourselves the question, if Microsoft were to disappear today, in what ways would healthcare be harmed or held back tomorrow and into the future? If our hyperscale cloud were to disappear today, in what ways would that matter to healthcare? If all of the AI capabilities that we can deploy so cheaply on that cloud were to disappear, how would that matter? And then, since we’re coming out of Microsoft Research, if Microsoft Research were to disappear today, in what ways would that matter? And asking ourselves that question has sort of helped us focus on the areas where we think we have a right to play. And I think the wonderful thing about Microsoft today is, we have a business model that makes it easy to align those things to our business priorities. And so it’s really a special time right now.

(music plays)

Host: Well, this is – not to change tone really quickly – but this is the part of the podcast where I ask what could possibly go wrong? And since we’ve actually just used a drowning-in-the-sea metaphor, it’s probably apropos… but when you bring nascent AI technologies, and I say nascent because most people have said, even though it’s been going on for a long time, we’re still in an infancy phase of these technologies. When you bring that to healthcare, and you’re literally dealing with life-and-death consequences, there’s not any margin for error. So… I realize that the answer to this question could be too long for the podcast, but I have to ask, what keeps you up at night, and how are you and your colleagues addressing potential negative consequences at the outset rather than waiting for the problems to appear downstream?

Peter Lee: That’s such an important question and it actually has multiple answers. Maybe the one that I think would be most obvious to the listeners of this podcast has to do with patient safety. Medical practice and medical science have really advanced on the idea of prospective studies and clinical validation, but that’s not how computer science, broadly speaking, works. In fact, when we’re talking about machine learning, it’s really based on retrospective studies. You know, we take data that was generated in the past and we try to extract a model through machine learning from it. And what the world has learned, in the last few years, is that those retrospective studies don’t necessarily hold up very well, prospectively. And so that gap is very dangerous. It can lead to new therapies and diagnoses that go wrong in unpredictable ways, and there’s sort of an over-exuberance on both sides. As technologists, we’re pretty confident about what we do and we see lots of problems that we can solve, and the healthcare community is sometimes dazzled by all of the magical machine learning we do, and so there can be over-confidence on both sides. That’s one thing that I worry about a lot because, you know, all over our field, not just all over Microsoft, but across all the other major tech companies and universities, there are just great technologists that are doing some wonderful things and are very well-intentioned, but aren’t necessarily validated in the right way. And so that’s something that, really, is worrisome. Going along with safety is privacy of people’s health data. And while I think most people would be glad to donate their health data for scientific progress, no one wants to be exploited. Exploited for money, or worse, you know, denied, for example, insurance.

Host: Right.

Peter Lee: And you know, these two things can really lead to outcomes, over the next decade, that could really damage our ability to make good progress in the future.

Host: So that said, we’re pretty good at identifying the problem. We may be able to start a good conversation, air quotes, on that, but this is, for me, like, what are you doing?

Peter Lee: Yeah.

Host: Because this is a huge thing, and…

Peter Lee: I really think, for real progress and real transformation, that the foundations have to be right and those foundations do start with this idea of interoperability. So the good thing is that major governments, including the US government, are seeing this and they are making very definitive moves to foster this interoperable future. And so now, our role in that is to provide the technical guidance and technologies so that that’s done in the right way. And so everything that we at Microsoft are doing around interoperability, around security, around identity management, differential privacy, all of the work that came out of Microsoft Research in confidential computing…

Host: Yeah.

Peter Lee: …all of those things are likely to be part of this future. As important as confidential computing has been as a product of Microsoft Research, it’s going to be way, way more important in this healthcare future. And so it’s really up to us to make sure that regulators and lawmakers and clinicians are aware and smart about these things. And we can provide that technical guidance.

Host: What about the other companies that you mentioned? I mean, you’re not in this alone and it’s not just companies, it’s nations, and, I dare say, rogue actors, that are skilled in this arena. How do you get, sort of, agreement and compliance?

Peter Lee: I would say that Microsoft is in a good position because it has a clear business model. If someone asks us, well, what are you going to do with our data? We have a very clear business model that says we don’t monetize your data.

Host: Right.

Peter Lee: But everyone is going to have to figure that out. Also, when you are getting into a new area like healthcare, every tech company is a big, complicated place with lots of stakeholders, lots of competing internal interests, lots of politics.

Host: Right.

Peter Lee: And so Microsoft, I think, is in a very good position that way too. We’re all operating as one Microsoft. But it’s so important that we all find ways to work together. One point of contact has been engineered by the White House in something called the Blue Button Developers Conference. So that’s where I’m literally holding hands with my counterparts at Google, at Salesforce, at Amazon, at IBM, making certain pledges there. And so the convening power of governments is pretty powerful.

Host: It’s story time. We’ve talked a little about your academic and professional life. Give us a short personal history. Where did it all start for Peter Lee and how did he end up where he is today?

Peter Lee: Oh, my.

Host: Has to be short.

Peter Lee: Well, let’s see, so uh, I’m Korean by heritage. I was born in Ohio, but Korean by heritage, and my parents immigrated from Korea. My dad was a physics professor, he’s long retired now, and my mother was a chemistry professor.

Host: Wow.

Peter Lee: And she passed away some years ago. But I guess, as an Asian kid growing up in a physical-science household, I was destined to become a scientist myself. And in fact, they never said it out loud, but I think it was a disappointment to them when I went to college to study math! And then maybe an even bigger disappointment when I went from math to computer science in grad school. Of course they’re very proud of me now.

Host: Of course! Where’d you go to school?

Peter Lee: I went to the University of Michigan. I was there as an undergrad and then I was planning to go work after that. I actually interviewed at a little, tiny company in the Pacific Northwest called Microsoft…

Host: Back then!

Peter Lee: …but I was wooed by my senior research advisor at Michigan to stay on for my PhD, and so I stayed, and then went from grad school right to Carnegie Mellon University as a professor.

Host: And then worked your way up to leading the department…

Peter Lee: Yeah. So I was there for twenty-four years. They were wonderful years. Carnegie Mellon University is just a wonderful, wonderful place. And um…

Host: It’s almost like there’s a pipeline from Microsoft Research to Carnegie Mellon. Everyone is CMU this, CMU that!

Peter Lee: Well, I remember, as an assistant professor, when Rick Rashid came to my office to tell me that he was leaving to start this thing called Microsoft Research and I was really sad and shocked by that. Now here I am!

Host: Right. Well, tell us, um, if you can, one interesting thing about you that people might not know.

Peter Lee: I don’t know if people know this or not, but I have always had an interest in cars, in fast cars. I spent some time, when I was young, racing in something called shifter karts and then later in open wheel Formula Ford, and then, when I got my first real job at Carnegie Mellon, I had enough money that I spent quite a bit of it trying to get a sponsored ride with a semi-pro team. I never managed to make it. It’s hard to kind of split being an assistant professor and trying to follow that passion. You know, I don’t do that too much anymore. Once you are married and have a child, the annoyance factor gets a little high, but it’s something that I still really love and there’s a community of people, of course, at a place like Microsoft, that’s really passionate about cars as well.

Host: As we close, Peter, I’d like you to leave our listeners with some parting advice. Many of them are computer science people who may want to apply their skills in the world of healthcare, but are not sure how to get there from here. Where, in the vast sea of technology and healthcare research possibilities, should emerging researchers set their sights and where should they begin their swim?

Peter Lee: You know, I think it’s all about data and how to make something good out of data. And today, especially, you know, we are in that big sea of data silos. Every one of them has different formats, different rules, and most of them don’t have modern APIs. And so things that can help evolve that system to a true ocean of data, I think anything to that extent will be great. And it is not just tinkering around with interfaces. It’s actually AI. To, say, normalize the schemas of two different data sets, intelligently, is something that we will need to do using the latest machine learning, the latest program synthesis, the latest data science techniques that we have on offer.

Host: Who do you want on your team in the coming years?

Peter Lee: The thing that I think I find so exciting about great researchers today is their intellectual flexibility to start looking at an idea and getting more and more depth of understanding, but then evolve as a person to understanding, you know, what is the value of this in the world, and understanding that that is a competitive world. And so, how willing are you to compete in that competitive marketplace to make the best stuff? And that evolution that we are seeing over and over again with people out of Microsoft Research is just incredibly exciting. When you see someone like a Galen Hunt or a Doug Burger or a Lili Cheng come out of Microsoft Research and then evolve into these world leaders in their respective fields, not just in research, but spanning research to really competing in a highly competitive marketplace, that is the future.

Host: Peter Lee, thank you for joining us on the podcast today. It’s been an absolute delight.

Peter Lee: Thank you for having me. It’s been fun.

(music plays)

To learn more about Dr. Peter Lee and how Microsoft is working to empower healthcare professionals around the world, visit Microsoft.com/research


Everything We Announced at X019 – Xbox Wire

X019 kicked off with the biggest episode of Inside Xbox ever, celebrating all things Xbox with an incredible slate of news, including over ten Xbox Game Studios games on stage, three brand-new games revealed from Xbox Game Studios, and four more world premiere titles from developers around the world.

Inside Xbox was also home to big news from Xbox Game Pass (with 21 titles from developers in the ID@Xbox program coming day one with Xbox Game Pass) and Project xCloud, as well as the reveal of great new Black Friday deals for the holiday season. Read on for a full recap of the news from the show.

Obsidian Tackles the Survival Genre with Grounded

During the show, Obsidian debuted a trailer introducing Grounded – a brand-new Xbox Game Preview title launching in Spring 2020 with Xbox Game Pass. Studio Design Director Josh Sawyer and Senior Programmer Roby Atadero joined us to share the inspiration behind Grounded’s development before Obsidian became part of Xbox Game Studios. We also received insight into how Obsidian is injecting narrative and RPG elements into the survival genre with this title. Obsidian closed by highlighting the importance of community involvement for the ongoing development of Grounded. Grounded launches digitally on Xbox One through Xbox Game Preview and with Xbox Game Pass, on the Microsoft Store, and on Steam this upcoming spring. For full details, check out our Grounded announcement post.

Project xCloud Coming to Windows 10 PCs and New Markets

We announced that more than 50 new titles from over 25 of our valued partners will join the Project xCloud public preview, such as Madden NFL 20, Devil May Cry 5, and Tekken 7. In 2020, we’ll bring Project xCloud to Windows 10 PCs, and we’re collaborating with a broad set of partners to make game streaming available on other devices as well. We’re also expanding support to more Bluetooth controllers, including the DualShock 4 wireless controller and game pads from Razer. Moreover, in 2020, we are expanding the Project xCloud preview to new markets, will enable gamers to stream Xbox games that they already own or will purchase, and will add game streaming from the cloud to Xbox Game Pass. For all of our Project xCloud announcements, take a look at our X019 Project xCloud post.

Rare announces its next New IP Everwild

Executive Producer Louise O’Connor appeared on Inside Xbox and shared a glimpse of Everwild, the next new IP from Rare. The team is passionate about creating something special, and this first reveal is just a taste of the magical, natural world the team at Rare is bringing to life. There will be more to share at a later date; for now, to watch the trailer again, head over to www.xbox.com/games/everwild or take a look at our full Everwild announcement.

The Seabound Soul and Fire Come to Sea of Thieves

On Inside Xbox, Sea of Thieves debuted details for the game’s latest FREE monthly update, “The Seabound Soul,” arriving on Nov. 20. This update features an all-new, lore-focused Tall Tale quest for players looking to play their part in a thrilling story. Join Captain Pendragon to uncover the mystery of the feared ship, the Ashen Dragon, and reveal the secrets of a sinister new threat. This update also brings the heat to Sea of Thieves with the introduction of firebombs! This explosive ammo can be thrown or loaded into cannons to put a fiery panic in the hearts of opposing crews in both Adventure and The Arena. To find out more, go to www.seaofthieves.com.

Save Big on Xbox Consoles, Games, and More this Black Friday

The stage is set for big savings this holiday season. From the hallowed halls of X019 came the reveal of Black Friday deals on Xbox hardware: the Xbox One X, Xbox One S, and Xbox One S All-Digital Edition consoles are all up to $150 off! You can also get three months of Xbox Game Pass Ultimate for just $1 and get Xbox Game Studios titles for up to 50% off. Finally, select Xbox Wireless Controllers are up to $20 off and we’re offering $10 off Xbox Design Lab Controllers. For complete details, head on over to our Black Friday announcement post.

Xbox Game Pass at X019: Announcing New Games and the Ultimate Holiday Offer

On today’s Inside Xbox, we unveiled over 50 games coming to Xbox Game Pass. Read more here for complete details on these titles and their availability with Xbox Game Pass. We also revealed the Xbox Game Pass holiday offer: starting November 14, get 3 months of Xbox Game Pass Ultimate for just $1. And for the first time, eligible Xbox Game Pass Ultimate members in select markets will also get 1 month of EA Access, 3 months of Discord Nitro, and 6 months of Spotify Premium as part of their member benefits. For all of our Xbox Game Pass news, take a look at our X019 Xbox Game Pass post.

Peeling Back the Layers of Bleeding Edge

On Inside Xbox, Creative Director Rahni Tucker revealed a new trailer for Bleeding Edge, along with the March 24, 2020 launch date and pre-order availability. Pre-order incentives include Closed Beta access on February 14, 2020 and the Punk Pack of bonus in-game cosmetics. Bleeding Edge will be available with Xbox Game Pass. For complete details, check out our X019 Bleeding Edge blog post.

DONTNOD Narrative in Tell Me Why

On Inside Xbox, Xbox Game Studios revealed Tell Me Why, an exclusive new narrative adventure from DONTNOD Entertainment, the studio behind the beloved Life is Strange franchise. All of Tell Me Why’s gripping chapters will release in Summer 2020. For more information, please visit www.tellmewhygame.com and take a look at our Tell Me Why announcement post.

Halo: Reach Coming to Halo: The Master Chief Collection on December 3

On Inside Xbox, 343 Industries Community Director Brian Jarrard appeared onstage to announce that Halo: Reach lands in Halo: The Master Chief Collection on December 3. Along with sharing a stunning, all-new launch trailer, Brian announced that all six titles in Halo: The Master Chief Collection can be purchased up-front on PC for $39.99, with Halo: Reach available for individual purchase for $9.99. Halo: The Master Chief Collection is available with Xbox Game Pass. On console, players can get the updated Halo: The Master Chief Collection bundle, which includes the Halo 3: ODST campaign and Halo: Reach, for the same price. What’s more, Brian announced that players can get started today by pre-installing on Xbox Game Pass or pre-ordering on either the Microsoft Store or Steam. To find out more, go to xbox.com/halo.

The Sky’s the Limit in New Footage from Microsoft Flight Simulator

Today, we shared an all-new trailer for the upcoming Microsoft Flight Simulator with new footage of the realistic world, authentic aircraft and real-time dynamic weather in this next generation simulator being developed in partnership with Asobo Studios. We’ve also just announced our first wave of aircraft manufacturing partners! For more details and to keep up with all the latest news on Microsoft Flight Simulator, visit https://www.flightsimulator.com or head on over to our Flight Simulator partnership announcement post.

The Future of Strategy is Revealed in the First Gameplay Trailer for Age of Empires IV

Today, the newly named studio World’s Edge revealed the first-ever gameplay trailer for their highly anticipated upcoming game, Age of Empires IV, which included a special visit from studio head Shannon Loftis for an in-depth discussion about the future of the franchise. Shannon also introduced a brand-new launch trailer to celebrate the release of Age of Empires II: Definitive Edition, which includes remastered 4K graphics and a new expansion, The Last Khans. You can play Age of Empires II: Definitive Edition starting today on Xbox Game Pass, Windows 10, and Steam. Want to learn more? Take a look at our X019 Age of Empires update post.

Taking a Deeper Look at Minecraft Dungeons

Minecraft players were pumped to see Minecraft Dungeons Executive Producer David Nisshagen live on our stage to show us the all-new game. He talked through some key facets of never-before-seen gameplay as Game Director Måns Olson and Senior Producer Nathan Rose did a live demo of this new and exciting experience!

Minecraft Earth Celebrates Early Access with Pop Up Events

In celebration of the game’s continued early access rollout, we unveiled “Mobs at the Park” at X019 – one-of-a-kind, life-sized statues of interactive mobs popping up in New York City, London and Sydney starting on Saturday, November 16. The statues are life-sized creations of the Muddy Pig, the Moobloom and the all-new, amazingly festive Jolly Llama, and will feature a scannable QR code to play an exclusive new adventure built by the Minecraft Earth development team, allowing players to receive exclusive access to the Jolly Llama in-game before it’s globally available in December. The pop-ups will be live in three locations around the world – Hudson Yards in New York City, The Queen’s Walk in London and Campbell’s Cove in Sydney – during the weekends of November 16-17, November 23-24 and November 30-December 1, from 10am-7pm local time. Everyone can stay tuned to Minecraft.net or @MinecraftEarth for more information.

Explore Wasteland 3’s Post-Apocalyptic World in May 2020

inXile Studio Head Brian Fargo appeared on Inside Xbox to debut an all-new trailer for Wasteland 3, showing us some incredible new gameplay footage and unveiling the release date of May 19, 2020. On stage, Fargo explained the origins of this ’80s-inspired post-apocalyptic world, as well as the brand-new story that will take place in the frozen wastes of Colorado. He also shared how the partnership with Xbox has helped the team make Wasteland 3 the biggest and best game in the franchise, as demonstrated by its recent Best RPG award at Gamescom. You can pre-order Wasteland 3 today on Xbox One or Windows 10. Wasteland 3 will also be available on console and PC on day one with Xbox Game Pass.

Get Blown Away by the First Glimpse of CrossfireX Gameplay

Inside Xbox host @BennyCentral took us on a global journey from Seoul to Finland to Seattle to meet the teams working on CrossfireX, followed by the debut of a gameplay teaser trailer that gives fans just a taste of this massive FPS coming exclusively to Xbox in 2020. Stay tuned to xbox.com/CrossfireX for all the latest game news.

Catch the Drift in KartRider: Drift on Xbox One

If you’re looking for the ultimate challenge, DS Choi from the KartRider: Drift team unveiled what you’re looking for with their latest game. Pro racers and league champions Hojun Moon and Insoo Park demonstrated this racing experience by taking us through outrageous conditions at breakneck speeds, with customizations to boot, proving this game will challenge even the most road-tested when it comes to Xbox One.

Final Fantasy Series Joins Xbox Game Pass and Kingdom Hearts Classics Coming to Xbox One

Square Enix Developers Shinji Hashimoto and Ichiro Hazama appeared on Inside Xbox to announce that the Final Fantasy series on Xbox, including Final Fantasy VII, Final Fantasy VIII Remastered, Final Fantasy IX, Final Fantasy X/X-2 HD Remaster, Final Fantasy XII The Zodiac Age, Final Fantasy XIII, Final Fantasy XIII-2, Lightning Returns: Final Fantasy XIII, and Final Fantasy XV, will be coming to Xbox Game Pass on Console and PC starting in 2020. Also, Kingdom Hearts HD 1.5+2.5 Remix and Kingdom Hearts HD 2.8 Final Chapter Prologue will come to Xbox One in 2020. Finally, a new Kingdom Hearts 3 demo is available starting today, November 14 on Xbox One – click here to download it now.

We Feel the Force with an Exclusive Look at Star Wars Jedi: Fallen Order

Earlier today, Electronic Arts and Xbox debuted the co-created Become a Jedi video to kick off the X019 broadcast of Inside Xbox, as an invitation for all gamers to Jump In and indulge their ultimate Jedi fantasy. Following that, Respawn Game Director Stig Asmussen provided an exclusive look at new gameplay leading into the game’s launch on November 15.

Enter the Criminal Underworld of the Critically Acclaimed Yakuza Series on Xbox One

Today on Inside Xbox, Sega announced that Yakuza is coming soon to Xbox Game Pass. In a new trailer debuted today on the show, Sega and Ryu Ga Gotoku Studio’s cult classics Yakuza 0, Yakuza Kiwami 1, and Yakuza Kiwami 2 were revealed to be coming to Xbox for the first time. If you want to learn more, head over to our X019 Yakuza announcement post.

Strap in for a New Look at Planet Coaster on Xbox One

Thrills, chills, and popcorn spills! Planet Coaster showcased its latest gameplay trailer for the fans at X019.

Play Around in Purgatory in West of Dead

Developer Raw Fury announced West of Dead, which lets you step into the boots of the dead man William Mason (voiced by Ron Perlman) and descend into the grim and gritty world of Purgatory in this fast-paced twin-stick shooter that’ll put your skills to the test. Dodge behind cover as you try to outgun your enemies in the unknown, procedurally generated hunting grounds. The Wild West has never been this dark. Sign up for the Open Beta today at westofdead.com.

Build a Village in a Twisted Mirrored Universe in Drake Hollow

The Molasses Flood announced their next game, Drake Hollow, a cooperative action village-building game set in the Hollow – a blighted mirror of our world – in which you build and defend villages of Drakes, the local vegetable folk. Either solo or with friends, you explore a procedurally generated world of islands trapped in poisonous aether. You gather supplies, build networks to bring them back to your camp, find and rescue Drakes in the wilderness, raise them, and defend them from attacks by a menagerie of feral beasts. You need to balance your time carefully: which supplies are most crucial at any given moment, and what does your village need more, resources to keep the Drakes thriving or defenses to help protect them while you explore?

Experience a Supernatural Crisis in the Upcoming Anthology Series Last Stop

Last Stop is the new game from Variable State, creators of the award-winning Virginia, and will be published by Annapurna Interactive. The game is a single-player third-person adventure set in present-day London, where you play as three separate characters whose worlds collide during a supernatural crisis. Last Stop is an anthology connecting three stories in one, centered on secret lives, the ties that bind, and how magic can be found in the mundane.

Embark on a Multidimensional Journey in The Artful Escape

A fresh, new peek of The Artful Escape was also unveiled during Inside Xbox. Developed by Beethoven & Dinosaur and published by Annapurna Interactive, The Artful Escape tells the story of Francis Vendetti as he embarks on a multidimensional journey to create his stage persona. The Artful Escape is an action-adventure, narrative driven, musical-laser-light-battle kind of videogame with dazzling visuals, killer music, and a soulful message about what it means to find your true self.

We hope you enjoyed the show, and we’ll see you next time!


Machine reading comprehension with Dr. T.J. Hazen

Dr. TJ Hazen

Episode 86, August 21, 2019

The ability to read and understand unstructured text, and then answer questions about it, is a common skill among literate humans. But for machines? Not so much. At least not yet! And not if Dr. T.J. Hazen, Senior Principal Research Manager in the Engineering and Applied Research group at MSR Montreal, has a say. He’s spent much of his career working on machine speech and language understanding, and particularly, of late, machine reading comprehension, or MRC.

On today’s podcast, Dr. Hazen talks about why reading comprehension is so hard for machines, gives us an inside look at the technical approaches applied researchers and their engineering colleagues are using to tackle the problem, and shares the story of how an a-ha moment with a Rubik’s Cube inspired a career in computer science and a quest to teach computers to answer complex, text-based questions in the real world.

Related:


Transcript

T.J. Hazen: Most of the questions are fact-based questions like, who did something, or when did something happen? And most of the answers are fairly easy to find. So, you know, doing as well as a human on a task is fantastic, but it only gets you part of the way there. What happened is, after this was announced that Microsoft had this great achievement in machine reading comprehension, lots of customers started coming to Microsoft saying, how can we have that for our company? And this is where we’re focused right now. How can we make this technology work for real problems that our enterprise customers are bringing in?

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: The ability to read and understand unstructured text, and then answer questions about it, is a common skill among literate humans. But for machines? Not so much. At least not yet! And not if Dr. T.J. Hazen, Senior Principal Research Manager in the Engineering and Applied Research group at MSR Montreal, has a say. He’s spent much of his career working on machine speech and language understanding, and particularly, of late, machine reading comprehension, or MRC.

On today’s podcast, Dr. Hazen talks about why reading comprehension is so hard for machines, gives us an inside look at the technical approaches applied researchers and their engineering colleagues are using to tackle the problem, and shares the story of how an a-ha moment with a Rubik’s Cube inspired a career in computer science and a quest to teach computers to answer complex, text-based questions in the real world. That and much more on this episode of the Microsoft Research Podcast.

(music plays)

Host: T.J. Hazen, welcome to the podcast!

T.J. Hazen: Thanks for having me.

Host: Researchers like to situate their research, and I like to situate my researchers so let’s get you situated. You are a Senior Principal Research Manager in the Engineering and Applied Research group at Microsoft Research in Montreal. Tell us what you do there. What are the big questions you’re asking, what are the big problems you’re trying to solve, what gets you up in the morning?

T.J. Hazen: Well, I’ve spent my whole career working in speech and language understanding, and I think the primary goal of everything I do is to try to be able to answer questions. So, people have questions and we’d like the computer to be able to provide answers. So that’s sort of the high-level goal, how do we go about answering questions? Now, answers can come from many places.

Host: Right.

T.J. Hazen: A lot of the systems that you’re probably aware of like Siri for example, or Cortana or Bing or Google, any of them…

Host: Right.

T.J. Hazen: …the answers typically come from structured places, databases that contain information, and for years these models have been built in a very domain-specific way. If you want to know the weather, somebody built a system to tell you about the weather.

Host: Right.

T.J. Hazen: And somebody else might build a system to tell you about the age of your favorite celebrity and somebody else might have written a system to tell you about the sports scores, and each of them can be built to handle that very specific case. But that limits the range of questions you can ask because you have to curate all this data, you have to put it into structured form. And right now, what we’re worried about is, how can you answer questions more generally, about anything? And the internet is a wealth of information. The internet has got tons and tons of documents on every topic, you know, in addition to the obvious ones like Wikipedia. If you go into any enterprise domain, you’ve got manuals about how their operation works. You’ve got policy documents. You’ve got financial reports. And it’s not typical that all this information is going to be curated by somebody. It’s just sitting there in text. So how can we answer any question about anything that’s sitting in text? We don’t have a million or five million or ten million librarians doing this for us…

Host: Right.

T.J. Hazen: …uhm, but the information is there, and we need a way to get at it.

Host: Is that what you are working on?

T.J. Hazen: Yes, that’s exactly what we’re working on. I think one of the difficulties with today’s systems is, they seem really smart…

Host: Right?

T.J. Hazen: Sometimes. Sometimes they give you fantastically accurate answers. But then you can just ask a slightly different question and it can fall on its face.

Host: Right.

T.J. Hazen: That’s the real gap between what the models currently do, which is, you know, really good pattern matching some of the time, versus something that can actually understand what your question is and know when the answer that it’s giving you is correct.

Host: Let’s talk a bit about your group, which, out of Montreal, is Engineering and Applied Research. And that’s an interesting umbrella at Microsoft Research. You’re technically doing fundamental research, but your focus is a little different from some of your pure research peers. How would you differentiate what you do from others in your field?

T.J. Hazen: Well, I think there’s two aspects to this. The first is that the lab up in Montreal was created as an offshoot of an acquisition. Microsoft bought Maluuba, which was a startup that was doing really incredible deep learning research, but at the same time they were a startup and they needed to make money. So, they also had this very talented engineering team in place to be able to take the research that they were doing in deep learning and apply it to problems where it could go into products for customers.

Host: Right.

T.J. Hazen: When you think about that need that they had to actually build something, you could see why they had a strong engineering team.

Host: Yeah.

T.J. Hazen: Now, when I joined, I wasn’t with them when they were a startup, I actually joined them from Azure where I was working with outside customers in the Azure Data Science Solution team, and I observed lots of problems that our customers have. And when I saw this new team that we had acquired and we had turned into a research lab in Montreal, I said I really want to be involved because they have exactly the type of technology that can solve customer problems and they have this engineering team in place that can actually deliver on turning from a concept into something real.

Host: Right.

T.J. Hazen: So, I joined, and I had this agreement with my manager that we would focus on real problems. They were now part of the research environment at Microsoft, but I said that doesn’t restrict us on thinking about blue sky, far-afield research. We can go and talk to product teams and say what are the real problems that are hindering your products, you know, what are the difficulties you have in actually making something real? And we could focus our research to try to solve those difficult problems. And if we’re successful, then we have an immediate product that could be beneficial.

Host: Well in any case, you’re swimming someplace in a “we could do this immediately” but you have permission to take longer, or is there a mandate, as you live in this engineering and applied research group?

T.J. Hazen: I think there’s a mandate to solve hard problems. I think that’s the mandate of research. If it wasn’t a hard problem, then somebody…

Host: …would already have a product.

T.J. Hazen: …in the product team would already have a solution, right? So, we do want to tackle hard problems. But we also want to tackle real problems. That’s, at least, the focus of our team. And there’s plenty of people doing blue sky research and that’s an absolute need as well. You know, we can’t just be thinking one or two years ahead. Research should also be thinking five, ten, fifteen years ahead.

Host: So, there’s a whole spectrum there.

T.J. Hazen: So, there’s a spectrum. But there is a real need, I think, to fill that gap between taking an idea that works well in a lab and turning it into something that works well in practice for a real problem. And that’s the key. And many of the problems that have been solved by Microsoft have not just been blue sky ideas, but they’ve come from this problem space where a real product says, ahh, we’re struggling with this. So, it could be anything. It can be, like, how does Bing efficiently rank documents over billions of documents? You don’t just solve that problem by thinking about it, you have to get dirty with the data, you have to understand what the real issues are. So, many of these research problems that we’re focusing on, and we’re focusing on, how do you answer questions out of documents when the questions could be arbitrary, and on any topic? And you’ve probably experienced this, if you are going into a search site for your company, that company typically doesn’t have the advantage of having a big Bing infrastructure behind it that’s collecting all this data and doing sophisticated machine learning. Sometimes it’s really hard to find an answer to your question. And, you know, the tricks that people use can be creative and inventive but oftentimes, trying to figure out what the right keywords are to get you to an answer is not the right thing.

Host: You work closely with engineers on the path from research to product. So how does your daily proximity to the people that reify your ideas as a researcher impact the way you view, and do, your work as a researcher?

T.J. Hazen: Well, I think when you’re working in this applied research and engineering space, as opposed to a pure research space, it really forces you to think about the practical implications of what you’re building. How easy is it going to be for somebody else to use this? Is it efficient? Is it going to run at scale? All of these problems are problems that engineers care a lot about. And sometimes researchers just say, let me solve the problem first and everything else is just engineering. If you say that to an engineer, they’ll be very frustrated because you don’t want to bring something to an engineer that runs ten times slower than it needs to, or uses ten times more memory. So, when you’re in close proximity to engineers, you’re thinking about these problems as you are developing your methods.

Host: Interesting, because those two things, I mean, you could come up with a great idea that would do it and you pay a performance penalty in spades, right?

T.J. Hazen: Yeah, yeah. So, sometimes it’s necessary. Sometimes you don’t know how to do it and you just say let me find a solution that works and then you spend ten years actually trying to figure out how to make it work in a real product.

Host: Right.

T.J. Hazen: And I’d rather not spend that time. I’d rather think about, you know, how can I solve something and have it be effective as soon as possible?

(music plays)

Host: Let’s talk about human language technologies. They’ve been referred to by some of your colleagues as “the crown jewel of AI.” Speech and language comprehension is still a really hard problem. Give us a lay of the land, both in the field in general and at Microsoft Research specifically. What’s hope and what’s hype, and what are the common misconceptions that run alongside the remarkable strides you actually are making?

T.J. Hazen: I think that word we mentioned already: understand. That’s really the key of it. Or comprehend is another way to say it. What we’ve developed doesn’t really understand, at least when we’re talking about general purpose AI. So, the deep learning mechanisms that people are working on right now that can learn really sophisticated things from examples. They do an incredible job of learning specific tasks, but they really don’t understand what they’re learning.

Host: Right.

T.J. Hazen: So, they can discover complex patterns that can associate things. So in the vision domain, you know, if you’re trying to identify objects, and then you go in and see what the deep learning algorithm has learned, it might have learned features that are like, uh, you know, if you’re trying to identify a dog, it learns features that would say, oh, this is part of a leg, or this is part of an ear, or this is part of the nose, or this is the tail. It doesn’t know what these things are, but it knows they all go together. And the combination of them will make a dog. And it doesn’t know what a dog is either. But the idea that you could just feed data in and you give it some labels, and it figures everything else out about how to associate that label with that, that’s really impressive learning, okay? But it’s not understanding. It’s just really sophisticated pattern-matching. And the same is true in language. We’ve gotten to the point where we can answer general-purpose questions and it can go and find the answer out of a piece of text, and it can do it really well in some cases, and like, some of the examples we’ll give it, we’ll give it “who” questions and it learns that “who” questions should contain proper names or names of organizations. And “when” questions should express concepts of time. It doesn’t know anything about what time is, but it’s figured out the patterns about, how can I relate a question like “when” to an answer that contains time expression? And that’s all done automatically. There’s no features that somebody sits down and says, oh, this is a month and a month means this, and this is a year, and a year means this. And a month is a part of a year. Expert AI systems of the past would do this. They would create ontologies and they would describe things about how things are related to each other and they would write rules. And within limited domains, they would work really, really well if you stayed within a nice, tightly constrained part of that domain. But as soon as you went out and asked something else, it would fall on its face. And so, we can’t really generalize that way efficiently. If we want computers to be able to learn arbitrarily, we can’t have a human behind the scene creating an ontology for everything. That’s the difference between understanding and crafting relationships and hierarchies versus learning from scratch. We’ve gotten to the point now where the algorithms can learn all these sophisticated things, but they really don’t understand the relationships the way that humans understand it.

Host: Go back to the, sort of, the lay of the land, and how I sharpened that by saying, what’s hope and what’s hype? Could you give us a “TBH” answer?

T.J. Hazen: Well, what’s hope is that we can actually find reasonable answers to an extremely wide range of questions. What’s hype is that the computer will actually understand, at some deep and meaningful level, what this answer actually means. I do think that we’re going to grow our understanding of algorithms and we’re going to figure out ways that we can build algorithms that could learn more about relationships and learn more about reasoning, learn more about common sense, but right now, they’re just not at that level of sophistication yet.

Host: All right. Well let’s do the podcast version of your NERD Lunch and Learn. Tell us what you are working on in machine reading comprehension, or MRC, and what contributions you are making to the field right now.

T.J. Hazen: You know, NERD is short for New England Research and Development Center…

Host: I did not!

T.J. Hazen: …which is where I physically work.

Host: Okay…

T.J. Hazen: Even though I work closely and am affiliated with the Montreal lab, I work out of the lab in Cambridge, Massachusetts, and NERD has a weekly Lunch and Learn where people present the work they’re doing, or the research that they’re working on, and at one of these Lunch and Learns, I gave this talk on machine reading comprehension. Machine reading comprehension, in its simplest version, is being able to take a question and then being able to find the answer anywhere in some collection of text. As we’ve already mentioned, it’s not really “comprehending” at this point, it’s more just very sophisticated pattern-matching. But it works really well in many circumstances. And even on tasks like the Stanford Question Answering Dataset, it’s a common competition that people have competed in, question answering, by computer, has achieved a human level of parity on that task.
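
For readers who want to see the simplest form of this in code, here is a minimal sketch of extractive question answering using the Hugging Face transformers library. The model name, question, and passage are illustrative choices, not the specific systems discussed in this episode.

```python
# Minimal sketch of extractive machine reading comprehension: given a
# question and a passage, a pretrained model predicts which span of the
# passage answers the question. Model and text are illustrative only.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context = ("The Stanford Question Answering Dataset (SQuAD) was released in "
           "2016 and contains over 100,000 question-answer pairs drawn from "
           "Wikipedia articles.")

result = qa(question="When was SQuAD released?", context=context)
print(result["answer"], result["score"])  # e.g. "2016" plus a confidence score
```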

Host: Mm-hmm.

T.J. Hazen: Okay. But that task itself is somewhat simple because most of the questions are fact-based questions like, who did something or when did something happen? And most of the answers are fairly easy to find. So, you know, doing as well as a human on a task is fantastic, but it only gets you part of the way there. What happened is, after this was announced that Microsoft had this great achievement in machine reading comprehension, lots of customers started coming to Microsoft saying, how can we have that for our company? And this is where we’re focused right now. Like, how can we make this technology work for real problems that our enterprise customers are bringing in? So, we have customers coming in saying, I want to be able to answer any question in our financial policies, or our auditing guidelines, or our operations manual. And people don’t ask “who” or “when” questions of their operations manual. They ask questions like, how do I do something? Or explain some process to me. And those answers are completely different. They tend to be longer and more complex and you don’t always, necessarily, find a short, simple answer that’s well situated in some context.

Host: Right.

T.J. Hazen: So, our focus at MSR Montreal is to take this machine reading comprehension technology and apply it into these new areas where our customers are really expressing that there’s a need.

Host: Well, let’s go a little deeper, technically, on what it takes to enable or teach machines to answer questions, and this is key, with limited data. That’s part of your equation, right?

T.J. Hazen: Right, right. So, when we go to a new task, uh, so if a company comes to us and says, oh, here’s our operations manual, they often have this expectation, because we’ve achieved human parity on some dataset, that we can answer any question out of that manual. But when we test the general-purpose models that have been trained on these other tasks on these manuals, they don’t generally work well. And these models have been trained on hundreds of thousands, if not millions, of examples, depending on what datasets you’ve been using. And it’s not reasonable to ask a company to collect that level of data in order to be able to answer questions about their operations manual. But we need something. We need some examples of what are the types of questions, because we have to understand what types of questions they ask, we need to understand the vocabulary. We’ll try to learn what we can from the manual itself. But without some examples, we don’t really understand how to answer questions in these new domains. But what we discovered through some of the techniques that are available, transfer learning is what we refer to as sort of our model adaptation, how do you learn from data in some new domain and take an existing model and make it adapt to that domain? We call that transfer learning. We can actually use transfer learning to do really well in a new domain without requiring a ton of data. So, our goal is to have it be examples like hundreds of examples, not tens of thousands of examples.

Host: How’s that working now?

T.J. Hazen: It works surprisingly well. I’m always amazed at how well these machine learning algorithms work with all the techniques that are available now. These models are very complex. When we’re talking about our question answering model, it has hundreds of millions of parameters and what you’re talking about is trying to adjust a model that is hundreds of millions of parameters with only hundreds of examples and, through a variety of different techniques where we can avoid what we call overfitting, we can allow the generalizations that are learned from all this other data to stay in place while still adapting it so it does well in this specific domain. So, yeah, I think we’re doing quite well. We’re still exploring, you know, what are the limits?

Host: Right.

T.J. Hazen: And we’re still trying to figure out how to make it work so that an outside company can easily create the dataset, put the dataset into a system, push a button. The engineering for that and the research for that is still ongoing, but I think we’re pretty close to being able to, you know, provide a solution for this type of problem.
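
As a rough illustration of the kind of adaptation described above, here is a hedged sketch of fine-tuning a pretrained extractive QA model on a single invented "operations manual" example. This is not the team's actual pipeline; the model name, example, and hyperparameters are assumptions chosen only to show the shape of the approach: start from pretrained weights, use a small learning rate, and train briefly so a few hundred examples do not cause overfitting.

```python
# Sketch (not the actual pipeline discussed here) of adapting a pretrained
# extractive QA model to a new domain with very little labeled data.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "distilbert-base-cased-distilled-squad"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# One invented "operations manual" style example; a real adaptation set
# would contain a few hundred of these.
question = "How do I reset the badge reader?"
context = ("To reset the badge reader, hold the service button for ten "
           "seconds until the light turns amber, then release it.")
answer_text = "hold the service button for ten seconds"
answer_char_start = context.index(answer_text)
answer_char_end = answer_char_start + len(answer_text)

enc = tokenizer(question, context, return_offsets_mapping=True,
                truncation=True, return_tensors="pt")
offsets = enc.pop("offset_mapping")[0].tolist()
seq_ids = enc.sequence_ids(0)

# Map the character span of the answer onto token start/end positions.
token_start = token_end = None
for i, (s, e) in enumerate(offsets):
    if seq_ids[i] != 1:               # only context tokens, not question tokens
        continue
    if s <= answer_char_start < e:
        token_start = i
    if s < answer_char_end <= e:
        token_end = i

# Gentle adaptation: small learning rate and very few steps, keeping the
# pretrained weights mostly intact so the tiny dataset is not overfit.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
for step in range(3):                 # in practice: a few passes over ~hundreds of examples
    outputs = model(**enc,
                    start_positions=torch.tensor([token_start]),
                    end_positions=torch.tensor([token_end]))
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss {outputs.loss.item():.3f}")
```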

Host: All right. Well I’m going to push in technically because to me, it seems like that would be super hard for a machine. We keep referring to these techniques… Do we have to sign an NDA, as listeners?

T.J. Hazen: No, no. I can explain stuff that’s out…

Host: Yeah, do!

T.J. Hazen: … in the public domain. So, there are two common underlying technical components that make this work. One is called word embeddings and the other is called attention. Word embeddings are a mechanism where it learns how to take words or phrases and express them in what we call vector space.

Host: Okay.

T.J. Hazen: So, it turns them into a collection of numbers. And it does this by figuring out what types of words are similar to each other based on the context that they appear in, and then placing them together in this vector space, so they’re nearby each other. So, we would learn, that let’s say, city names are all similar because they appear in similar contexts. And so, therefore, Boston and New York and Montreal, they should all be close together in this vector space.

Host: Right.

T.J. Hazen: And blue and red and yellow should be close together. And then advances were made to figure this out in context. So that was the next step, because some words have multiple meanings.

Host: Right.

T.J. Hazen: So, you know, if you have a word like apple, sometimes it refers to a fruit and it should be near orange and banana, but sometimes it refers to the company and it should be near Microsoft and Google. So, we’ve developed context dependent ones, so that says, based on the context, I’ll place this word into this vector space so it’s close to the types of things that it really represents in that context.
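
Here is a toy illustration of the vector-space idea. The three-dimensional vectors are invented purely for illustration; real embeddings are learned from data and have hundreds of dimensions.

```python
# Toy illustration of word embeddings: words that appear in similar
# contexts end up with similar vectors, so their cosine similarity is
# high. These tiny vectors are made up for illustration only.
import numpy as np

embeddings = {
    "boston":   np.array([0.9, 0.1, 0.0]),
    "montreal": np.array([0.8, 0.2, 0.1]),
    "blue":     np.array([0.1, 0.9, 0.0]),
    "red":      np.array([0.0, 0.8, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["boston"], embeddings["montreal"]))  # high: both cities
print(cosine(embeddings["boston"], embeddings["blue"]))      # low: unrelated
```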

Host: Right.

T.J. Hazen: That’s the first part. And you can learn these word embeddings from massive amounts of data. So, we start off with a model that’s learned on far more data than we actually have question and answer data for. The second part is called attention and that’s how you associate things together. And it’s the attention mechanisms that learn things like a word like “who” has to attend to words like person names or company names. And a word like “when” has to attend to…

Host: Time.

T.J. Hazen: …time. And those associations are learned through this attention mechanism. And again, we can actually learn on a lot of associations between things just from looking at raw text without actually having it annotated.
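
For the curious, this is roughly what the underlying computation looks like: a minimal scaled dot-product attention sketch in which each query position takes a weighted average of value vectors, weighted by how well the query matches each key. The shapes and random numbers are illustrative only.

```python
# Minimal scaled dot-product attention: each query attends to every key,
# and the resulting weights mix the value vectors together.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how strongly each query attends to each key
    weights = softmax(scores, axis=-1)   # normalized attention weights
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # e.g. representations of the question tokens
K = rng.normal(size=(5, 4))   # representations of the passage tokens
V = rng.normal(size=(5, 4))

output, weights = attention(Q, K, V)
print(weights.round(2))   # each row sums to 1: where each query "looks"
```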

Host: Mm-hmm.

T.J. Hazen: Once we’ve learned all that, we have a base, and that base tells us a lot about how language works. And then we just have to have it focus on the task, okay? So, depending on the task, we might have a small amount of data and we feed in examples in that small amount, but it takes advantage of all the stuff that it’s learned about language from all these, you know, rich data that’s out there on the web. And so that’s how it can learn these associations even if you don’t give it examples in your domain, but it’s learned a lot of these associations from all the raw data.

Host: Right.

T.J. Hazen: And so, that’s the base, right? You’ve got this base of all this raw data and then you train a task-specific thing, like a question answering system, but even then, what we find is that, if we train a question answering system on basic facts, it doesn’t always work well when you go to operation manuals or other things. So, then we have to have it adapt.

Host: Sure.

T.J. Hazen: But, like I said, that base is very helpful because it’s already learned a lot of characteristics of language just by observing massive amounts of text.

(music plays)

Host: I’d like you to predict the future. No pressure. What’s on the horizon for machine reading comprehension research? What are the big challenges that lie ahead? I mean, we’ve sort of laid the land out on what we’re doing now. What next?

T.J. Hazen: Yeah. Well certainly, more complex questions. What we’ve been talking about so far is still fairly simple in the sense that you have a question, and we try to find passages of text that answer that question. But sometimes a question actually requires that you get multiple pieces of evidence from multiple places and you somehow synthesize them together. So, a simple example we call the multi-hop example. If I ask a question like, you know, where was Barack Obama’s wife born? I have to figure out first, who is Barack Obama’s wife? And then I have to figure out where she was born. And those pieces of information might be in two different places.
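
A hedged sketch of that two-hop pattern, chaining an off-the-shelf extractive QA model so the answer from the first hop becomes part of the second question. The model choice and passages are illustrative assumptions, not a description of any production system.

```python
# Two-hop question answering sketch: answer the first sub-question, then
# feed that answer into the second. Model and passages are illustrative.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

passage_1 = "Barack Obama married Michelle Obama in 1992."
passage_2 = "Michelle Obama was born in Chicago, Illinois."

hop_1 = qa(question="Who is Barack Obama's wife?", context=passage_1)["answer"]
hop_2 = qa(question=f"Where was {hop_1} born?", context=passage_2)["answer"]
print(hop_1, "->", hop_2)  # e.g. "Michelle Obama -> Chicago, Illinois"
```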

Host: Right.

T.J. Hazen: So that’s what we call a multi-hop question. And then, sometimes, we have to do some operation on the data. So, you could say, you know like, what players, you know, from one Super Bowl team also played on another Super Bowl team? Well there, what you have to do is, you have to get the list of all the players from both teams and then you have to do an intersection between them to figure out which ones are the same on both. So that’s an operation on the data…
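
Once the two rosters have been retrieved (the hard part), the remaining step really is an operation on the data rather than another text lookup. A tiny sketch, with invented names:

```python
# The final step of the Super Bowl example is a set intersection over the
# two retrieved rosters. Player names are invented for illustration.
team_a_players = {"Alice Smith", "Bob Jones", "Carlos Diaz"}
team_b_players = {"Dana White", "Bob Jones", "Carlos Diaz"}

played_on_both = team_a_players & team_b_players
print(sorted(played_on_both))  # ['Bob Jones', 'Carlos Diaz']
```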

Host: Right.

T.J. Hazen: …and you can imagine that there’s lots of questions like that where the information is there, but it’s not enough to just show the person where the information is. You also would like to go a step further and actually do the computation for that. That’s a step that we haven’t done, like, how do you actually go from mapping text to text, and saying these two things are associated, to mapping text to some sequence of operations that will actually give you an exact answer. And, you know, it can be quite difficult. I can give you a very simple example. Like, just answering a question, yes or no, out of text, is not a solved problem. Let’s say I have a question where someone says, I’m going to fly to London next week. Am I allowed to fly business class according to my policies from my company, right? We can have a system that would be really good at finding the section of the policy that says, you know, if you are a VP-level or higher and you are flying overseas, you can fly business class, otherwise, no. Okay? But, you know, if we actually want the system to answer yes or no, we have to actually figure out all the details, like okay, who’s asking the question? Are they a VP? Where are they located? Oh, they’re in New York. What does flying overseas mean??
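
To make the contrast concrete, here is what a hand-crafted yes/no rule for that travel-policy example might look like. Every detail the policy text leaves implicit has to be encoded explicitly, which is exactly why this approach does not generalize; the policy, fields, and values below are all invented.

```python
# A hand-written rule for the invented travel policy: business class is
# allowed only for VP level or above flying outside their home country.
# The grades and the crude definition of "overseas" are assumptions.
def may_fly_business(traveler, destination_country):
    is_vp_or_above = traveler["grade"] in {"VP", "SVP", "EVP"}
    is_overseas = destination_country != traveler["home_country"]  # rough proxy
    return is_vp_or_above and is_overseas

traveler = {"name": "A. Example", "grade": "VP", "home_country": "US"}
print(may_fly_business(traveler, "UK"))  # True under this toy policy
print(may_fly_business(traveler, "US"))  # False: not an overseas trip
```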

Host: Right. There are layers.

T.J. Hazen: Right. So that type of comprehension, you know, we’re not quite there yet for all types of questions. Usually these things have to be crafted by hand for specific domains. So, all of these things about how can you answer complex questions, and even simple things like common sense, like, things that we all know… Um. And so, my manager, Andrew McNamara, he was supposed to be here with us, one of his favorite examples is this concept of coffee being black. But if you spill coffee on your shirt, do you have a black stain on your shirt? No, you’ve got a brown stain on your shirt. And that’s just common knowledge. That is, you know, a common-sense thing that computers may not understand.

Host: You’re working on research, and ultimately products or product features, that make people think they can talk to their machines and that their machines can understand and talk back to them. So, is there anything you find disturbing about this? Anything that keeps you up at night? And if so, how are you dealing with it?

T.J. Hazen: Well, I’m certainly not worried about the fact that people can ask questions of the computer and the computer can give them answers. What I’m trying to get at is something that’s helpful and can help you solve tasks. In terms of the work that we do, yeah, there are actually issues that concern me. So, one of the big ones is, even if a computer can say, oh, I found a good answer for you, here’s the answer, it doesn’t know anything about whether that answer is true. If you go and ask your computer, was the Holocaust real? and it finds an article on the web that says no, the Holocaust was a hoax, do I want my computer to show that answer? No, I don’t. But…

Host: Or the moon landing…!

T.J. Hazen: …if all you are doing is teaching the computer about word associations, it might think that’s a perfectly reasonable answer without actually knowing that this is a horrible answer to be showing. So yeah, the moon landing, vaccinations… The easy way that people can defame people on the internet, you know, even if you ask a question that might seem like a fact-based question, you can get vast differences of opinion on this and you can get extremely biased and untrue answers. And how does a computer actually understand that some of these things are not things that we should represent as truth, right? Especially if your goal is to find a truthful answer to a question.

Host: All right. So, then what do we do about that? And by we, I mean you!

T.J. Hazen: Well, I have been working on this problem a little bit with the Bing team. And one of the things that we discovered is that if you can determine that a question is phrased in a derogatory way, that usually means the search results that you’re going to get back are probably going to be phrased in a derogatory way. So, even if we don’t understand the answer, we can just be very careful about what types of questions we actually want to answer.
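
A very rough sketch of that guardrail idea: screen how the question itself is phrased before retrieving anything, and decline to surface a single answer when it looks loaded. The keyword list stands in for whatever classifier a real system would use; all of it is illustrative.

```python
# Sketch of a question-level guardrail: if the question is phrased in a
# loaded or derogatory way, do not surface a single "answer". The keyword
# check is a stand-in for a trained classifier; terms are illustrative.
import re

LOADED_PHRASES = {"hoax", "fake", "scam", "fraud"}

def should_answer(question: str) -> bool:
    words = set(re.findall(r"[a-z]+", question.lower()))
    return not (words & LOADED_PHRASES)

def answer(question: str) -> str:
    if not should_answer(question):
        return "This question looks loaded; showing sources instead of one answer."
    return "(run the usual question-answering pipeline here)"

print(answer("When did the Apollo 11 moon landing happen?"))
print(answer("Was the moon landing a hoax?"))
```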

Host: Well, what does the world look like if you are wildly successful?

T.J. Hazen: I want the systems that we build to just make life easier for people. If you have an information task, the world is successful if you get that piece of information and you don’t have to work too hard to get it. We call it task completion. If you have to struggle to find an answer, then we’re not successful. But if you can ask a question, and we can get you the answer, and you go, yeah, that’s the answer, that’s success to me. And we’ll be wildly successful if the types of things where that happens become more and more complex. You know, where if someone can start asking questions where you are synthesizing data and computing answers from multiple pieces of information, for me, that’s the wildly successful part. And we’re not there yet with what we’re going to deliver into product, but it’s on the research horizon. It will be incremental. It’s not going to happen all at once. But I can see it coming, and hopefully by the time I retire, I can see significant progress in that direction.

Host: Off script a little… will I be talking to my computer, my phone, a HoloLens? Who am I asking? Where am I asking? What device? Is that so “out there” as well?

T.J. Hazen: Uh, yeah, I don’t know how to think about where devices are going. You know, when I was a kid, I watched the original Star Trek, you know, and everything on there, it seemed like a wildly futuristic thing, you know? And then fifteen, twenty years later, everybody’s got their own little “communicator.”

Host: Oh my gosh.

T.J. Hazen: And so, uh, you know, the fact that we’re now beyond where Star Trek predicted we would be, you know, that itself, is impressive to me. So, I don’t want to speculate where the devices are going. But I do think that this ability to answer questions, it’s going to get better and better. We’re going to be more interconnected. We’re going to have more access to data. The range of things that computers will be able to answer is going to continue to expand. And I’m not quite sure exactly what it looks like in the future, to be honest, but, you know, I know it’s going to get better and easier to get information. I’m a little less worried about, you know, what the form factor is going to be. I’m more worried about how I’m going to actually answer questions reliably.

Host: Well it’s story time. Tell us a little bit about yourself, your life, your path to MSR. How did you get interested in computer science research and how did you land where you are now working from Microsoft Research in New England for Montreal?

T.J. Hazen: Right. Well, I’ve never been one to long-term plan for things. I’ve always gone from what I find interesting to the next thing I find interesting. I never had a really serious, long-term goal. I didn’t wake up some morning when I was seven and say, oh, I want to be a Principal Research Manager at Microsoft in my future! I didn’t even know what Microsoft was when I was seven. I went to college and I just knew I wanted to study computers. I didn’t know really what that meant at the time, it just seemed really cool.

Host: Yeah.

T.J. Hazen: I had an Apple II when I was a kid and I learned how to do some basic programming. And then I, you know, was going through my course work. I was, in my junior year, I was taking a course in audio signal processing and in the course of that class, we got into a discussion about speech recognition, which to me was, again, it was Star Trek. It was something I saw on TV. Of course, now it was Next Generation….!

Host: Right!

T.J. Hazen: But you know, you watch the next generation of Star Trek and they’re talking to the computer and the computer is giving them answers and here somebody is telling me you know there’s this guy over in the lab for computer science, Victor Zue, and he’s building systems that recognize speech and give answers to questions! And to me, that was science-fiction. So, I went over and asked the guy, you know, I heard you’re building a system, and can I do my bachelor’s thesis on this? And he gave me a demo of the system – it was called Voyager – and he asked a question, I don’t remember the exact question, but it was probably something like, show me a map of Harvard Square. And the system starts chugging along and it’s showing results on the screen as it’s going. And it literally took about two minutes for it to process the whole thing. It was long enough that he actually explained to me how the entire system worked while it was processing. But then it came back, and it popped up a map of Harvard Square on the screen. And I was like, ohhh my gosh, this is so cool, I have to do this! So, I did my bachelor’s thesis with him and then I stayed on for graduate school. And by seven years later, we had a system that was running in real time. We had a publicly available system in 1997 that you could call up on a toll-free number and you could ask for weather reports and weather information for anywhere in the United States. And so, the idea that it went from something that was “Star Trek” to something that I could pick up my phone, call a number and, you know, show my parents, this is what I’m working on, it was astonishing how fast that developed! I stayed on in that field with that research group. I was at MIT for another fifteen years after I graduated. At some point, a lot of the things that we were doing, they moved from the research lab to actually being real.

Host: Right.

T.J. Hazen: So, like twenty years after I went and asked to do my bachelor’s thesis, Siri comes out, okay? And so that was our goal. They were like, twenty years ago, we should be able to have a device where you can talk to it and it gives you answers and twenty years later there it was. So, that, for me, that was a queue that maybe it’s time to go where the action is, which was in companies that were building these things. Once you have a large company like Microsoft or Google throwing their resources behind these hard problems, then you can’t compete when you’re in academia for that space. You know, you have to move on to something harder and more far out. But I still really enjoyed it. So, I joined Microsoft to work on Cortana…

Host: Okay…

T.J. Hazen: …when we were building the first version of Cortana. And I spent a few years working on that. I’ve worked on some Bing products. I then spent some time in Azure trying to transfer these things so that companies that had the similar types of problems could solve their problems on Azure with our technology.

Host: And then we come full circle to…

T.J. Hazen: Then full circle, yeah. You know, once I realized that some of the stuff that customers were asking for wasn’t quite ready yet, I said, let me go back to research and see if I can improve that. It’s fantastic to see something through all the way to product, but once you’re successful and you have something in a product, it’s nice to then say, okay, what’s the next hard problem? And then start over and work on the next hard problem.

Host: Before we wrap up, tell us one interesting thing about yourself, maybe it’s a trait, a characteristic, a life event, a side quest, whatever… that people might not know, or be able to find on a basic web search, that’s influenced your career as a researcher?

T.J. Hazen: Okay. You know, when I was a kid, maybe about eleven years old, the Rubik’s Cube came out. And I got fascinated with it. And I wanted to learn how to solve it. And a kid down the street from my cousin had taught himself from a book how to solve it. And he taught me. His name was Jonathan Cheyer. And he was actually in the first national speed Rubik’s Cube solving competition. It was on this TV show, That’s Incredible. I don’t know if you remember that TV show.

Host: I do.

T.J. Hazen: It turned out what he did was, he had learned what is now known as the simple solution. And I learned it from him. And I didn’t realize it until many years later, but what I learned was an algorithm. I learned, you know, a sequence of steps to solve a problem. And once I got into computer science, I discovered all that problem-solving I was doing with the Rubik’s Cube and figuring out what are the steps to solve a problem, that’s essentially what things like machine learning are doing. What are the steps to figure out, what are the features of something, what are the steps I have to do to solve the problem? I didn’t realize that at the time, but the idea of being able to break down a hard problem like solving a Rubik’s Cube, and figuring out what are the stages to get you there, is interesting. Now, here’s the interesting fact. So, Jonathan Cheyer, his older brother is Adam Cheyer. Adam Cheyer is one of the co-founders of Siri.

Host: Oh my gosh. Are you kidding me?

T.J. Hazen: So, I met the kid when I was young, and we didn’t really stay in touch. I discovered, you know, many years later that Adam Cheyer was actually the older brother of this kid who taught me the Rubik’s Cube years and years earlier, and Jonathan ended up at Siri also. So, it’s an interesting coincidence that we ended up working in the same field after all those years from this Rubik’s Cube connection!

Host: You see, this is my favorite question now because I’m getting the broadest spectrum of little things that influenced and triggered something…!

Host: At the end of every podcast, I give my guests a chance for the proverbial last word. Here’s your chance to say anything you want to would-be researchers, both applied and otherwise, who might be interested in working on machine reading comprehension for real-world applications.

T.J. Hazen: Well, I could say all the things that you would expect me to say, like you should learn about deep learning algorithms and you should possibly learn Python because that’s what everybody is using these days, but I think the single most important thing that I could tell anybody who wants to get into a field like this is that you need to explore it and you need to figure out how it works and do something in depth. Don’t just get some instruction set or some high-level overview on the internet, run it on your computer and then say, oh, I think I understand this. Like get into the nitty-gritty of it. Become an expert. And the other thing I could say is, of all the people I’ve met who are extremely successful, the thing that sets them apart isn’t so much, you know, what they learned, it’s the initiative that they took. So, if you see a problem, try to fix it. If you see a problem, try to find a solution for it. And I say this to people who work for me. If you really want to have an impact, don’t just do what I tell you to do, but explore, think outside the box. Try different things. OK? I’m not going to have the answer to everything, so therefore, if I don’t have the answer to everything, then if you’re only doing what I’m telling you to do, then we both, together, aren’t going to have the answer. But if you explore things on your own and take the initiative and try to figure out something, that’s the best way to really be successful.

Host: T.J. Hazen, thanks for coming in today, all the way from the east coast to talk to us. It’s been delightful.

T.J. Hazen: Thank you. It’s been a pleasure.

(music plays)

To learn more about Dr. T.J. Hazen and how researchers and engineers are teaching machines to answer complicated questions, visit Microsoft.com/research


Heineken’s Athina Syrrou and Microsoft’s Brad Anderson talk Teams in ‘The Shiproom’ | Transform

In this episode of “The Shiproom,” Athina Syrrou, who leads collaboration and end user devices for Heineken, joins Microsoft’s Brad Anderson, corporate vice president of Microsoft 365, to discuss what got Heineken interested in using Microsoft Teams and what they’ve learned about it since beginning the pilot – including how to introduce and adopt it efficiently.

Syrrou explains how she chooses the tools she provides to her global workforce, and how she uses the cloud to give her users maximum flexibility to choose the apps and devices they need.  She also schools Anderson on how to use common Greek idioms around the office (which explains why he’s recently been mumbling things about roller skates, chair legs and ducks).

Other discussion topics: the superiority of Greek yogurt, the perfect beer to pair with cereal, the benefits of moving to Intune, elephants, and how deploying Microsoft 365 gives users the flexibility needed to do their best work and enable BYOD.

Stop by The Shiproom on YouTube to view more episodes. To learn how you can shift to a modern desktop with Microsoft 365, visit Microsoft365.com/Shift.


Inside Xbox Episode 5 News Recap – Xbox Wire

Earlier today, Inside Xbox Episode 5 aired, continuing to pull back the curtain on Team Xbox to celebrate our games, features, and fans. This episode was full of closer looks at some big upcoming games, including No Man’s Sky, We Happy Few, and Earthfall, as well as the announcement of a huge addition to Xbox Game Pass. So, without further ado, let’s take a closer look at some of the biggest news coming out of this month’s episode of Inside Xbox.

Rocket League and Warhammer: Vermintide 2 are Coming to Xbox Game Pass

The Xbox Game Pass catalog continues to grow this week, thanks to the addition of a couple of awesome titles. First up, the much-loved Rocket League, which blends elements of soccer, racing, and demolition derbies together to create a wonderful whole, hits Xbox Game Pass today. Then, tomorrow, Warhammer: Vermintide 2 brings its mix of over-the-top gore and first-person hacking and slashing to the service.

The Sport White Special Edition Xbox One Controller

Featuring beautiful, clean lines and a snazzy design, the latest addition to the Xbox One controller family is a looker. Inspired by sports and sneakers, the Sport White’s got mint green accents and grey and silver patterns to go along with its fresh white design. If you’re a sneaker head, you’ll definitely want one of these. You can snag this sporty beauty at the Microsoft Store and other retailers beginning July 31st in the U.S. and Canada, and then worldwide on August 7th.

A Closer Look at No Man’s Sky

The highly-anticipated space exploration game No Man’s Sky is hitting Xbox One on July 24, so we had Hello Games founder Sean Murray on to share a bit about how excited the team is to be bringing the game to our consoles. He also showed a new video created by the team that breaks down 11 new features, from freighters to alien sidekicks, added to the game since its initial launch, all of which will be available when the game launches on Xbox One.

We Happy Few Adds a Story Mode

The Inside Xbox team was joined by Guillaume Provost from Compulsion Games, the latest studio to join the Microsoft Studios family. Guillaume showed off We Happy Few’s new story mode for the first time, sharing that you’ll be able to see events in the game from multiple perspectives as you play. This is going to be one wild ride, and we can’t wait to see more when the game releases in August.

Surviving an Alien Invasion in Earthfall

Coming to join us from their studio just up the road in Bellevue, the team from developer Holospark gave us a closer look at the upcoming game Earthfall, which releases this coming Friday, July 13. Earthfall is a four-player co-op shooter that tasks players with surviving an alien invasion, and it looks like a blast. Even better, the guys announced that all maps and additions to the game will be absolutely free to anyone who purchases it. There will also be Mixer integration, so save up that Spark to help your friends!

Seasons Change in Forza Horizon 4

To close out the show, the team was joined by some familiar faces from Playground Games, who came on to give fans a closer look at this highly anticipated (and absolutely gorgeous) Xbox One racing game. This segment led into a special livestream on mixer.com/forzamotorsport, where the team at Playground Games highlighted the summer season, including interviews with the team and community Q&A.

Thanks to everyone who tuned in! We hope you enjoyed the show and we can’t wait to tell you all about next month’s episode in a few weeks.

Episode 5: Norway – Iris Classon


In this episode of the MVP Show, Seth met with Iris Classon in her hometown of Stavanger, Norway. At a quaint coffee shop in the charming Fargegaten/Øvre Holmegate (Upper Holm Street/Color Street), Iris talked about microservice deployment models, security, and multitenant authentication. After talking about the cloud, Seth felt inspired to deploy to the cloud himself! Up to the clouds above, Iris took Seth to Preikestolen (Pulpit Rock), where they talked about her life before becoming a developer, some of her first projects, and Seth’s fitness abilities.

Follow @Ch9

Follow @SethJuarez

Follow @IrisClasson

Follow @MVP

Saturday Night Live’s Michael Che and Colin Jost to Join Xbox Live Sessions

Hello Xbox gamers! We’re excited to announce another episode of Xbox Live Sessions, this one taking place this upcoming Monday night, October 30 at 5 p.m. PT. “Saturday Night Live” cast members Colin Jost and Michael Che will take a break from their hectic schedules to join Xbox Live Sessions, an interactive livestream hosted on the Mixer Xbox Channel, to play the recently released Wolfenstein II: The New Colossus.

We’re looking forward to hosting Colin and Michael, who are both no strangers to gaming, for an action-filled night playing Wolfenstein II, which focuses on trying to overthrow the Nazi occupation of America. The New Colossus, which is now available on Xbox One, has received critical acclaim for its blend of first-person action, adventure, and storytelling.

We can’t wait to have these talented comedians and castmates join Xbox Live Sessions, which we anticipate will be an entertaining livestream. Also, be sure to tune in for the chance to submit questions as well as win special prizes. This episode of Xbox Live Sessions will be hosted by Microsoft Studios Community Manager Rukari Austin.

See you on Monday, October 30 at 5 p.m. PT on Mixer and stay tuned to Xbox Wire for future episodes of Xbox Live Sessions!

Securing a Digital Battlefield – .future podcast #1 – YouTube

Can a Digital Geneva Convention help us prevent cyberattacks?

This episode features:
Steven Petrow — a journalist who writes about digital life 
Scott Charney — a security expert at Microsoft
Cyrus Farivar — an editor at Ars Technica
Brad Smith — Microsoft’s president and chief legal officer
Heidi Tworek — a professor who writes about the history of media and technology

Learn more at http://microsoft.com/storylabs