Atlassian CISO Adrian Ludwig shares DevOps security outlook

BOSTON — Atlassian chief information security officer and IT industry veteran Adrian Ludwig is well aware of a heightened emphasis on DevOps security among enterprises heading into 2020 and beyond, and he believes that massive consolidation between DevOps and cybersecurity toolsets is nigh.

Ludwig, who joined Atlassian in May 2018, previously worked at Nest, Macromedia, Adobe and Google’s Android, as well as the U.S. Department of Defense. Now, he supervises Atlassian’s corporate security, including its cloud platforms, and works with the company’s product development teams on security feature improvements.

Atlassian has also begun to build DevOps security features into its Agile collaboration and DevOps tools for customers who want to build their own apps with security in mind. Integrations between Jira Service Desk and Jira issue tracking tools, for example, automatically notify development teams when security issues are detected, and the roadmap for Jira Align (formerly AgileCraft) includes the ability to track code quality, privacy and security on a story and feature level.

However, according to Ludwig, the melding of DevOps and IT security tooling, along with their disciplines, must be much broader and deeper in the long run. SearchSoftwareQuality caught up with him at the Atlassian Open event here to talk about his vision for the future of DevOps security, how it will affect Atlassian, and the IT software market at large.

SearchSoftwareQuality: We’re hearing more about security by design and applications security built into the DevOps process. What might we expect to see from Atlassian along those lines?

Ludwig: As a security practitioner, probably the most alarming factoid about security — and it gets more alarming every year — is the number of open roles for security professionals. I remember hearing at one point it was a million, and somebody else was telling me that they had found 3 million. So there’s this myth that people are going to be able to solve security problems by having more people in that space.

And an area that has sort of played into that myth is around tooling for the creation of secure applications. And a huge percentage of the current security skills gap is because we’re expecting security practitioners to find those tools, integrate those tools and monitor those tools when they weren’t designed to work well together.

Adrian Ludwig

It’s currently ridiculously difficult to build software securely. Just to think about what it means in the context of Atlassian, we have to license tools from half a dozen different vendors and integrate them into our environment. We have to think about how results from those tools flow into the [issue] resolution process. How do you bind it into Jira, so you can see the tickets, so you can get it into the hands of the developer? How do you make sure that test cases associated with fixing those issues are incorporated into your development pipeline? It’s a mess.

My expectation is that the only way we’ll ever get to a point where software can be built securely is if those capabilities are incorporated directly into the tools that are used to deliver it, as opposed to being add-ons that come from third parties.

SSQ: So does that include Atlassian?

Ludwig: I think it has to.

SSQ: What would that look like?

Ludwig: One of the areas where my team has been building something like that is the way we monitor our security investigations. We’ve actually released some open source projects in this area, where the way that we create alerts for Splunk, which we use as our SIEM, is tied into Jira tickets and Confluence pages. When we create alerts, a Confluence page is automatically generated, and Jira tickets are generated that then flow to our analysts to follow up on. And that’s actually tied in more broadly to our overall risk management system.
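To make that pattern concrete, here is a minimal sketch of an alert handler that creates a Confluence page and a Jira ticket through the public Atlassian REST APIs. The site URL, space key, project key and credentials are placeholders for illustration, not Atlassian's actual internal tooling:

```python
import requests

# Placeholder site and credentials, not a real deployment.
ATLASSIAN_SITE = "https://example.atlassian.net"
AUTH = ("alerts-bot@example.com", "api-token")

def handle_alert(alert_name, description):
    """Turn one SIEM alert into a Confluence runbook page and a Jira ticket."""
    # 1. Create a Confluence page documenting the alert.
    page = requests.post(
        f"{ATLASSIAN_SITE}/wiki/rest/api/content",
        auth=AUTH,
        json={
            "type": "page",
            "title": f"Alert runbook: {alert_name}",
            "space": {"key": "SEC"},  # hypothetical space key
            "body": {
                "storage": {
                    "value": f"<p>{description}</p>",
                    "representation": "storage",
                }
            },
        },
    )
    page.raise_for_status()

    # 2. Create a Jira ticket for an analyst to follow up on.
    issue = requests.post(
        f"{ATLASSIAN_SITE}/rest/api/2/issue",
        auth=AUTH,
        json={
            "fields": {
                "project": {"key": "SECOPS"},  # hypothetical project key
                "summary": f"Investigate alert: {alert_name}",
                "description": description,
                "issuetype": {"name": "Task"},
            }
        },
    )
    issue.raise_for_status()
    return issue.json()["key"]
```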

We are also working on some internal tools to make it easier for us to connect the third-party products that look for security vulnerabilities directly into Bitbucket. Every single time we do a pull request, source code analysis runs. And it’s not just a single piece of source code analysis; it’s a wide range of them. Is that particular pull request referencing any out-of-date libraries? Are there dependencies that need to be updated? And then those findings become comments that get added into the peer review process.
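The last step he describes, turning scanner output into review comments, can be sketched against the Bitbucket Cloud REST API roughly as follows; the workspace, repository, bot account and findings structure are hypothetical:

```python
import requests

BITBUCKET_API = "https://api.bitbucket.org/2.0"
AUTH = ("scanner-bot", "app-password")  # placeholder bot credentials

def post_scan_findings(workspace, repo, pr_id, findings):
    """Attach static-analysis and dependency findings to a pull request as comments."""
    url = (f"{BITBUCKET_API}/repositories/{workspace}/{repo}"
           f"/pullrequests/{pr_id}/comments")
    for finding in findings:
        resp = requests.post(
            url,
            auth=AUTH,
            json={"content": {"raw": f"[{finding['tool']}] {finding['message']}"}},
        )
        resp.raise_for_status()

# Hypothetical findings from a dependency scanner run on pull request #42.
post_scan_findings(
    "example-team", "example-repo", 42,
    [{"tool": "dependency-check",
      "message": "library foo 1.2.3 is out of date; upgrade to 1.4.0"}],
)
```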

It’s not something that we’re currently making commercially available, nor do we have specific plans at this point to do that, so I’m not announcing anything. But that’s the kind of thing that we are doing. My job is to make sure that we ship the most secure software that we possibly can, and if there are commercial opportunities, which I think there are, then it seems natural that we might do those as well.

SSQ: What does that mean for the wider market as DevOps and security tools converge?

Ludwig: Over the next 10 years, there’s going to be massive consolidation in that space. That trend is one that we’ve seen other places in the security stack. For example, I came from Android. Android now has primary responsibility, as a core platform capability, for all of the security of that device. Your historical desktop operating systems? Encryption was an add-on. Sandboxing was an add-on. Monitoring for viruses was an add-on. Those are all now part of the mobile OS platform.

If you look at the antivirus vendors, you’ve seen them stagnate, and they didn’t have an on-ramp onto mobile. I think it’s going to be super interesting to watch a lot of the security investments made over the last 10 years, especially in the developer space, and think through how that’s going to play out. I think there’s going to be consolidation there. It’s all converging, and as it converges, a lot of stuff’s going to die.

Data Center Scale Computing and Artificial Intelligence with Matei Zaharia, Inventor of Apache Spark

Matei Zaharia, Chief Technologist at Databricks & Assistant Professor of Computer Science at Stanford University, in conversation with Joseph Sirosh, Chief Technology Officer of Artificial Intelligence in Microsoft’s Worldwide Commercial Business


At Microsoft, we are privileged to work with individuals whose ideas are blazing a trail, transforming entire businesses through the power of the cloud, big data and artificial intelligence. Our new “Pioneers in AI” series features insights from such pathbreakers. Join us as we dive into these innovators’ ideas and the solutions they are bringing to market. See how your own organization and customers can benefit from their solutions and insights.

Our first guest in the series, Matei Zaharia, started the Apache Spark project during his PhD at the University of California, Berkeley, in 2009. His research was recognized through the 2014 ACM Doctoral Dissertation Award for the best PhD dissertation in Computer Science. He is a co-founder of Databricks, which offers a Unified Analytics Platform powered by Apache Spark. Databricks’ mission is to accelerate innovation by unifying data science, engineering and business. Microsoft has partnered with Databricks to bring you Azure Databricks, a fast, easy, and collaborative Apache Spark based analytics platform optimized for Azure. Azure Databricks offers one-click set up, streamlined workflows and an interactive workspace that enables collaboration between data scientists, data engineers, and business analysts to generate great value from data faster.

So, let’s jump right in and see what Matei has to say about Spark, machine learning, and interesting AI applications that he’s encountered lately.

Video and podcast versions of this session are available at the links below. The podcast is also available from your Spotify app and via Stitcher. Alternatively, just continue reading the text version of their conversation below, via this blog post.

Joseph Sirosh: Matei, could you tell us a little bit about how you got started with Spark and this new revolution in analytics you are driving?

Matei Zaharia: Back in 2007, I started doing my PhD at UC Berkeley and I was very interested in data center scale computing, and we just saw at the time that there was an open source MapReduce implementation in Apache Hadoop, so I started early on by looking at that. Actually, the first project was profiling Hadoop workloads to identify some bottlenecks and, as part of that, we made some improvements to the Hadoop job scheduler and that actually went into Hadoop and I started working with some of the early users of that, especially Facebook and Yahoo. And what we saw across all of these is that this type of large data center scale computing was very powerful, there were a lot of interesting applications they could do with them, but just the MapReduce programming model alone wasn’t really sufficient – especially for machine learning, that’s something everyone wanted to do where it wasn’t a good fit, but also for interactive queries and streaming and other workloads.

So, after seeing this for a while, the first project we built was the Apache Mesos cluster manager, to let you run other types of computations next to Hadoop. And then we said, you know, we should try to build our own computation engine which ended up becoming Apache Spark.

JS: What was unique about Spark?

MZ: I think there were a few interesting things about it. One of them was that it tried to be a general or unified programming model that can support many types of computations. So, before the Spark project, people wanted to do these different computations on large clusters and they were designing specialized engines to do particular things, like graph processing, SQL, custom code, ETL which would be MapReduce – they were all separate projects and engines. So in Spark we kind of stepped back, looked at these and said, is there any way we can come up with a common abstraction that can handle these workloads? And we ended up with something that was a pretty small change to MapReduce – MapReduce plus fast data sharing, which is the in-memory RDDs in Spark – and just hooking these up into a graph of computations turned out to be enough to get really good performance for all the workloads and match the specialized engines, and also much better performance if your workload combines a bunch of steps. So that is one of the things.

I think the other thing which was important is, having a unified engine, we could also have a very composable API where a lot of the things you want to use would become libraries, so now there are hundreds, maybe thousands, of third-party packages that you can use with Apache Spark which just plug into it and that you can combine into a workflow. Again, none of the earlier engines had focused on establishing a platform and an ecosystem, but that’s what’s really valuable to users and developers: just being able to pick and choose libraries and combine them.
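As a rough illustration of that unified model, the PySpark sketch below mixes SQL, DataFrame code and cached in-memory reuse in a single job; the input path and column names are made up for the example:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("unified-example").getOrCreate()

# ETL-style ingestion: read a CSV file (placeholder path and schema).
events = spark.read.csv("/data/events.csv", header=True, inferSchema=True)

# SQL on the same data.
events.createOrReplaceTempView("events")
daily = spark.sql("SELECT event_date, count(*) AS n FROM events GROUP BY event_date")

# Custom code on the same engine: cache once, reuse across several passes,
# which is where the fast in-memory data sharing pays off.
errors = events.filter(F.col("status") == "error").cache()
by_user = errors.groupBy("user").count()
by_service = errors.groupBy("service").count()

daily.show()
by_user.show()
by_service.show()
```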

JS: Machine Learning is not just one single thing, it involves so many steps. Now Spark provides a simple way to compose all of these through libraries in a Spark pipeline and build an entire machine learning workflow and application. Is that why Spark is uniquely good at machine learning?

MZ: I think it’s a couple of reasons. One reason is much of machine learning is preparing and understanding the data, both the input data and also actually the predictions and the behavior of the model, and Spark really excels at that ad hoc data processing using code – you can use SQL, you can use Python, you can use DataFrames, and it just makes those operations easy, and, of course, all the operations you do also scale to large datasets, which is, of course, important because you want to train machine learning on lots of data.

Beyond that, it does support iterative in-memory computation, so many algorithms run pretty well inside it, and because of this support for composition and this API where you can plug in libraries, there are also quite a few libraries you can plug in that call external compute engines that are optimized to do different types of numerical computation.
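That composition is what Spark's MLlib Pipeline API exposes directly. A minimal sketch on toy data might look like this:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("pipeline-example").getOrCreate()

# Toy training data: (text, label).
train = spark.createDataFrame(
    [("spark is fast", 1.0), ("slow batch job", 0.0)],
    ["text", "label"],
)

# Each stage is a library component; the Pipeline composes them into one workflow.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
tf = HashingTF(inputCol="words", outputCol="features")
lr = LogisticRegression(maxIter=10)

model = Pipeline(stages=[tokenizer, tf, lr]).fit(train)
model.transform(train).select("text", "prediction").show()
```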

JS: So why didn’t some of these newer deep learning toolsets get built on top of Spark? Why were they all separate?

MZ: That’s a good question. I think a lot of the reason is probably just because people, you know, just started with a different programming language. A lot of these were started with C++, for example, and of course, they need to run on the GPU using CUDA which is much easier to do from C++ than from Java. But one thing we’re seeing is really good connectors between Spark and these tools. So, for example, TensorFlow has a built-in Spark connector that can be used to get data from Spark and convert it to TFRecords. It also actually connects to HDFS and different sorts of big data file systems. At the same time, in the Spark community, there are packages like deep learning pipelines from Databricks and quite a few other packages as well that let you setup a workflow of steps that include these deep learning engines and Spark processing steps.
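Beyond the dedicated connectors and packages he mentions, one common way to combine the two today is to apply a trained deep learning model inside a Spark job with a pandas UDF. In this sketch the model path and the assumption that each row carries a fixed-length feature array are placeholders:

```python
import numpy as np
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf

spark = SparkSession.builder.appName("dl-scoring").getOrCreate()

@pandas_udf("double")
def score(features: pd.Series) -> pd.Series:
    # Each element of `features` is assumed to be a fixed-length float array.
    import tensorflow as tf
    model = tf.keras.models.load_model("/models/example_model")  # placeholder path
    batch = np.stack(features.to_list())
    preds = model.predict(batch)
    return pd.Series(preds.reshape(-1).astype(float))

df = spark.read.parquet("/data/features.parquet")  # placeholder input
df.withColumn("prediction", score(df["features"])).show()
```

In practice the model would typically be loaded once per executor rather than on every batch, but the shape of the integration is the same.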

“None of the earlier engines [prior to Apache Spark] had focused on establishing a platform and an ecosystem.”

JS: If you were rebuilding these deep learning tools and frameworks, would you recommend that people build them on top of Spark? (i.e., instead of the current approach, where each tool does its own distributed computing across GPUs.)

MZ: It’s a good question. I think initially it was easier to write GPU code directly, to use CUDA and C++ and so on. And over time, actually, the community has been adding features to Spark that will make it easier to do that in there. So, there have definitely been a lot of proposals and designs to make GPUs a first-class resource. There’s also this effort called Project Hydrogen which is to change the scheduler to support these MPI-like batch jobs. So hopefully it will become a good platform to do that, internally. I think one of the main benefits of that, again for users, is that they can program in one programming language, they can learn just one way to deploy and manage clusters, and it can do deep learning and the data preprocessing and analytics after that.
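Project Hydrogen's scheduler work surfaced in Spark as barrier execution mode, which gang-schedules all tasks in a stage so they can rendezvous like an MPI job. A minimal sketch, with the actual distributed trainer left as a placeholder, looks like this:

```python
from pyspark import BarrierTaskContext
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("barrier-example").getOrCreate()
sc = spark.sparkContext

def train_partition(iterator):
    ctx = BarrierTaskContext.get()
    # All tasks in a barrier stage start together and can synchronize here,
    # which is what an allreduce-style distributed trainer needs.
    ctx.barrier()
    workers = [info.address for info in ctx.getTaskInfos()]
    # A real distributed trainer (for example Horovod) would be launched here
    # using the worker addresses; this sketch just returns them.
    yield (ctx.partitionId(), workers)

rdd = sc.parallelize(range(8), numSlices=4)
print(rdd.barrier().mapPartitions(train_partition).collect())
```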

JS: That’s great. So, Spark – and Databricks as commercialized Spark – seems to be capable of doing many things in one place. But what is it not good at? Can you share some areas where people should not be stretching Spark?

MZ: Definitely. One of the things it doesn’t do, by design, is it doesn’t do transactional workloads where you have fine-grained updates. So, even though it might seem like you can store a lot of data in memory and then update it and serve it, it is not really designed for that. It is designed for computations that have a large amount of data in each step. So, it could be streaming large continuous streams, or it could be batch, but it is not for these point queries.

And I would say the other thing is it doesn’t have a built-in persistent storage system. It is designed so it’s just a compute engine and you can connect it to different types of storage, and that actually makes a lot of sense, especially in the cloud, with separating compute and storage and scaling them independently. But it is different from, you know, something like a database where the storage and compute are co-designed to live together.

JS: That makes sense. What do you think of frameworks like Ray for machine learning?

MZ: There are a lot of new frameworks coming out for machine learning and it’s exciting to see the innovation there, both in the programming models, the interface, and how to work with it. So I think Ray has been focused on reinforcement learning, which is where one of the main things you have to do is spawn a lot of little independent tasks, so it’s a bit different from a big data framework like Spark where you’re doing one computation on lots of data – these are separate computations that will take different amounts of time, and, as far as I know, users are starting to use that and getting good traction with it. So, it will be interesting to see how these things come about.

I think the thing I’m most interested in, both for Databricks products and for Apache Spark, is just enabling it to be a platform where you can combine the best algorithms, libraries and frameworks and so on, because that’s what seems to be very valuable to end users, is they can orchestrate a workflow and just program it as easily as writing a single machine application where you just import a bunch of libraries.

JS: Now, stepping back, what do you see as the most exciting applications that are happening in AI today?

MZ: Yeah, it depends on how recent. I mean, in the past five years, deep learning is definitely the thing that has changed a lot of what we can do, and, in particular, it has made it much easier to work with unstructured data – so images, text, and so on. So that is pretty exciting.

I think, honestly, for like wide consumption of AI, the cloud computing AI services make it significantly easier. So, I mean, when you’re doing machine learning AI projects, it’s really important to be able to iterate quickly because it’s all about, you know, about experimenting, about finding whether something will work, failing fast if a particular idea doesn’t work. And I think the cloud makes it much easier.

JS: Cloud AI is super exciting, I completely agree. Now, at Stanford, being a professor, you must see a lot of really exciting pieces of work that are going on, both at Stanford and at startups nearby. What are some examples?

MZ: Yeah, there are a lot of different things. One of the things that is really useful for end users is all the work on transfer learning, and in general all the work that lets you get good results with AI using smaller training datasets. There are other approaches, like weak supervision, that do that as well. And the reason that’s important is that for web-scale problems you have a lot of labeled data, so for something like web search you can solve it, but for many scientific or business problems you don’t have that. So, how can you learn from a large dataset that’s not quite in your domain, like the web, and then apply it to something like, say, medical images, where only a few hundred patients have a certain condition so you can’t get a zillion images? So that’s where I’ve seen a lot of exciting stuff.

But yeah, there’s everything from new hardware for machine learning where you throw away the constraints that the computation has to be precise and deterministic, to new applications, to things like, for example security of AI, adversarial examples, verifiability, I think they are all pretty interesting things you can do.
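The transfer learning pattern Matei points to, reusing a network trained on a large generic dataset for a small domain-specific task, can be sketched in a few lines. This example uses PyTorch and torchvision; the framework choice, the two-class task and the dummy batch are assumptions for illustration:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pre-trained on a large, generic dataset (ImageNet)
# and adapt only the final layer to a small domain-specific task,
# e.g. a two-class medical-imaging problem with few labeled examples.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():
    param.requires_grad = False  # freeze the pre-trained feature extractor

model.fc = nn.Linear(model.fc.in_features, 2)  # new task-specific head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch; real code would loop
# over a small labeled DataLoader instead.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```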

JS: What are some of the most interesting applications you have seen of AI?

MZ: So many different applications to start with. First of all, we’ve seen consumer devices that bring AI into every home, or every phone, or every PC – these have taken off very quickly and it’s something that a large fraction of customers use, so that’s pretty cool to see.

In the business space, probably some of the more exciting things are actually dealing with image data, where, using deep learning and transfer learning, you can actually start to reliably build classifiers for different types of domain data. So, whether it’s maps, understanding satellite images, or even something as simple as people uploading images of a car to a website and you try to give feedback on that so it’s easier to describe it, a lot of these are starting to happen. So, it’s kind of a new class of data, visual data – we couldn’t do that much with it automatically before, and now you can get both like little features and big products that use it.

JS: So what do you see as the future of Databricks itself? What are some of the innovations you are driving?

MZ: Databricks, for people not familiar, we offer basically, a Unified Analytics Platform, where you can work with big data mostly through Apache Spark and collaborate with it in an organization, so you can have different people, developing say notebooks to perform computations, you can have people developing production jobs, you can connect these together into workflows, and so on.

So, we’re doing a lot of things to further expand on that vision. One of the things that we announced recently is what we call the machine learning runtime, where we have preinstalled versions of popular machine learning libraries like XGBoost, TensorFlow or Horovod on your Databricks cluster, so you can set those up as easily as you could set up an Apache Spark cluster in the past. And then another product that we featured a lot at our Spark Summit conference this year is Databricks Delta, which is basically a transactional data management layer on top of cloud object stores that lets us do things like indexing and reliable exactly-once stream processing at very massive scale, and that’s a problem all our users have, because all our users have to set up a reliable data ingest pipeline.
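Databricks Delta (now available as the open source Delta Lake project) is what provides the transactional layer he describes. On a cluster with the Delta libraries available, batch plus exactly-once streaming ingest into the same table looks roughly like this; the paths are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-example").getOrCreate()

events = spark.read.json("/data/raw_events")  # placeholder input

# Write as a Delta table: writes are transactional, so a failed job
# never leaves half-written files behind.
events.write.format("delta").mode("append").save("/delta/events")

# Streaming ingest into the same table gets exactly-once semantics
# via the checkpoint location.
stream = (
    spark.readStream.schema(events.schema).json("/data/incoming")
    .writeStream.format("delta")
    .option("checkpointLocation", "/delta/_checkpoints/events")
    .start("/delta/events")
)
```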

JS: Who are some of the most exciting customers of Databricks and what are they doing?

MZ: There are a lot of really interesting customers doing pretty cool things. So, at our conference this year, for example, one of the really cool presentations we saw was from Apple. So, Apple’s internal information security group – this is the group that does network monitoring, basically gets hundreds of terabytes of network events per day to process, to detect intrusions and information security problems. They spoke about using Databricks Delta and streaming with Apache Spark to handle all of that – so it’s one of the largest applications people have talked about publicly, and it’s very cool because the whole goal there – it’s kind of an arms race between the security team and attackers – so you really want to be able to design new rules, new measurements and add new data sources quickly. And so, the ease of programming and the ease of collaborating with this team of dozens of people was super important.

We also have some really exciting health and life sciences applications, so some of these are actually starting to discover new drugs that companies can actually productionize to tackle new diseases, and this is all based on large scale genomics and statistical studies.

And there are a lot of more fun applications as well. Like actually the largest video game in the world, League of Legends, they use Databricks and Apache Spark to detect players that are misbehaving or to recommend items to people or things like that. These are all things that were featured at the conference.

JS: If you had one piece of advice for developers and customers using Spark or Databricks, or guidance on what they should learn, what would that be?

MZ: It’s a good question. There are a lot of high-quality training materials online, so I would say definitely look at some of those for your use case and see what other people are doing in that area. The Spark Summit conference is also a good way to see videos and talks and we make all of those available for free, the goal of that is to help and grow the developer community. So, look for someone who is doing similar things and be inspired by that and kinda see what the best practices are around that, because you might see a lot of different options for how to get started and it can be hard to see what the right path is.

JS: One last question – in recent years there’s been a lot of fear, uncertainty and doubt about AI, and a lot of popular press. Now – how real are they, and what do you think people should be thinking?

MZ: That’s a good question. My personal view is – this sort of evil artificial general intelligence stuff – we are very far away from it. And basically, if you don’t believe that, I would say just try doing machine learning tutorials and see how these models break down – you get a sense for how difficult that is.

But there are some real challenges that will come from AI, so I think one of them is the same challenge as with all technology which is, automation – how quickly does it happen. Ultimately, after automation, people usually end up being better off, but it can definitely affect some industries in a pretty bad way and if there is no time for people to transition out, that can be a problem.

I think the other interesting problem, which there is always a discussion about, is basically access to data, privacy, managing the data, algorithmic discrimination – I think we are still figuring out how to handle that. Companies are doing their best, but there are also many unknowns as to how these techniques will handle that. That’s why we’ll see better best practices or regulations and things like that.

JS: Well, thank you Matei, it’s simply amazing to see the innovations you have driven, and looking forward to more to come.

MZ: Thanks for having me.

“When you’re doing machine learning AI projects, it’s really important to be able to iterate quickly because it’s all about experimenting, about finding whether something will work, failing fast if a particular idea doesn’t work. And I think the cloud makes it much easier.”

We hope you enjoyed this blog post. This being our first episode in the series, we are eager to hear your feedback, so please share your thoughts and ideas below.

The AI / ML Blog Team

Empowering agents with Office 365—Douglas Elliman boosts the agent experience – Microsoft 365 Blog

Today’s post was written by Jeffrey Hummel, chief technology officer at Douglas Elliman.

When I began work at Douglas Elliman, I was attracted to the company’s heritage—more than 100 years of premier real estate sales experience delivers a cachet that’s a big part of our brand. I was also intrigued by the opportunities I saw to use IT to transform this historic company with new tools and services. We wanted to empower agents to be even more successful in an internet-based market where closing a deal often depends on how quickly an agent can respond to a customer’s IM. Although Douglas Elliman agents are independent contractors, they face the same challenges as any distributed sales force: how to stay productive while working away from the office. They need easy, highly secure access to their data and their colleagues. So, I made it a priority to empower our agents with Microsoft Office 365.

We view our more than 7,000 agents as our best asset and our competitive advantage. They are some of the most knowledgeable people in the industry—we operate in approximately 113 offices across New York City, Long Island, The Hamptons, Westchester, Connecticut, New Jersey, Florida, California, Colorado, and Massachusetts. And we are one of New York City’s top real estate firms ranked by agents with $10 million-plus listings. Yet, when I arrived two and a half years ago, many agents were worried about technology somehow replacing them. I reassured everyone in our sales offices that Douglas Elliman had a new mission: to improve, enhance, and elevate the agent experience. Today, we use Office 365 to show that we care for our agents more than anything else. And agents have gone from saying that IT kept them from working to their best ability to IT being the reason they now are.

We looked at other cloud platforms, but they did not reflect our core values. The tools we chose had to be easy to use, elegant, and efficient—and Office 365 meets all those requirements. Our agents range in age from 21 to 91. I love it when agents with decades of experience tell me, “Jeff, I just did my marketing report, and it took half the time! I was fully connected to all the data I needed online, and I had no trouble finding it.”

I’m most excited about how agents use these productivity tools to help more customers buy and sell more property. We are launching a new intranet, built on Microsoft SharePoint Online, which offers an agent app store where Office 365 will be front and center. Everyone will go there to access the tools they need to run their business and collaborate with their teams. Like many independent sales reps, each of our agents has unique work styles and demands. It’s a big benefit that we can offer customizable tools flexible enough for individual agents to choose how to run their business.

Some agents have already replaced Slack with Microsoft Teams. I consider Teams the greatest thing since the invention of the telephone. With so many options for collaboration all in one place, there’s something for everyone within a given group to improve virtual teamwork. Our top agents can have up to 10 people working for them in different offices. One agent has three members who create marketing materials and two others who do nothing but research commercial properties. They share everything using OneDrive cloud storage. Now we’re showing that agent the value of augmenting this process with Teams as a hub for teamwork where she can quickly access not only relevant materials but also all related communications among her team members. So, when they are talking to the next big client, they’ll have all the information they need in one place to help find a new storefront.

Personal productivity is way up, too. Another top agent who works with new development clients regularly juggles dozens of units at a time. He has to access enormous amounts of data, some of which is not in the public record. He used to store all the information accumulated from his work experience in 36 filing cabinets at the office. So, when a developer asked about zoning for a building site, for example, the agent had to call someone in the office to go and dig through the files. Not anymore. We scanned, categorized, and uploaded all his documents to OneDrive. Now he can get that information himself in less than a second from his mobile device. Using leading-edge tools, this highly successful agent has more time to build relationships with more developers, and his business is expanding.

Along with the launch of our new intranet, aptly named Douglas, we are going to introduce our AI chatbot, AskDouglas. This will start with some basic questions and answers and then evolve to be the go-to source for our agents to get questions answered about historical and relevant information within Douglas Elliman.

While we move our agents’ data to the cloud and introduce cloud-based business tools, we’re also improving our security posture and complying better with data privacy regulations. By using Microsoft security solutions that notify us when an agent’s account may be compromised, we can take proactive steps to thwart an attack, without the agent even knowing.

In two years, the company has changed the impact of IT through our mission to enhance and support our sales force. Today, we have agents raving to the executive team about the transformation they’ve seen in their technology tools and work styles. With the advantages of online collaboration and productivity services, plus real-time access to information, we recruit and retain top talent. Working with Office 365, we are strengthening our core advantage—the knowledge and experience of our agents—and putting it toward the next 100 years at Douglas Elliman.

—Jeffrey Hummel

A conversation with Microsoft CTO Kevin Scott – Microsoft Research

Chief Technology Officer Kevin Scott

Episode 36, August 8, 2018

Kevin Scott has embraced many roles over the course of his illustrious career in technology: software developer, engineering executive, researcher, angel investor, philanthropist, and now, Chief Technology Officer of Microsoft. But perhaps no role suits him so well – or has so fundamentally shaped all the others – as his self-described role of “all-around geek.”

Today, in a wide-ranging interview, Kevin shares his insights on both the history and the future of computing, talks about how his impulse to celebrate the extraordinary people “behind the tech” led to an eponymous non-profit organization and a podcast, and… reveals the superpower he got when he was in grad school.

Episode Transcript

Kevin Scott: It’s a super exciting time. And it’s certainly something that we are investing very heavily in right now at Microsoft, in the particular sense of like, how do we take the best of our development tools, the best of our platform technology, the best of our AI, and the best of our cloud, to let people build these solutions where it’s not as hard as it is right now?

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Kevin Scott has embraced many roles over the course of his illustrious career in technology: software developer, engineering executive, researcher, angel investor, philanthropist, and now, Chief Technology Officer of Microsoft. But perhaps no role suits him so well – or has so fundamentally shaped all the others – as his self-described role of “all-around geek.”

Today, in a wide-ranging interview, Kevin shares his insights on both the history and the future of computing, talks about how his impulse to celebrate the extraordinary people “behind the tech” led to an eponymous non-profit organization and a podcast, and… reveals the superpower he got when he was in grad school. That and much more on this episode of the Microsoft Research Podcast.

Host: Kevin Scott, welcome to the podcast today.

Kevin Scott: Well thank you so much for having me.

Host: So, you sit in a big chair. I think our listeners would like to know what it’s like to be the Chief Technology Officer of Microsoft. How do you envision your role here, and what do you hope to accomplish in your time? I.e., what are the big questions you’re asking, the big problems you’re working on? What gets you up in the morning?

Kevin Scott: Well, there are tons of big problems. I guess the biggest, and the one that excites me the most and that prompted me to take the job in the first place, is I think technology is playing an increasingly important role in how the future of the world unfolds. And, you know, has an enormous impact in our day-to-day lives from the mundane to the profound. And I think having a responsible philosophy about how you build technology is like a very, very important thing for the technology industry to do. So, in addition to solving all of these, sort of, complicated problems of the “how” – what technology do we build and how do we build it? – there’s also sort of an “if” and a “why” that we need to be addressing as well.

Host: Drill in a little there. The “if” and the “why.” Those are two questions I love. Talk to me about how you envision that.

Kevin Scott: You know, I think one of the more furious debates that we all are increasingly having, and I think the debate itself and the intensity of the debate are good things, is sort of around AI and what impact is AI going to have on our future, and what’s the right way to build it, and what are a set of wrong ways to build it? And I think this is sort of a very important dialogue for us to be having, because, in general, I think AI will have a huge impact on our collective futures. I actually am a super optimistic person by nature, and I think the impact that it’s going to have is going to be absolutely, astoundingly positive and beneficial for humanity. But there’s also this other side of the debate, where…

Host: Well, I’m going to go there later. I’m going to ask you about that. So, we’ll talk a little bit about the dark side. But also, you know, I love the framework. I hear that over and over from researchers here at Microsoft Research that are optimistic and saying, and if there are issues, we want to get on the front end of them and start to drive and influence how those things can play out. So…

Kevin Scott: Yeah, absolutely. There’s a way to think about AI where it’s mostly about building a set of automation technologies that are a direct substitute for human labor, and you can use those tools and technologies to cause disruption. But AI probably is going to be more like the steam engine in the sense that the steam engine was also a direct substitute for human labor. And the people that benefited from it initially were those who had the capital to build them, because they were incredibly expensive, and who had the expertise to design them and to operate and maintain them. And, eventually, the access to this technology fully democratized. And AI will eventually become that. Our role, as a technology company that is building things that empower individuals and businesses, is to democratize access to the technology as quickly as possible and to do that in a safe, thoughtful, ethical way.

Host: Let’s talk about you for a second. You’ve described yourself as an engineering executive, an angel investor, and an all-around geek. Tell us how you came by each of those meta tags.

Kevin Scott: Yeah… The geek was the one that was sort of unavoidable. It felt to me, all my life, like I was a geek. I was this precociously curious child. Not in the sense of you know like playing Liszt piano concertos when I’m 5 years old or anything. No, I was the irritating flavor of precocious where I’m sticking metal objects into electric sockets and taking apart everything that could be taken apart in my mom’s house to try to figure out how things worked. And I’ve had just sort of weird, geeky, obsessive tastes in things my entire life. And I think a lot of everything else just sort of flows from me, at some point, fully embracing that geekiness, and wanting – I mean, so like angel investing for instance is me wanting to give back. It’s like I have benefited so much over the course of my career from folks investing in me when it wasn’t a sure bet at all that that was going to be a good return on their time. But like I’ve had mentors and people who just sort of looked at me, and, for reasons I don’t fully understand, have just been super generous with their time and their wisdom. And angel investing is less about an investment strategy and more about me wanting to encourage that next generation of entrepreneurs to go out and make something, and then trying to help them in whatever way that I can to be successful and find the joy that there is in bringing completely new things into the world that are you know sort of non-obvious and complicated.

Host: Mmmm. Speaking of complicated. One common theme I hear from tech researchers here on this podcast, at least the ones who have been around a while, is that things aren’t as easy as they used to be. They’re much more complex. And in fact, a person you just talked to, Anders Hejlsberg, recently said, “Code is getting bigger and bigger, but our brains are not getting bigger, and this is largely a brain exercise.”

Kevin Scott: Yes.

Host: So, you’ve been around a while. Talk about the increased complexity you’ve seen and how that’s impacted the lives and work of computer scientists and researchers all around.

Kevin Scott: I think interestingly enough, on the one hand, it is far more complicated now than it was, say, 25 years ago. But there’s a flipside to that where we also have a situation where individual engineers or small teams have unprecedented amounts of power in the sense that, through open-source software and cloud computing and the sophistication of the tools that they now use and the very high level of the abstractions that they have access to that they use to build systems and products, they can just do incredible things with far fewer resources and in far shorter spans of time than has ever been possible. It’s almost this balancing act. Like, on the other hand, it’s like, oh my god, the technology ecosystem, the amount of stuff that you have to understand if you are pushing on the state-of-the-art on one particular dimension, which is what we’re calling upon researchers to do all the time, it’s really just sort of a staggering amount of stuff. I think about how much reading I had to do when I was a PhD student, which seemed like a lot at the time. And I just sort of look at the volume of research that’s being produced in each individual field right now. The reading burden for PhD students right now must be unbelievable. And it’s sort of similar, you know, like, if you’re a beginning software engineer, like it’s a lot of stuff. So, it’s this weird dichotomy. I think it’s, perhaps if anything, the right trade off. Because if you want to go make something and you’re comfortable navigating this complexity, the tools that you have are just incredibly good. I could have done the engineering work at my first startup with far, far, far fewer resources, with less money, in a shorter amount of time, if I were building it now versus 2007. But I think that that tension that you have as a researcher or an engineer, like this dissatisfaction that you have with complexity and this impulse to simplicity, it’s exactly the right thing, because if you look at any scientific field, this is just how you make progress.

Host: Listen, I was just thinking, when I was in my master’s degree, I had to take a statistics class. And the guy who taught it was ancient. And he was mad that we didn’t have to do the math because computer programs could already do it. And he’s not wrong. It’s like, what if your computer breaks? Can you do this?

Kevin Scott: That is fascinating, because we have this… old fart computer scientist engineers like me, have this… like we bemoan a similar sort of thing all the time, which is, ahhh, these kids these days, they don’t know what it was like to load their computer program into a machine from a punch paper tape.

Host: Right?

Kevin Scott: And they don’t know what ferrite core memories are, and what misery that we had to endure to… It was fascinating and fun to, you know, learn all of that stuff, and I think you did get something out of it. Like it gave you this certain resilience and sort of fearlessness against these abstraction boundaries. Like you know, if something breaks, like you feel like you can go all the way down to the very lowest level and solve the problem. But it’s not like you want to do that stuff. Like all of that’s a pain in the ass. You can do so much more now than you could then because, to use your statistic professor’s phrase, because you don’t have to do all of the math.

(music plays)

Host: Your career in technology spans the spectrum including both academic research and engineering and leadership in industry. So, talk about the value of having experience in both spheres as it relates to your role now.

Kevin Scott: You know, the interesting thing about the research that I did is, I don’t know that it ever had a huge impact. The biggest thing that I ever did was this work on dynamic binary translation and the thing I’m proudest of is like I wrote a bunch of software that people still use, you know, to this day, to do research in this very arcane, dark alley of computer science. But what I do use all the time that is almost like a superpower that I think you get from being a researcher is being able to very quickly read and synthesize a bunch of super-complicated technical information. I believe it’s less about IQ and it’s more of the skill that you learn when you’re a graduate student trying to get yourself ramped up to mastery in a particular area. It’s just like, read, read, read, read, read. You know, I grew up in this relatively economically depressed part of rural, central Virginia, town of 250 people, neither of my parents went to college. We were poor when I grew up and no one around me was into computers. And like somehow or another, I got into this science and technology high school when I was a senior. And like I decided that I really, really, really wanted to be a computer science professor after that first year. And so, I went into my undergraduate program with this goal in mind. And so, I would sit down with things like the Journal of the ACM at the library, and convince, oh, like obviously computer science professors need to be able to read and understand this. And I would stare at papers in JACM, and I’m like, oh my god, I’m never, ever going to be good enough. This is impossible. But I just kept at it. And you know it got easier by the time that I was finishing my undergraduate degree. And by the time I was in my PhD program, I was very comfortably blasting through stacks of papers on a weekly basis. And then, you know, towards the end of my PhD program, you’re on the program committees for these things, and like not only are you blasting through stacks of papers, but you’re able to blast through things and understand them well enough that you can provide useful feedback for people who have submitted these things for publication. That is an awesome, awesome, like, super-valuable skill to have when you’re an engineering manager, or if you’re a CTO, or you’re anybody who’s like trying to think about where the future of technology is going. So, like every person who is working on their PhD or their master’s degree right now and like this is part of their training, don’t bemoan that you’re having to do it. You’re doing the computer science equivalent of learning how to play that Liszt piano concerto. You’re getting your 10,000 hours in, and like it’s going to be a great thing to have in your arsenal.

Host: Anymore, especially in a digitally-distracted age, being able to pay attention to dense academic papers and/or, you know, anything for a long period of time is a superpower!

Kevin Scott: It is. It really is. You aren’t going to accomplish anything great by you know integrating information in these little 2-minute chunks. I think pushing against the state-of-the-art, like you know creating something new, making something really valuable, requires an intense amount of concentration over long periods of time.

Host: So, you came to Microsoft after working at a few other companies, AdMob, Google, LinkedIn. Given your line of sight into the work that both Microsoft and other tech giants are doing, what kind of perspective do you have on Microsoft’s direction, both on the product and research side, and specifically in terms of strategy and the big bets that this company is making?

Kevin Scott: I think the big tech companies, in particular, are in this really interesting position, because you have both the opportunity and the responsibility to really push the frontier forward. The opportunity, in the sense that you already have a huge amount of scale to build on top of, and the responsibility of knowing that some of the new technologies are just going to require large amounts of resources and sort of patience. You know like one example that we’re working on here at Microsoft is we, the industry, have been worried about the end of Moore’s Law for a very long time now. And it looks like for sort of general purpose flavors of compute, we are pretty close to the wall right now. And so, there are two things that we’re doing at Microsoft right now that are trying to mitigate part of that. So, like one is quantum computing, which is a completely new way to try to build a computer and to write software. And we’ve made a ton of progress over the past several years. And our particular approach to building a quantum computer is really exciting, and it’s like this beautiful collaboration between mathematicians and physicists and quantum information theory folks and systems and programming language folks trained in computer science. But when, exactly, this is going to be like a commercially viable technology? I don’t know. But another thing that we’re you know pushing on, related to this Moore’s wall barrier, is doing machine learning where you’ve got large data sets that you’re fitting models to where you know sort of the underlying optimization algorithms that you’re using for DNNs or like all the way back to more prosaic things like logistic regression, boil down to like a bunch of sort of linear algebra. We are increasingly finding ways to solve these optimization problems in these embarrassingly parallel ways where you can use like special flavors of compute. And so like there’s just a bunch of super interesting work that everybody’s doing with this stuff right now, like, from Doug Burger’s Project Brainwave stuff here at Microsoft to… uh, so it’s a super exciting time I think to be a computer architect again where the magnitude and the potential payoffs of some of these problems are just like astronomically high, and like it takes me back to like the 80s and 90s, you know which were sort of the, maybe the halcyon days of high-performance computing and these like big monolithic supercomputers that we were building at the time. It feels a lot like that right now, where there’s just this palpable excitement about the progress that we’re making. Funny enough, I was having breakfast this morning with a friend of mine, and you know like both of us were saying, man, this is just a fantastic time in computing. You know, like on an almost weekly basis, I encounter something where I’m like, man, this would be so fun to go do a PhD on.

Host: Yeah. And that’s a funny sentence right there.

Kevin Scott: Yeah, it’s a funny sentence. Yeah.

(music plays)

Host: Aside from your day job, you’re doing some interesting work in the non-profit space, particularly with an organization called Behind the Tech. Tell our listeners about that. What do you want to accomplish? What inspired you to go that direction?

Kevin Scott: Yeah, a couple of years ago, I was just looking around at all of the people that I work with who were doing truly amazing things, and I started thinking about how important role models are for both kids, who were trying to imagine a future for themselves, as well as professionals, like people who are already in the discipline who are trying to imagine what their next step ought to be. And it’s always nice to be able to put yourself in the shoes of someone you admire, and say, like, “Oh, I can imagine doing this. I can see myself in this you know in this career.” And I was like we just do a poorer job I think than we should on showing the faces and telling the stories of the people who have made these major contributions to the technology that powers our lives. And so that was sort of the impetus with behindthetech.org. So, I’m an amateur photographer. I started doing these portrait sessions with the people I know in computing who I knew had done impressive things. And then I hired someone to help you know sort of interview them and write a slice of their story so that you know if you wanted to go somewhere and get inspired about you know people who were making tech, you know, behindthetech.org is the place for you.

Host: So, you also have a brand-new podcast, yourself, called Behind the Tech. And you say that you look at the tech heroes who’ve made our modern world possible. I’ve only heard one, and I was super impressed. It’s really good. I encourage our listeners to go find Behind the Tech podcast. Tell us why a podcast on these tech heroes that are unsung, perhaps.

Kevin Scott: I have this impulse in general to try to celebrate the engineer. I’m just so fascinated with the work that people are doing or have done. Like, the first episode is with Anders Hejlsberg, who is a tech fellow at Microsoft, and who’s been building programming languages and development tools for his entire 35-year career. Earlier in his career, like, he wrote this programming language and compiler called Turbo Pascal. You know like I wrote my first real programs using the tools that Anders built. And like he’s gone on from Turbo Pascal to building Delphi, which was one of the first really nice integrated development environments for graphical user interfaces, and then at Microsoft, he was like the chief architect of the C# programming language. And like now, he’s building this programming language based on JavaScript called TypeScript that tries to solve some of the development-at-scale problems that JavaScript has. And that, to me, is like just fascinating. How did he start on this journey? Like, how has he been able to build these tools that so many people love? What drives him? Like I’m just intensely curious about that. And I just want to help share their story with the rest of the world.

Host: Do you have other guests that you’ve already recorded with or other guests lined up?

Kevin Scott: Yeah, we’ve got Alice Steinglass, who is the president of Code.org, who is doing really brilliant things trying to help K-12 students learn computer science. And we’re going to talk with Andrew Ng in a few weeks, who is one of the titans of deep neural networks, machine learning and AI. We’re going to talk with Judy Estrin, who is former CTO of Cisco, a serial entrepreneur, board director at Disney and FedEx for a long time. And just you know one of the OGs of Silicon Valley. Yeah, so it’s you know like, it’s going to be a really good mix of folks.

Host: Yeah, well, it’s impressive.

Kevin Scott: All with fascinating stories.

Host: Yeah, and just having listened to the first one, I was – I mean, it was pretty geeky. I will be honest. There’s a lot of – it was like listening to the mechanics talking about car engines, and I know nothing, but it was…

Kevin Scott: Yeah, right?

Host: But it was fun.

Kevin Scott: That’s great. And like you know I hadn’t even thought about it before. But like if it could be like the sort of computer science and engineering version of Car Talk, that would be awesome.

Host: You won first place at the William Campbell High School Talent Show in 1982 by appearing as a hologram downloaded from the future. Okay, maybe not for real. But an animated version of you did explain the idea of the Intelligent Edge to a group of animated high school hecklers. Assuming you won’t get heckled by our podcast audience, tell us how you feel like AI and machine learning research are informing and enabling the development of edge computing.

Kevin Scott: You know I think this is one of the more interesting emergent trends right now in computing. So, there are basically three things that are coming together at the same time. You know one thing is the growth of IoT, and just embedded computing in general. You can look at any number of estimates of where we’re likely to be, but we’re going to go from about 11 or 12 billion devices connected to the internet to about 20 billion over the next year and a half. But you think about these connected devices – and this is sort of the second trend – like they all are becoming much, much more capable. So, like, they’re coming online and like the silicon and compute power available in all of these devices is just growing at a very fast clip. And going back to this whole Moore’s Law thing that we were talking about, if you look at $2 and $3 microprocessor and microcontrollers, most of those things right now are built on two or three generations older process technologies. So, they are going to increase in power significantly over the coming years, like particularly this flavor of power that you need to run AI models, which is sort of the third trend. So, like you’ve got a huge number of devices being connected with more and more computer power and like the compute power is going to enable more and more intelligent software to be written using the sensor data that these devices are processing. And so like those three things together we’re calling the intelligent edge. And we’re entering this world where you’ll step into a room and like there are going to be dozens and dozens of computing devices in the room, and you’ll interface with them by voice and gesture and like a bunch of other sort of intangible factors where you won’t even be aware of them anymore. And so that implies a huge set of changes in the way that we write software. Like how do you build a user experience for these things? How do you deal with information security and data privacy in these environments? Just even programming these things is going to be fundamentally different. It’s a super exciting time. And it’s certainly something that we are investing very heavily in right now at Microsoft, in the particular sense of like, how do we take the best of our development tools, the best of our platform technology, the best of our AI, and the best of our cloud, to let people build these solutions where it’s not as hard as it is right now?

Host: Well, you know, everything you’ve said leads me into the question that I wanted to circle back on from the beginning of the interview, which is that the current focus on AI, machine learning, cloud computing, all of the things that are just like the hot core of Microsoft Research’s center – they have amazing potential to both benefit our society and also change the way we interact with things. Is there anything about what you’re seeing and what you’ve been describing that keeps you up at night? I mean, without putting too dark a cloud on it, what are your thoughts on that?

Kevin Scott: The number one thing is, I’m worried that we are actually underappreciating the positive benefit that some of these technologies can have, and are not investing as much as we could be, holistically, to make sure that they get into the hands of consumers in a way that benefits society more quickly.

Just to give you an example of what I mean, we have healthcare costs right now that are growing faster than our gross domestic product. And I think the only way, in the limit, that you bend the shape of that healthcare cost growth curve is through the intervention of some sort of technology. Week after week over the past 18 months, I’ve seen one AI-based technology after another where you combine medical data or personal sensor data with this new regime of deep neural networks, and you’re able to solve medical diagnostic problems at unbelievably low cost, detecting fairly serious conditions very early, when they are cheaper and easier to treat and when the benefit to the patient is greatest. So I see technology after technology in this vein that is really going to bring higher-quality medical care to everyone for cheaper and help us get ahead of these significant diseases that folks have.

And there’s a similar trend in precision agriculture, in terms of crop yields and minimizing environmental impacts, particularly in the developing world, where large portions of the world’s population are still trapped in a subsistence-agriculture dynamic. AI could fundamentally change the way that we’re all living our lives, all the way from all of us getting cheaper, better, locally grown organic produce with smaller environmental impact, to how a subsistence farmer in India dramatically increases their crop yield so that they can elevate the economic status of their entire family and community.

Host: So, as we wrap up, Kevin, what advice would you give to emerging researchers or budding technologists in our audience, as many of them are contemplating what they’re going to do next?

Kevin Scott: Well, I think congratulations are in order for most folks, because this is just about as good a time as there has ever been to pursue a career in computer science research, or to become an engineer. The advice that I would give is to look for ways to maximize the impact of what you’re doing. With research, it’s the same advice I would give to folks starting a company, or to engineers thinking about the next thing they should go off and build inside a company: find a trend that is a really fast growth driver, like the amount of available AI training compute, or the amount of data being produced by the world in general, or by some particular subcomponent of our digital world. Pick a growth driver like that and attempt something that is either buoyed by that growth driver or that is directly in the growth loop. I think those are the opportunities that tend to have the most headroom: even if there are lots of people working on a particular problem, it’s great if the space you’re working in, the problem itself, has a gigantic potential upside. Those problems will usually accommodate lots and lots of simultaneous activity without becoming a winner-takes-all or winner-takes-most dynamic. And they also tend to be the interesting problems. It’s thrilling to be on a rocket ship in general.

Host: Kevin Scott. Thanks for taking time out of your super busy life to chat with us.

Kevin Scott: You are very welcome. Thank you so much for having me on. It was a pleasure.

Host: To learn more about Kevin Scott, and Microsoft’s vision for the future of computing, visit microsoft.com/research.

ADA Anniversary: The Continued Importance of Inclusion

By Jenny Lay-Flurrie, Chief Accessibility Officer

On July 26 we will celebrate the 28th anniversary of the Americans with Disabilities Act (ADA). The ADA stands as one of the most important pieces of civil rights legislation, prohibiting discrimination and ensuring that people with disabilities have the same opportunities and rights as people without disabilities. It serves as a reminder both of where we have come from and of the work left to be done.

Since its inception, the ADA has helped break down barriers for people with disabilities in built environments, provision of government services, communications, and employment. Despite a lot of great progress, after nearly three decades there is still much to be done, not only to level the playing field, but also to recognize (and seek out!) talented people with disabilities whose skills and expertise we need in our companies. The unemployment rate for people with disabilities hasn’t materially shifted in that time and remains nearly double that of people without disabilities. We are one of the many employers with the power to influence that number, and we take that responsibility seriously. Here are three things we are doing to drive it:

Breaking Down Barriers Through Technology
It’s never been more important to have a diverse and inclusive workforce including people with disabilities. Put simply, it helps us create better products that empower people with disabilities. When accessibility is done well, it becomes invaluable to daily life, the workplace, and play. It’s ubiquitous and easy to use. These values guide us, and I urge you to check out the following:

  • Accessibility built in by design. There is a wealth of goodness built into the core of our products – from Windows to Office and Xbox – including Learning Tools, Dictate, Narrator, Translator, Color Blindness Filters, and more. We’ve created a simple one-stop shop with our Accessibility Feature Sway, which breaks down every feature by disability type, and we update it and our new website www.microsoft.com/accessibility as new features become available. Do check it out and share!
  • If in doubt, ask. Remember we have a dedicated support team for people with disabilities using Microsoft products or accessibility features. The Disability Answer Desk is there 24×7 via chat and phone, and, in the USA, a dedicated ASL video line. It’s now in 11 markets and ready to help you get going with your technology.
  • Your feedback is gold dust. We want to know what future you want and what technology you want to empower you. Tell us via our Accessibility UserVoice, the Disability Answer Desk, or tweet @MSFTEnable. Your feedback powers us.
  • The power of innovation. AI is opening doors for innovation for people with disabilities. Invaluable tools like Seeing AI, Microsoft Translator, and Helpicto are built on our vision, knowledge and speech Cognitive Services APIs, so we were excited earlier this year to announce the AI for Accessibility program to open up these technologies for you to create with. The application process is now open, and the first batch of grant applications is in review. Literally can’t wait to see what you come up with!

Creating Forums for Inclusion
It isn’t enough to just talk about inclusion, we need to partner together to drive impact. There are many events we host and attend where this happens, but two have highlighted the appetite for more:

  • Microsoft Ability Summit. For the first time ever, we opened the doors to this internal event to the public, and we were humbled by the results. Over the eight years since we started the Ability Summit, attendance has grown from just 80 people in that first year to 1,200 Microsoft employees and 1,200 external guests over the two days. At the event, we demonstrated the latest in accessible technologies, and attendees connected with the owners and drivers of those technologies. They also had the opportunity to engage with over 20 companies at an inclusive hiring job fair and heard from our very own CVP of Retail Stores, a panel of dignitaries and CEO Satya Nadella, who shared their thoughts on accessibility and disability. We were honored to include former Senator Tom Harkin, who introduced the ADA into the Senate back in 1989 and underscored the need to break down barriers to get people with disabilities into the workforce. It’s our hope that by opening the event up more broadly we can share knowledge and accelerate the process for all organizations to build their programs, hire amazing talent, and reduce the unemployment rate.

Creating a Region of Inclusion panel discussion at the 2018 Microsoft Ability Summit.

  • Disability:IN. Just last week in Las Vegas, 1,500 folks from over 160 corporate partners came together to discuss, share and take action on disability inclusion. Disability:IN (previously known as USBLN) is a corporate-focused NGO; Microsoft is a proud sponsor, and I’m honoured to be chair of the board of directors. The organization has grown in numbers and strength in recent years, and that speaks to the need, appetite and desire from so many companies to not only understand but drive the future of disability inclusion. During the event, over 130 rising leaders met with company leaders, and many walked away with jobs and internships. We celebrated those that achieved high scores on the Disability Equality Index (DEI), with many, including ourselves, achieving 100%. Technology was also a HOT topic, and we dedicated one of the opening plenaries to showing and sharing the latest in accessible, inclusive technology. I had a blast showing Office 365, PowerPoint, Translator, PowerPoint Designer, Auto Alt-Text, Seeing AI and the Xbox Adaptive Controller live on stage. It was clear from the room, the amazing speakers and the companies sharing their journeys that this is a priority across corporate America, and that how we partner together has never been more important.

Supporting Inclusion in Action
Perhaps one of the best examples of making inclusion real is the Special Olympics. This year Microsoft was proud to be the Presenting Sponsor of the 2018 Special Olympics USA Games here in Seattle. With the theme “Rise with Us,” athletes challenged Seattle to make the 2018 games the most inclusive Special Olympics to date, and honorary Chair Brad Smith set the tone, asking Seattle to create a legacy of inclusion that lasts long after the games finish. As part of the event, a job fair was held for athletes with 16 companies taking part, including Microsoft. With 4,000 athletes and more than 12,000 volunteers (including 2,000+ Microsoft employees!) participating, we are creating a legacy of inclusion in the region, a galvanizing force epitomized by local athlete Frannie Ronan, the youngest athlete in the games at just 8 years old, who inspired us all at the opening ceremony and walked out with two silvers, two bronzes and a very big smile.


Opening ceremonies of the 2018 Special Olympics USA Games in Seattle


Jenny Lay-Flurrie with Frannie Ronan at the 2018 Special Olympics USA Games in Seattle

In addition to celebrating the ADA, we recognize individuals and organizations all over the world that are developing disability rights policies and programs under the United Nations Convention on the Rights of Persons with Disabilities and helping their communities raise awareness of the importance of accessibility and the need for an inclusive culture. To make real progress, it will take collaboration across government, industry, employers and individuals with disabilities to realize the vision of the ADA and reduce the unemployment rate for people with disabilities everywhere.

In the meantime, do explore what technology can do for you through the power of accessibility, keep us grounded in what you want to see going forward, and get involved with the forums and incredible organisations that are going to power the future of disability inclusion.

Getting back to nature with AI: Why Microsoft and National Geographic Society are working together to advance conservation science with computer science

Photograph by Devlin Gandy/National Geographic

By Dr. Jonathan Baillie, chief scientist at National Geographic Society, and Dr. Lucas Joppa, chief environmental scientist at Microsoft

Yesterday, Microsoft and National Geographic Society announced a new, joint grant program that will equip explorers working on the most challenging environmental issues of the 21st century with the most advanced technologies available today. “AI for Earth Innovation” grants will fund new solutions that leverage AI and cloud technologies to monitor, model and manage Earth’s natural resources. Application forms are available today, here, to anyone working at the intersection of computer science and environmental science, especially in the areas of agriculture, biodiversity, climate change and water.

As scientists who have spent our entire careers focused on conservation, we’ve come to believe that increased adoption of technology, including AI, is critical to making the progress needed – and at the pace needed – to protect our planet. Whether producing the foundational estimates of how rapidly species are going extinct or determining the effectiveness of current conservation efforts, we have repeatedly seen that progress is slow or impossible without deploying scalable technology solutions.

There have been some notable success stories – including those we featured in a book we jointly published on the role of protected areas in conservation. But they are, frustratingly, the exception to the rule.

Now, in our roles as chief scientists at global organizations (for science and exploration, and for innovative technology, respectively), we hope to address the root cause of that frustration. That is the goal of this partnership, and why Microsoft and National Geographic Society are bringing together $1 million and access to our experts and technology.

While different, both organizations are focused on pushing the boundaries of science and exploration for the benefit of the planet. National Geographic is synonymous with science and exploration. For 130 years, the organization has opened minds to the natural world, challenged perceptions of what is possible and set the stage for future exploration. For more than 35 years, Microsoft, too, has explored and pushed forward the boundaries of what technology can do, and what it can do for people and the world.

Our organizations have a unique combination of expertise in conservation and computer science, capacity building and public engagement, providing incredible potential to drive fundamental change. We will work together to empower people everywhere to respond to some of the most challenging environmental issues of the 21st century.

We realize that to some, it may seem counterintuitive to try to protect the planet with technology. It’s true that past industrial revolutions and technology development have directly contributed to our current climate crisis, and we certainly recognize that technology is not a panacea. But we’re fundamentally optimistic, because over the course of human history, every solution to a major societal challenge has been the result of human ingenuity and new technologies. It’s been the combination of scientific exploration and technological advances that has fueled new discoveries and led to major breakthroughs in our understanding of the planet and life on Earth. It’s as true today as it was when National Geographic Explorer Bob Ballard discovered new forms of life at the bottom of the ocean using then-cutting-edge underwater remotely operated vehicle (ROV) technology.

Lately, innovation in technology has far outpaced anything imaginable before, but scientific knowledge isn’t keeping pace. We have often imagined a future where that is no longer the case, and our individual organizations have worked tirelessly to change this, too.

By partnering, we’re ready to move from imagining to enabling. With AI and the cloud, researchers can stay focused on new discoveries, rather than data collection and sorting. Their findings can more easily be shared with other researchers around the world, creating new economies of scale that accelerate and improve the state of conservation science in near-real time.

While there are only a handful of grants, the program is structured to provide exponential impact. By ensuring that all models supported through this grant follow an open source approach and are publicly available, we will allow environmental researchers and innovators around the globe to take advantage of these new innovations immediately and directly in their own vital work.

For the health of our planet and our future, we all need to get back to nature with the help of technology. Microsoft and National Geographic are ready to put our tools and skills to work for researchers working to make that more sustainable future a reality. Come join us!


CMS creates chief health informatics officer position

The Centers for Medicare and Medicaid Services created a chief health informatics officer position geared toward driving health IT strategy development and technology innovation for the agency.

According to the job description, the chief health informatics officer (CHIO) will be charged with developing “requirements and content for health-related information technology, with an initial focus on improving innovation and interoperability.”

The chief health informatics officer position will develop a health IT and information strategy for CMS and the U.S. Department of Health and Human Services, as well as provide subject-matter expertise for health IT information management and technology innovation policy.

Applying health informatics to IT

The position also entails working with providers and vendors to determine how CMS will apply health informatics methods to IT, as well as acting as a liaison between CMS and private industry to lead innovation, according to the job description.

A candidate must have at least one year of “qualifying specialized experience,” including experience using health informatics data to examine, analyze and develop policy and program operations in healthcare programs; offering guidance on program planning to senior management for an organization; and supervising subordinate staff.

Pamela Dixon, co-founder and managing partner of healthcare executive search firm SSi-SEARCH, based in Atlanta, said a chief health informatics officer must have all the skill sets of a chief medical information officer and more. Dixon said a CHIO must be a strategic systems thinker, with the ability to innovate, a strong communicator and a “true leader.”

“The role could and should unlock the key to moving technology initiatives through healthcare dramatically faster, dramatically more effective,” Dixon said.

Finding the right balance


Eric Poon, who has served as Duke University Health System’s chief health information officer for the last three and a half years, said a successful informatics professional enables individuals within an organization to achieve quality improvement and patient safety goals with technology. Poon oversees clinical systems and analytics teams and ensures data that’s been gathered can be used to support quality initiatives and research.

One of the most significant challenges Poon said he faces is balancing resources between the day-to-day and “what’s new,” along with making data accessible in a “high-quality way” so that faculty and researchers can easily use it to support their work in quality improvement and clinical research. Being successful means creating a bridge between technology and individuals within the organization, Poon said.

“I would like them to say that we are making it possible for them to push the envelope with regards to data science and research and data exchange,” Poon said. “I also like to think we will have innovators who are coming up with new apps, new data science, machine learning algorithms that are realigning how we engage patients and how we are really becoming smart about how to use IT to move the needle in quality and safety … and patient health in a cost-effective way.”

Emerging roles important for change

Dixon said new and emerging leadership roles are important because they make organizations think about both what they need or want the individual to accomplish and what the organization itself could accomplish with the right person.

“The actual title is less important,” she said. “There are CHIOs that might just as easily carry the title chief innovation officer or chief transformation officer or chief data officer, depending on their focus. The important thing is that we encourage and foster growth, value and innovation by creating roles that are aimed at doing just that.”

The creation of a chief health informatics officer position and the push to focus on health IT within CMS are part of a larger initiative started earlier this year, after the Trump administration announced MyHealthEData, which allows patients to take control of their healthcare data and allows CMS to follow them on their healthcare journey.

Johnathan Monroe, director of the CMS media relations group, said the organization will be accepting applications for the chief health informatics officer position until July 20.

Taking Pride in being an ally for the LGBT community

By Cindy Rose, Chief Executive of Microsoft UK

This year’s London Pride Festival will be held on Saturday, July 7. I see this annual event as a great opportunity to celebrate the diversity and inclusion that makes the capital – and this country – such a great place to live and work.

Hundreds of Microsoft staff will take part in London Pride, with thousands more joining similar celebrations across the UK, including Cambridge (August 11), Manchester (August 25) and Reading (September 1), and around the world. These events can be a beacon of hope, and I have loved reading about our employees in Scotland, North America, Brazil, Japan, Poland and elsewhere joining Pride events to embrace who they are as they do what they love.

This is why Pride, and being an LGBT ally, is important for everyone at Microsoft.

Many of those taking part are members of GLEAM, Microsoft’s lesbian, gay, bisexual and transgender employee resource group, which has a strong following globally. Microsoft has a long history of diversity and inclusion that continues to this day, and I believe it is one of our strongest assets. In 1993, our company was one of the first in the world to offer employee benefits to same-sex domestic partners, and last year we hosted our first LGBT leadership conference, in Ireland, featuring leaders from more than 20 countries.

This year I want to do more for our LGBT staff, partners and customers. I am delighted to announce that Microsoft Rewards users can now turn the points they earn into cash and donate it to Stonewall, an LGBT equality charity based in the UK.

Stonewall has been supporting the LGBT community for 29 years, working to transform institutions and change hearts, minds and laws so people can feel free to be themselves. I am proud that Microsoft is helping people support this cause to change lives for the better.

Find out more about Microsoft Rewards

To get involved, sign up for a Microsoft Rewards account and earn points by using the Bing search engine, completing online quizzes and buying certain products via the Microsoft Store. You will then be able to give these points to Stonewall in the form of cash.

Microsoft and Stonewall share a mission: to empower individuals. Whether it’s empowering people to achieve more or make change happen, the goal is the same – to help everyone be the best they can be.


Transformational leadership needed to pursue data-driven ethos

Brendan Aldrich, chief data officer at California State University, isn’t just a data governance or data quality leader. He also leads the university system’s business intelligence and data warehousing programs. And he’s a prime example of how the role of the chief data officer is changing.

According to Gartner’s most recent survey of high-ranking data professionals, 85% of respondents said they are defining their organization’s data and analytics strategy and tackling responsibilities such as data management and even data science.

Prior to being hired by CSU, Aldrich was the chief data officer at Ivy Tech, where he transformed how Indiana’s community college system accessed and interacted with data. There, he helped spearhead data-driven initiatives such as Project Early Success, which uses data to find students at risk of failing their courses early enough to intervene. Now as the chief data officer at the largest four-year education system in the country, he’s hoping to pursue a similar mission and develop data-driven initiatives that help students succeed.

A featured speaker at the Real Business Intelligence conference, Aldrich sat down with SearchCIO to talk about what transformational leadership is and how he’s found ways to bring about change.

Editor’s note: This interview was edited for brevity and clarity.

At the Real Business Intelligence conference, you’re talking about transformational leadership. Can you define what that phrase means and why we need that kind of leadership today?

Brendan Aldrich, CDO, California State University

Brendan Aldrich: Transformational leadership is about being a game changer – being that person who, rather than just do your job, is going to redesign your job, both for the benefit of your organization and, potentially, for your industry.

There are three things that I tend to say: We need to make data intuitive, relevant and interactive. As I’ve grown in my career (I’ve been in IT for some 20-odd years), I began to realize that, across all industries, we’re using data the same way today that we did 30 or 35 years ago, which seems silly because in so many other areas our technologies are advancing and evolving.

In my work, especially in the last six years in higher education, what I’ve been focused on is how we change the rules of the game. How can we start to leverage advances in technology to advance our capabilities around information? How do we do things better than what we’ve done for 35 years, and do them less expensively and with fewer people to build or support them?

A lot of that comes down to asking some initial questions: Why am I doing what I’m doing? Is there a better way to do this? How do we start to address some of these challenges with data that we’ve accepted for 30 years rather than solved?

There’s so much to unpack in that response, specifically this idea of transforming how companies use data. You’ve been a champion of making data available to the masses at Ivy Tech and now at CSU. Why is this so critical?

Aldrich: The three words I mentioned earlier — intuitive, relevant and interactive. If you can do those three things well, then you can open up data to thousands of employees across your organization for them to use intelligently and accurately to test an idea or prove out a theory, which could revolutionize the way your organization operates.

Your job description, which includes developing a big data strategy and building a data lake, is a pretty tall order. Where do you get started when taking on such an expansive role?

Aldrich: As a chief data officer, I often say that my job comes in two phases. Phase one is getting my arms around what exists and what we’re doing today. The tools I need to help the organization capitalize on data typically don’t exist when I first arrive. So the first phase of my job is figuring out what we have and what tools we need so that we can start to bring our data together to capitalize on it. That’s what I’m doing now.

What makes this such an interesting challenge here at CSU is that we have 23 independent campuses, so my approach has to not only ensure statewide consistency on a variety of metrics, but also support and enhance local campus diversity: the ability for individual campuses to iterate on this data, to create new measures and dimensions that maybe aren’t in use at the chancellor’s office but are critical to a campus.

Once we do that, and that’s usually the first year of the role, then I switch to becoming much more consultative. I’ll be spending time with different teams — the registrars, the advisers, the faculty members — and asking what they need.

At Ivy Tech, that’s where some of the initiatives such as Project Early Success came from. With 77% accuracy, Project Early Success predicted which students were likely to fail which courses and why in the first two weeks of the term using behavior-based models.
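Editor’s note: The interview does not describe how the Project Early Success model was actually built. Purely as an illustrative sketch, an early-warning model of this kind is often a simple classifier trained on early-term engagement signals; the features, data and library choices below (Python with scikit-learn, invented LMS activity counts) are hypothetical and are not Ivy Tech’s implementation.

    # Hypothetical sketch only: the real Project Early Success model is not
    # described in this interview. This shows the general shape of a
    # behavior-based early-warning classifier over invented features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Invented signals per student-course pair for weeks 1-2 of the term:
    # [LMS logins, assignments submitted, days since last activity]
    X = rng.poisson(lam=[8.0, 3.0, 4.0], size=(5000, 3)).astype(float)

    # Invented label (1 = failed the course), loosely tied to low engagement
    risk = 2.0 - 0.15 * X[:, 0] - 0.4 * X[:, 1] + 0.2 * X[:, 2]
    y = (rng.random(5000) < 1.0 / (1.0 + np.exp(-risk))).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)

    # Score held-out students and flag the riskiest for early outreach
    scores = model.predict_proba(X_test)[:, 1]
    print("held-out accuracy:", round(model.score(X_test, y_test), 2))
    print("students flagged for outreach:", int((scores > 0.5).sum()))

As the rest of the interview suggests, the model is only half the work; the other half is the outreach workflow it feeds, such as the thousands of week-three phone calls described below.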

Let’s dig into Project Early Success a little. How did it work and what did you learn?

Aldrich: Project Early Success utilized technology and data to find students who were just starting to struggle early enough that you could intervene with the right piece of advice and change that trajectory.

When we did Project Early Success at Ivy Tech, we captured a lot of information. The first term we did it, we had about 60,000 enrolled students, and we predicted 16,247 of them were likely to fail one or more classes.

During weeks three and four of the term, I helped to coordinate over 100 faculty, staff and administrators to make 20,053 phone calls to those students. We eventually reached 31.5% of them, and we captured information about those conversations so that we could study them later.

In 11 cases, we found the reason the student’s behavior changed was because their heat had been turned off. Now, when your heat is turned off, you don’t call your college. You might call your parents, you might call your friends, you might sell your television set, but you don’t call your school.

As it turned out, Ivy Tech had an emergency funds program to do things like help students get their heat turned back on so that they could focus on studying. And I think a lot of colleges in this country have a whole range of services that their students don’t know about. Using data well is sometimes just a matter of finding the students who need those programs and making that connection between the two.

I think for us here at CSU, as we get our data together and we start putting these platforms into place, that will be one of our first focuses: how we help ensure that our students are able to succeed.

What do you see as the biggest data challenge at CSU and how are you planning to tackle it?

Aldrich: This is not just CSU, it’s not even higher education, but I’d say the biggest problem facing companies today when it comes to data is addressing the cult of opinion. There are things that sound logical and reasonable, so we all believe them. When we started to do Project Early Success at Ivy Tech, one of the things we heard over and over again is that students don’t want us to call them, that we call them too much, that they hate hearing from us.

So what we did is, when we called students, every caller filled out a form about the call. One of the things we asked callers to note was whether the student seemed happy to hear from them. Overwhelmingly, responses to the calls were in the neutral to very positive range – almost 98% of the data captured.

In fact, that very first term we kept hearing over and over, ‘I can’t believe Ivy Tech called me. I mean, Ivy Tech is such a large organization, but yet they called me to see how my term started.’ That’s the kind of power that, when you’re utilizing data correctly, can make a difference.

Why healthcare APIs will save lives, money and time

Stan Huff, M.D., chief medical informatics officer at Intermountain Healthcare, made a strong case for medical platform interoperability, healthcare APIs and an open source approach to health IT at an Object Management Group conference in Boston on Monday.

IT departments in most industries have long been enthusiastic users of APIs – standard software building blocks that make development and interoperability easier. They’ve also participated in the open source movement, where code is shared, reused and improved upon for the common good.

In the health IT space, however, these concepts have been slower to gain traction, and Apple just recently became the first company to open up its Health Records API to developers. Apple’s API is based on the Fast Healthcare Interoperability Resources (FHIR) standard, which was created in 2014 and is arguably the most talked about of the healthcare APIs today.

But being talked about is a long way from being implemented, and that, Huff was quick to stress, is the major problem. “FHIR is really easy to implement,” he told a room full of physicians. “It’s had unprecedented support from EHR companies. But it’s young still. We have a vision, but we just need to get there.”
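To make the “easy to implement” point concrete, here is a minimal sketch of what reading data over a FHIR REST interface looks like. It follows standard FHIR R4 conventions (resource reads at [base]/Patient/[id], searches returning a Bundle), but the base URL and identifiers are placeholders, not a real endpoint, and the code simply uses Python’s requests library.

    # Illustrative sketch: reading and searching resources from a FHIR server.
    # The base URL and IDs below are placeholders, not a real endpoint.
    import requests

    FHIR_BASE = "https://fhir.example.org/r4"  # hypothetical FHIR R4 server

    def get_patient(patient_id):
        """Read one Patient resource: GET [base]/Patient/[id]."""
        resp = requests.get(
            f"{FHIR_BASE}/Patient/{patient_id}",
            headers={"Accept": "application/fhir+json"},
        )
        resp.raise_for_status()
        return resp.json()

    def get_a1c_observations(patient_id):
        """Search Observations for a patient by LOINC code 4548-4 (hemoglobin A1c)."""
        resp = requests.get(
            f"{FHIR_BASE}/Observation",
            params={"patient": patient_id, "code": "http://loinc.org|4548-4"},
            headers={"Accept": "application/fhir+json"},
        )
        resp.raise_for_status()
        bundle = resp.json()  # FHIR searches return a Bundle resource
        return [entry["resource"] for entry in bundle.get("entry", [])]

    if __name__ == "__main__":
        print(get_patient("example-123")["resourceType"])  # expected: "Patient"

As the rest of Huff’s talk suggests, the HTTP plumbing is the easy part; the harder questions are about what the resources contain and what the data means across systems.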

What healthcare APIs can jump-start

The vision is a world where any electronic health record system could communicate with any other system; data could be gathered and mined; and, ultimately, decision engines could be built that could improve patient care, cut costs, reduce medical errors and even help with doctor burnout, Huff said.

To make his point, he shared some stark data. Citing a Johns Hopkins study, Huff said approximately 250,000 people die each year due to medical error, making errors the third-leading cause of death in the U.S. That is five times the number of people who die in auto accidents, he said.

And then there is the issue of cost, because each EHR system at each hospital needs unique applications created for it. “That’s like saying we need 50 different versions of Yelp for each hospital,” he said. “Our architecture is wrong. It’s set up so that we can’t share what we’ve created. We’re paying an incredible price for software. Each useful app is created or re-created on each platform, and we pay for it.”

To be more specific, Huff said Intermountain, based in Murray, Utah, has developed 150 clinical decision support engines that offer best practices and advice on everything from diabetes to heart health.

“But that 150 really represents the low-hanging fruit,” he said. “We need 5,000 rules or modules, and there’s no scalable path to get there. And there’s no scalable path to pass that information on to community hospitals.”


One large hospital was able to develop 13 of these decision engines in six months, but they’re specific to the EHR in use and would be “cost-prohibitive” to share with other hospitals.

To get started with healthcare APIs and down the path of interoperability, Huff said IT professionals need to ask themselves three questions:

  • What data should be collected?
  • How should the data be modeled?
  • And what does the data mean?

Asking, and then answering, those questions will kick-start the interoperability journey, help determine the right healthcare APIs and eventually lead to sweeping changes in medicine.
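As one hedged illustration of those three questions, the sketch below models a single lab result the way FHIR’s Observation resource does: the LOINC coding is what answers “what does the data mean” in a form any conformant system can interpret. The patient reference, timestamp and value are invented for the example.

    # Illustrative only: one lab value modeled as a FHIR R4 Observation.
    # The LOINC coding gives the number a shared, machine-readable meaning;
    # the patient reference, timestamp and value below are invented.
    import json

    a1c_observation = {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "4548-4",
                "display": "Hemoglobin A1c/Hemoglobin.total in Blood",
            }]
        },
        "subject": {"reference": "Patient/example-123"},  # hypothetical patient
        "effectiveDateTime": "2018-06-01T09:30:00Z",
        "valueQuantity": {
            "value": 6.2,
            "unit": "%",
            "system": "http://unitsofmeasure.org",
            "code": "%",
        },
    }

    print(json.dumps(a1c_observation, indent=2))

In principle, a decision-support rule written against codes like these can run against any system that models its data the same way, which is the kind of scalability Huff describes.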

“If we do this right, we could save 100,000 lives a year,” he said. “We could go from being right 50% of the time to 80% of the time. And we could get new EHR systems for millions, rather than billions.”