
Microsoft expands commitment to military spouse community – Microsoft Military Affairs

Today in San Francisco, Microsoft Military Affairs will join our partners from LinkedIn to each share new commitments to the military spouse community.

Military spouses are an integral supporting force for members of our military, but they face a staggering 18 percent unemployment rate and 53 percent underemployment rate, driven by moves every two to three years, according to a 2016 study from Blue Star Families on the social cost of unemployment and underemployment among military spouses.

As part of our commitment to the military spouse community, Microsoft will launch a pilot program to provide spouses with technology skills training beginning in September.

Microsoft has successfully opened a technology career pipeline for transitioning service members and veterans via the Microsoft Software & Systems Academy (MSSA) program, which has expanded coast to coast and has a graduation rate of over 90 percent. We are excited to explore how to expand and tailor these opportunities to military spouses, who represent a diverse talent pool that is adaptable, resilient, highly educated, and ready to take on new opportunities to further their professional and personal goals.

The U.S. government projects that information technology occupations will grow 12 percent from 2014 to 2024, faster than the average for all occupations. With 500,000 technology jobs open annually, career programs are needed to help close the technology skills gap.

“Microsoft is excited to work with technology leaders and other organizations committed to supporting military spouses, and to find avenues that lead to meaningful career opportunities for active duty military spouses,” said U.S. Marine Corps Major General (Ret.) Chris Cortez, Vice President of Microsoft Military Affairs.

LinkedIn also announced today that it is expanding its military and veterans program to include military spouses through a new partnership with the U.S. Department of Defense’s Spouse Education and Career Opportunities program. Beginning this July, LinkedIn will provide one year of LinkedIn Premium to every military spouse during each of their moves to new installations to facilitate their career transitions, and once again upon conclusion of military service. This will include free access to LinkedIn’s online library of more than 12,000 LinkedIn Learning courses, including its newly launched learning path designed to help military spouses succeed in flexible, freelance or remote-work opportunities.

The Microsoft Military Affairs team is working closely with military spouses and nonprofit organizations to understand firsthand the unique challenges this community faces as we build out and learn from our pilot program.

We are thrilled to begin our pilot program in the fall and to continue our support of military spouses and their community by providing the skills they need to enter technology careers.

Wild Me joins AI for Earth | Stories

A new investment from Microsoft’s AI for Earth program will accelerate Wild Me, an organization that identifies and tracks individual animals using machine learning and computer vision.

REDMOND, Wash. — June 14, 2018 — On Thursday, Microsoft Corp. announced that Wild Me, a Portland-based nonprofit organization that focuses on combatting extinction with citizen science and artificial intelligence, will become a new featured project in its AI for Earth program. This deeper level of investment and engagement will enable Wild Me, and its wide range of users and supporters, to more effectively and efficiently use software and AI to combat extinction.

“The world is facing a major biodiversity crisis, and Wild Me’s work in harnessing computer vision and machine learning to monitor and track individual animals is truly groundbreaking,” said Bonnie Lei, AI for Earth project manager at Microsoft. “Microsoft hopes to accelerate Wild Me’s conservation impact by enabling wider usage of its open source algorithms through making them available on Microsoft Azure as APIs, and boosting the speed and accuracy of its entire Wildbook platform by migrating it over to Azure.”

Wildbook is an open source, cloud-based software platform — created by Wild Me in collaboration with faculty and students at Princeton University, Rensselaer Polytechnic Institute and the University of Illinois-Chicago — that brings together AI, computer vision, scientific research and citizen science to help protect endangered species. Using images uploaded by conservationists, researchers and citizen scientists, the software helps identify and track animal populations, monitor their migrations and interactions, and evaluate threats to inform and improve conservation efforts.

“Wildbook democratizes science and conservation,” said Tanya Berger-Wolf, director at Wild Me and professor at University of Illinois-Chicago. “The partnership with Microsoft will allow us to enable science and conservation at planetary scale and high resolution over time, space and individual animals.”

Wild Me will be the fifth AI for Earth featured project, joining land cover mapping, Project Premonition, FarmBeats and iNaturalist. With 111 grantees in 27 countries, AI for Earth puts Microsoft’s cloud and AI tools in the hands of those working to solve global environmental challenges. Through grants that provide access to cloud and AI tools, opportunities for education and training on AI, and investments in innovative, scalable solutions, AI for Earth works to advance sustainability across the globe.

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, rrt@we-worldwide.com

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

New Arista switches use Barefoot Tofino programmable chip

Arista has launched a family of switches that companies can program to perform tasks typically handled by network appliances and routers. The company claims the consolidation capabilities of the new 7170 series reduce costs and network complexity.

The programmability of the 7170 family stems from the Barefoot Networks Tofino packet processor found in the hardware. Engineers program the silicon using P4, an open source language.

Barefoot markets Tofino as an alternative to fixed-function application-specific integrated circuits. Large enterprises, cloud and communication service providers are typical users of the high-speed Barefoot Tofino chip, which processes packets at 6.5 Tbps.

Arista, which uses Broadcom and Cavium packet processors in other switches, wants to broaden the potential customer base for the Barefoot Tofino chip by coupling it with the vendor’s EOS network operating system for leaf-spine architectures. To make programming on Barefoot Tofino silicon easier, Arista provides packaged profiles that contain data plane and control plane features for specific applications. Network managers can customize the profiles using P4 and deploy them on EOS.

“We’ll have to see what sort of benefits customers derive from using the [7170] technology in real-world production environments,” said Brad Casemore, an analyst at IDC. “In theory, it certainly has the potential to handle some tasks typically addressed by routers and middleboxes.” 

Arista application profiles

Examples of the applications defined in the Arista profiles include network overlays and virtualization, which offload network functions such as traffic segmentation and tunnel encapsulation from virtual servers.

Other profiles provide network and application telemetry for flow-level visibility, configurable thresholds and alarms, timestamping and end-to-end latency. Arista also offers profiles supporting some firewall functionality and large-scale network address translation. NAT maps multiple private IP addresses onto a single public IP address, which conserves the public addresses an organization needs and adds a measure of security by hiding internal addressing.
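As an illustrative aside, the bookkeeping behind port-based NAT can be sketched in a few lines of Python. This is a hypothetical, minimal model (the class and method names are invented for illustration), not how the 7170 implements NAT in silicon:

```python
class NatTable:
    """Toy NAT: maps (private IP, port) flows onto ports of one public IP."""

    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.outbound = {}  # (private_ip, private_port) -> public_port
        self.inbound = {}   # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip, private_port):
        # Reuse an existing mapping, or allocate the next free public port.
        key = (private_ip, private_port)
        if key not in self.outbound:
            self.outbound[key] = self.next_port
            self.inbound[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.outbound[key])

    def translate_in(self, public_port):
        # Reverse lookup for return traffic; None if no flow exists.
        return self.inbound.get(public_port)
```

Two internal hosts then share one public address: flows from `10.0.0.5` and `10.0.0.6` both leave with the public IP and are distinguished only by the allocated port.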

“How readily those profiles are embraced and productively employed could determine the extent to which the 7170 successfully addresses the use cases Arista has identified,” Casemore said.

The 7170 series has two models. The first is a 1RU chassis that supports 32, 64 or 128 ports at 40/100 GbE, 50 GbE and 10/25 GbE, respectively. The second is a 2RU system that supports 64, 128 or 256 interfaces at 40/100 GbE, 50 GbE and 10/25 GbE, respectively. The hardware processes up to 12.8 Tbps.

Base pricing for a 64-port system is $1,200 per port.

In March, Arista introduced two 25/100 GbE switches for cloud providers, tier-one and tier-two service providers, high-tech companies and financial institutions ready to replace 40/100 GbE switches with more powerful systems.

Arista is targeting the two switches — the 7050X3 and the 7260X3 — at different use cases. The former is an enterprise or carrier top-of-rack switch, while the 7260X3 is for leaf-spine data center networks used in large cloud environments.

Neural Program Synthesis and the Quest to Democratize Programming with Dr. Rishabh Singh – Microsoft Research

Dr. Rishabh Singh – Researcher

Episode 10, January 31, 2018


Every day, computers take on more and more of our daily tasks. Fill in a few cells on your spreadsheet? It’ll fill in the rest. Ask your car for directions? It’ll get you there. We can program computers to do almost anything. But what about programming computers to… program computers? That’s a task that Dr. Rishabh Singh, and the team in the Cognition group at Microsoft Research, are tackling with Neural Program Synthesis, also known as artificial programming.

Today, Dr. Singh explains how deep neural networks are already training computers to do things like take classes and grade assignments, shares how programmers can perform complicated, high-level debugging through the delightfully named process of neural fuzzing, and lays out his vision to democratize computer programming in the brave new world of Software 2.0.

Transcript

Rishabh Singh: The idea is, instead of programmers specifying or writing a program step-by-step with every statement, instead of specifying the complete logic, what if we can allow programmers to specify their intent for what they want the program to do at a high level?

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting edge of technology research, and the scientists behind it. I’m your host, Gretchen Huizinga.

Every day, computers take on more and more of our daily tasks. Fill in a few cells on your spreadsheet? It’ll fill in the rest. Ask your car for directions? It’ll get you there. These days, we can program computers to do almost anything. But what about programming computers to… program computers? That’s a task that Dr. Rishabh Singh, and the team in the Cognition group at Microsoft Research, are tackling with neural program synthesis, also known as artificial programming. Today, Dr. Singh explains how deep neural networks are already training computers to do things like take classes and grade assignments, shares how programmers can perform complicated, high-level debugging through the delightfully named process of neural fuzzing, and lays out his vision to democratize computer programming in the brave new world of Software 2.0.

That and much more on today’s episode of the Microsoft Research Podcast.

Host: Rishabh Singh, welcome to the podcast.

Rishabh Singh: Thanks a lot, Gretchen, yes.

Host: It’s great to have you here.

Rishabh Singh: Great to be here.

Host: Yeah. So, your research resides under Research in Software Engineering or the RiSE group at MSR, but your specific area is the relatively new group, just called Cognition. Tell us what goes on under the Cognition umbrella.

Rishabh Singh: Yeah, I can briefly talk about that. So, about a couple of years back, we founded this group called Cognition, where the idea was: how do we build the next generation of AI systems that go towards general intelligence? The idea was to bring together people from deep learning, reinforcement learning and programming languages so that we can build systems that learn complex algorithmic tasks. For example, we have seen a lot of recent breakthroughs in deep learning where systems have achieved even superhuman performance on recognizing images, understanding text and speech. But one thing common to all these successes is that the networks are still learning relatively simple models, tasks where the networks learn patterns. To get towards more general intelligence, we need systems that can learn to perform a complex sequence of tasks. So when we started this group, we wanted to build the next generation of network architectures that can learn to generate programs. More broadly we call this area neural program synthesis: basically, training these networks to generate programs.

Host: Okay. So, let’s do a little bit of a level-set, because our audience is fairly sophisticated, as you know.

Rishabh Singh: Yeah.

Host: But I always like to do a layman’s version of what we’re going to spend a half hour talking about.

Rishabh Singh: Um-hum.

Host: And you’ve sort of alluded to it already, but give us a kind of definition of program synthesis, because that’s sort of the nugget of what you’re working on, and why we need it.

Rishabh Singh: Yeah, that’s a great point actually. So, the idea is, instead of programmers specifying or writing a program step-by-step with every statement, instead of specifying the complete logic, what if we can allow programmers to specify their intent for what they want the program to do at a high level? For example, maybe we can specify the intent using a few input/output examples or test cases, or maybe in natural language. And the goal of program synthesis is to take those under-specified specifications and generate programs that are consistent with them. Now, the reason we need program synthesis is, first, not everybody who uses these sophisticated software systems is a professional programmer. For them to get maximum leverage out of these systems, we can enable them to perform complicated tasks by having simpler mechanisms to specify their intent. One example is Microsoft Excel, where millions and millions of users use the system, but not all of them are programmers. They come from various backgrounds. If we can allow them to specify their intents using examples or natural language, then we can really democratize the notion of programming. So that’s one reason I’m quite excited about this. But even for more sophisticated users, let’s say professional programmers, there are many tasks that are quite tricky and complex. Instead of them having to reason about it all manually, we now have efficient search algorithms that can remove some of that burden and let the machine perform that kind of tricky reasoning, while programmers perform more high-level reasoning.

Host: You gave a talk that people can find on the MSR website, called Program Synthesis for the Masses…

Rishabh Singh: Right.

Host: And you talk about democratizing computer programming which sounds like what you just explained right there.

Rishabh Singh: Yes, yes.

Host: Is taking a very complex activity, programming a computer, to do something you want it to do.

Rishabh Singh: Right.

Host: But allowing it to be accessible to… you mentioned students, end-users and programmers…

Rishabh Singh: Right.

Host: Is there more to the masses? I mean, would it be useful for someone like me?

Rishabh Singh: Actually, the hope is yeah. The hope is, I think, there are newer and newer applications people are using program synthesis for. For example, there are now speech systems where somebody can specify their high-level intent using speech to these voice assistants. And then in the back end, the idea is to take that high-level specification, break it down into smaller sub-tasks and then compose them to perform the original task the user intended. So, I think some flavor of these techniques is now being used in everyday use cases. There are many classes of users, and I believe program synthesis techniques can help in pretty much any scenario.

Host: So, are you aiming for a time when I could just articulate what I’d like to do and then the computer’s resources that it can draw on, could actually take cues from me and execute?

Rishabh Singh: Yeah, exactly. That’s the vision that we want to get towards. I think in certain domains we can first target to do such things. For example, in Excel let’s say, a user can specify what they want to do either in natural language or speech, and then we can have the machine in the background search over the space of programs and find one that fits their intent. And then there can be many other domains where something like that could be possible.

Host: For my benefit, can you explain, when it says “search over the space of programs,” what kind of programs would it be looking for?

Rishabh Singh: Yeah, that actually goes back to the definition of program synthesis. The way we frame this problem is, it’s essentially a search problem where the goal is, we have some programming language – it could either be Turing-complete, specifying all possible programs – or it could be domain-specific, as I was saying, maybe for programs inside Excel or programs for web extraction. But the search problem essentially says that a programming language defines a space of programs that could be written in that language. And given a specification, the goal is to search over this large space of programs to find one program that satisfies the specification the user provided. So, it’s this giant search problem. And that’s why it’s one of the hardest problems as well, because the search space is humongous. In some cases it’s even infinite, because the language may allow recursion and arbitrary expansions. There’s a giant space, and a lot of research goes into how to make the search more tractable, or how to make the search process more efficient in different domains.
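To make the "synthesis as search" framing concrete, here is a toy enumerative synthesizer in Python. It is a sketch under strong simplifying assumptions (a tiny arithmetic expression language over one input, brute-force enumeration by depth), not any production technique described here; all names are invented for illustration:

```python
from itertools import product

def synthesize(examples, rounds=3):
    """examples: list of (input, output) ints; return a matching expression."""
    exprs = ["x", "0", "1", "2", "3"]          # depth-1 programs in the DSL
    for r in range(rounds):
        for e in exprs:                         # check every candidate so far
            if all(eval(e, {"x": i}) == o for i, o in examples):
                return e                        # first consistent program wins
        if r + 1 < rounds:                      # grow the space one level
            exprs += [f"({a} {op} {b})"
                      for a, b in product(exprs, repeat=2)
                      for op in ("+", "*")]
    return None                                 # nothing found within budget
```

Given examples such as (1, 3), (2, 5), (3, 7), the search eventually reaches an expression equivalent to 2x + 1. The exponential growth of `exprs` is exactly the intractability the interview discusses; real systems prune this space with learned guidance or constraint solving.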

Host: That sounds like you’ve got your work cut out for you and a lot of research opportunities over the next – I don’t know, years? Decades?

Rishabh Singh: Yeah. And that’s the exciting part, yeah. I think we’ve already had quite a few recent breakthroughs and certainly, recently, with more advances in hardware, in CPU speed and then also newer search methods like deep learning techniques and constraint solving, people are now trying to combine these newer techniques to make the search process even more efficient.

[music]

Host: You know, I read a lot of your papers before this podcast and… One of them was called Artificial Programming. That one resonates with me because that makes sense. I get what you are saying. It’s like artificial intelligence, artificial programming. But it is program synthesis, yes?

Rishabh Singh: Right, exactly. We were trying to think of a better way to describe this whole system. One more motivation for that framing was, there’s been a lot of previous work in program synthesis which has mostly been based on rule-based or more algorithmic ways to perform search, which has required lots and lots of human insight to make the search tractable. In this case, what we were trying to do was frame the learning problem in terms of learning abstractions using neural networks, which relates it to artificial intelligence. So that’s one thing. But, on top of that, there are several other benefits we get if we can learn programs. First, we can learn interpretable models. Since the network is producing a program, one can look at what it has produced, debug the learned program, and modify it if it doesn’t work the right way. At the least, it’s more interpretable than current networks that are basically learning a set of low-level weight matrices. And one more benefit of learning programs is that these models tend to be more generalizable. These programs tend to also work well on newer inputs or newer data sets that we have not seen in training.

Host: That was actually a question I had in trying to wrap my brain around this, because having talked to a couple of other researchers out here on programming languages specifically, there’s so many of them and they fall into different categories and they are each useful for a particular kind of task. And within program synthesis, how does it sort through the kinds of languages that might be useful for that particular program? I mean is that part of the search?

Rishabh Singh: Yeah, that’s actually an excellent point. Yeah. So, a lot of research actually also goes on into designing these languages. The task is actually we want to design, for any given domain, we want to come up with languages that are expressive because we want it to be able to express many, many tasks in that domain. But at the same time, we want to constrain it such that it is easier to learn or perform search in some sense. So, there’s always this trade-off between expressivity and conciseness of the language. And a lot of work actually goes into designing languages specifically for program synthesis.

Host: Yeah.

Rishabh Singh: And actually, similar things have happened in our work as well. We’ve been looking at various domains, designing various domain-specific languages for different program synthesis applications, and since we wanted to get towards a system that can learn programs in a very general-purpose language, we’ve been taking steps towards adding newer features, one at a time, to start from maybe a very small language, then adding few features and then getting towards a complete language, let’s say, like Python or R.

[music]

Host: You’ve done work in neural program synthesis for learning string transformations in a functional language, as in AutoFill and FlashFill for Excel. But you’ve got this recent work using a more challenging imperative language, Karel, in which you built a program that took an introductory computer science class and passed. I don’t think it was a grade you might have been happy with, but for a machine it was pretty good. Tell us that story.

Rishabh Singh: Yeah. So, FlashFill was a system we developed a few years back to enable Excel users to program string transformations using a few input/output examples, but this was mostly an algorithmic system where we came up with new, efficient ways to search over the space of programs in the domain-specific language we designed for FlashFill. And we were actually quite happy that we could build these systems that can learn programs in a language like FlashFill’s, which is mostly a functional language; it’s learning compositions of functional abstractions. Then we wanted to see what happens if we make the language a little bit more complex. What if we add a bit of control flow over these functions, like loops and conditionals? And interestingly enough, we found this language called Karel which is taught in Stanford’s introductory programming class. Students are given a set of input/output worlds where robots are in certain positions and there are markers and blocks. And they have to write programs in the Karel language to move the robot from the input world such that it ends up in the output world. That was one of the reasons we chose Karel: it has control flow, and it adds one more level of complexity over the FlashFill language. Secondly, as you mentioned, Gretchen, we also wanted to see how well it would do, not just in learning programs that we generate in the language synthetically, but also, can it solve problems in a class-test setting that students typically go through? And it was interesting that when we trained the system to perform program synthesis in this language, it was able to solve 7 out of 16 problems on the class test, which is still better than what we expected. But we are now improving the system to get even better, yeah.
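A drastically simplified, FlashFill-flavored sketch of learning string transformations from examples: here a program is just a concatenation of fixed input slices and constant characters, found by backtracking search. Real FlashFill uses a far richer position logic and ranking; every name below is hypothetical:

```python
def synth(examples):
    """Find parts (('slice', i, j) or ('const', ch)) whose concatenation
    maps every example input to its output; None if the search fails."""
    if all(out == "" for _, out in examples):
        return []                              # nothing left to produce
    n = min(len(inp) for inp, _ in examples)
    # Try every input slice that is a prefix of every remaining output.
    for i in range(n):
        for j in range(i + 1, n + 1):
            if all(out.startswith(inp[i:j]) for inp, out in examples):
                rest = synth([(inp, out[j - i:]) for inp, out in examples])
                if rest is not None:
                    return [("slice", i, j)] + rest
    # Fall back to a one-character constant shared by all outputs.
    first_out = examples[0][1]
    if first_out:
        ch = first_out[0]
        if all(out.startswith(ch) for _, out in examples):
            rest = synth([(inp, out[1:]) for inp, out in examples])
            if rest is not None:
                return [("const", ch)] + rest
    return None

def run(program, inp):
    # Execute a synthesized program on an input string.
    return "".join(inp[p[1]:p[2]] if p[0] == "slice" else p[1]
                   for p in program)
```

For example, from the single pair ("John Smith", "Smith, John") the search recovers a program built from two input slices and the constants ", " in between, and `run` replays it on the input.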

Host: So, it needs to get better grades in order to, you know, get in the parade.

Rishabh Singh: Yeah. So, the hope is actually, keep improving the system and then we can see, yeah, how well it performs compared to yeah, human programmers that are also learning to program from scratch.

Host: Which is interesting because you are like working side-by-side with beginning programmers and this program that’s a beginning programmer.

Rishabh Singh: Exactly, right, right. So, the hope is that these systems also start somewhat from scratch, and maybe in the beginning they are not so good at learning programs, but over time, with experience, they get better and better. And that’s also one of the things we want to look at in the future. Right now we’re training these systems individually for every domain: for FlashFill we were training a separate system, for Karel we are training a separate system. Is there a way we can leverage knowledge across domains? Can we build a system that learns from experiences not just in one domain, but is able to transfer the knowledge to other domains as well?

Host: So, doesn’t that kind of lead into the work you are doing in inductive programming?

Rishabh Singh: Basically, this leads towards the notion of general intelligence, that humans, we are able to learn something in one context and then use it later on. We don’t have to learn everything from scratch. In the same way, right, if we can build a system that can evolve over time and continuously keeps adapting itself and improving itself from experiences, that will lead towards more general intelligence.

Host: Yeah. So, I guess where I was heading with the question is like you talk about these very specific tasks with a specific language within Excel or within Karel or so on. And the goal eventually would be that these systems could work among the different kinds of things and make the same kinds of gains that humans do with their general intelligence.

Rishabh Singh: Yeah, absolutely, right. And also, similar to humans, we are able to write really complex and large programs in general-purpose languages. So, if we are able to learn across domains, hopefully we can also increase the complexity of the domain-specific language in which we are learning, and hopefully at some point, it becomes almost a general-purpose language.

Host: So, I used to teach English, and I know what it’s like to stare at a stack of essays, overwhelmed by the task of having to grade them and make meaningful contributions, and I often remember thinking, “I’m only ever going to be asymptotically approaching zero. I’m never going to get this stack done.” Tell me the story about Auto Grader and our shared experience, our shared pain and what you did about it.

Rishabh Singh: Actually, this was something that started when I was a teaching assistant for the introductory programming class at MIT. And actually, we used to go through the same process. We had a few hundred students in the class, and they would submit their programming assignments, and quite often actually this would happen, they would make small mistakes in their programs, and if you just give them traditional feedback that says either your program works or it doesn’t work, first of all, there might be just some small mistakes and we are penalizing students for that. And secondly, there’s no meaningful feedback to students as well about how they can improve, if you just say pass or fail.

Host: Exactly.

Rishabh Singh: So, that’s why, when we were TAs, we used to go through each program manually, try to understand what the problem might be, and give some more detailed feedback on what might be going wrong and how they could think about improving their solutions. Around the same time, edX was starting out and we decided to also offer this introductory programming class on edX. And the scale there was completely different from the classroom setting. We had maybe 200, 300 students in the class, whereas on edX, I think the very first offering had more than 100,000 students sign up, and there was no way we could manually go through all the submissions and give meaningful feedback. That’s when the notion of an auto grader, of giving automated feedback to students, came up. The idea was, specifically in programming courses, we thought program synthesis might help in this setting. Let’s say a student submits a solution for a given problem. Now, there are many, many different ways to solve the same problem. There are many different algorithms. And even for a given algorithm, there are many different programmatic choices one can make, because the languages are very expressive. So, we cannot just come up with a solution that does purely syntactic reasoning, because that would be too much work, and it’s not even possible to anticipate all possible ways students are going to solve the problem. But that’s where we can leverage program synthesis. The way we framed this problem was: given an incorrect submission, can we find the minimum number of changes to the student’s submission such that it becomes functionally equivalent to the teacher’s solution? And one thing this particular system needed was for a teacher to come in and define a set of common mistakes students make.
So there was a little bit of teacher effort, but it wasn’t too much. And the hope was that once that was there, it could be reused across multiple years and similar assignments. So, that was something we were doing when I was at MIT doing my PhD. Afterwards, when I came to Microsoft Research, we have the Learning Experiences team here that also offers many interesting courses on the edX platform, and it was lovely to talk to them. We’ve been collaborating with them to build these systems to help students taking those courses. One interesting thing we’ve recently done is to reduce the amount of effort teachers need to put in. Instead of teachers having to specify the space of edits or changes, we leverage all the previous student submissions. Since the scale is quite big, we can automatically try to learn the space of changes from the data itself. So, our current auto grading system basically takes a set of previous submissions, a few thousand, automatically learns the space of changes, and then uses program synthesis techniques to find changes to new student submissions.
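The "minimum number of changes" framing can be sketched as a brute-force search in Python. The rewrite rules below stand in for the teacher-specified (or learned) space of common mistakes; everything here is hypothetical and far simpler than the real system, with submissions reduced to tokenized expressions checked against test cases rather than full functional equivalence:

```python
from itertools import combinations, product

# Hypothetical rewrite rules standing in for common student mistakes.
RULES = {"<": ["<="], "<=": ["<"], "+": ["-"], "-": ["+"], "0": ["1"], "1": ["0"]}

def passes(tokens, tests):
    # A "submission" here is a tokenized expression over the variable n.
    expr = " ".join(tokens)
    return all(eval(expr, {"n": n}) == expected for n, expected in tests)

def min_fix(tokens, tests, max_edits=2):
    """Return (k, fixed_tokens) using the fewest rule applications k."""
    positions = [i for i, t in enumerate(tokens) if t in RULES]
    for k in range(max_edits + 1):             # fewest edits first
        for locs in combinations(positions, k):
            for repls in product(*(RULES[tokens[i]] for i in locs)):
                fixed = list(tokens)
                for i, r in zip(locs, repls):
                    fixed[i] = r
                if passes(fixed, tests):
                    return k, fixed
    return None
```

For a student who wrote `n * (n - 1) // 2` for the sum 1..n, a single `-` to `+` rewrite makes the submission pass, and that one-token fix is exactly the feedback an auto grader could surface.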

Host: So, is this actually in use in a lot of venues, educational venues?

Rishabh Singh: Yeah, we just deployed one version of it a few months back for the C# programming course here at Microsoft. Currently we are still doing studies about what level of feedback is actually most helpful to students, because we can either give a bit of a hint, saying, “There is something wrong with your program at this place,” or we can go all the way and say, “Here is the mistake and this is how you want to change it.” So, right now we’ve been taking initial feedback and trying to see what effect different kinds of feedback have on students, and what is the right time to provide which kinds of hints.

[music]

Host: Let’s talk a little bit about neural networks, and how, kind of, this neural deep learning is representing what some people are calling a fundamental shift in how we write software. Some have referred to it as software 2.0.

Rishabh Singh: Right.

Host: Tell me about that.

Rishabh Singh: Yeah, this is quite fascinating. I think we are changing at a really rapid pace now. As AI is becoming more and more prevalent, and we are using newer and newer machine-learning systems as a core part of our software, not all software is written by humans, or programmers, specifically. There are all these components that are typically learned from different data sets. So, it changes the way we think about how software is going to be developed in the future. Since many parts of it are going to come from these learned systems, we need ways to effectively compose pieces of code written by programmers with pieces of code that come as a black box from outside, from a learned model. And at the same time, it also introduces many challenges. First is interpretability: if you are just using black-box models, it’s hard to really give guarantees about what exactly is happening. These are going to be stochastic models, so we will have to write code that somehow understands that variability and tries to ensure that even if a model is off by some amount, the software will still be making reasonable decisions.

Host: That’s a hard task in itself.

Rishabh Singh: Yeah, that’s a great challenge. The way I think about software 2.0 going forward is, we are going to have a shift in how programmers think about programming, in the sense that they don’t have to specify each and every step of the computation imperatively. They can provide the intent at a higher level, using things like test cases or maybe a partial program. They can write a piece of code but keep some parts of it empty, or maybe specify in natural language what they want a particular function to do. And then we’re going to have these synthesizers, or smart compilers, that take this ambiguous, under-specified intent and convert it into code for us, or into something we can execute. It’s still unclear what exactly that future is going to be.
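A toy version of the “intent as test cases” idea can be sketched with a bottom-up enumerative synthesizer. This is a deliberately tiny illustration, not how production synthesizers work: it brute-forces small arithmetic expressions over `x` until one is consistent with every input/output example the user supplies, and the operator set and constants are arbitrary choices for the demo.

```python
import itertools
import operator

OPS = [("+", operator.add), ("-", operator.sub), ("*", operator.mul)]
CONSTS = [1, 2, 3]

def synthesize(examples, max_depth=2):
    """Bottom-up enumeration of arithmetic expressions over x until one
    is consistent with every (input, output) example."""
    # Base terms: the variable x and a few integer constants.
    terms = [("x", lambda x: x)]
    terms += [(str(c), lambda x, c=c: c) for c in CONSTS]
    for _ in range(max_depth):
        combined = []
        # Combine every pair of known terms with every operator.
        for (da, fa), (db, fb) in itertools.product(terms, repeat=2):
            for name, op in OPS:
                desc = f"({da} {name} {db})"
                fn = lambda x, fa=fa, fb=fb, op=op: op(fa(x), fb(x))
                combined.append((desc, fn))
        terms = terms + combined
        for desc, fn in terms:
            if all(fn(i) == o for i, o in examples):
                return desc, fn
    return None, None

# The "intent" is just three examples of f(x) = 2x + 1.
expr, f = synthesize([(0, 1), (1, 3), (4, 9)])
```

Here the user never writes the function; the examples are the entire specification, and the search returns both a readable expression string and an executable function consistent with them.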

Host: Sure.

Rishabh Singh: And hopefully at some point we’ll converge to one common framework where we can have this notion of multi-modal specification, where we allow all different ways to specify the intent and have these smart compilers or synthesis systems that can understand them and generate code that satisfies the intent.

Host: So, the sensational way of describing Software 2.0 is that, at some point, humans will stop writing code and the machines will take over. But what I’m hearing is that this complementarity of humans and machines is going to remain and that it’s never going to be, you know, this is going to replace Software 1.0…

Rishabh Singh: Yeah, that’s actually a great way to put it, Gretchen. Basically, it’s not the case that the goal is to replace programmers; the goal is to empower programmers to achieve even more. And even here, programmers still need to provide the specification, or intent, for what the synthesis system needs to perform. Look at the way we have evolved over time: in the beginning we had these low-level assembly languages, where somebody would go in and write all these low-level instructions, move data from one register to another. Then we evolved into high-level languages, where the idea was, instead of writing assembly code, we’ll have compilers that go from high-level languages to assembly, and programmers can write their programs at a higher level of abstraction. Now we’ll have even smarter compilers that can take those kinds of ambiguous, under-specified intents and still be able to generate code that is consistent with them. Yeah, we’re not going to replace programmers, but just make them even more – basically make everybody a super programmer, right?

Host: So, along the spectrum of ability, you are going to make professional programmers be able to do better at what they do, and programmers that aren’t even programmers, be able to program within specific parameters.

Rishabh Singh: Yeah, exactly. There’s this whole spectrum. We can make developers even more productive. But there’s also this large class of people who are not necessarily developers or programmers, and we can enable them to perform programmatic tasks in various domains without having to know how to program – to specify what they want the system to do at a high level of abstraction and let the machine generate the code for them.

Host: That’s really exciting. I mean, that’s enabling access to what’s usually been reserved for a very small subset of the population on the planet.

Rishabh Singh: Yeah, and that’s something I’m actually most excited about, where we can build systems which can enable people from all different professions to use computing and achieve these tasks that require programming of some sort.

Host: Yeah, yeah. So, let’s talk about fuzzing for a minute. Regular fuzzing versus neural fuzzing. Will you unpack that for us?

Rishabh Singh: Oh, sure. Let me start by – so fuzzing is this process which has had quite remarkable success in almost every software company. The idea is, we want to write code that is secure, that doesn’t have any bugs, and fuzzing automates some of that testing process. So, instead of a human or a tester writing these test programs, we have a process that can generate random inputs. It can take an input, mutate it randomly again and again, and check if any one of those inputs crashes the system. Then we know there is something wrong with the code, and we also have an input that triggered the bug. It has been quite successful in various companies, and it has found many, many safety-critical bugs and security bugs. This works quite well for binary formats, where the inputs are, let’s say, an image, and I can perturb pixels and check if any of the results crashes the system or leads to some vulnerability. But when we have more structured input formats – let’s say a PDF file or a Word file, a .doc format – the problem is, if we make random changes to those files, very quickly they become invalid files, and the parser would just say, “Oh, this is an invalid PDF file.” So, one of the challenges is, how do we do fuzzing for these more structured formats such that we can still generate new random inputs, but inputs that still adhere to the grammar of the format? That’s where we were using some of these neural synthesis techniques to learn grammars. And it turns out, over time, the system is able to automatically learn some notion of the grammar of these formats that makes the fuzzing process much more efficient – it only generates good-looking inputs. And we also found, when we were trying out these new systems augmented with neural grammar-learning techniques, that they are able to generate inputs that cover more parts of the code.
And they also find many more crashes than the current state-of-the-art systems. So that was also a really nice application of learning high-level representations of different formats. It’s interesting that with recent advances in these learning techniques, the systems are becoming more efficient, so now we are seeing more and more use of these learning techniques for fuzzing.
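The contrast between plain mutation fuzzing and grammar-aware generation can be illustrated on a small structured format. JSON stands in for the PDF/.doc case here, and the hand-written `gen_value` grammar stands in for what a neural fuzzer would learn from a corpus of valid files; the counts are just a demonstration, not a benchmark.

```python
import json
import random

def mutate(seed, rng):
    """Plain mutation fuzzing: overwrite one random byte of a seed input."""
    data = bytearray(seed, "utf8")
    pos = rng.randrange(len(data))
    data[pos] = rng.randrange(256)
    return data.decode("utf8", errors="replace")

def gen_value(rng, depth=0):
    """Grammar-based generation: sample a value that is valid JSON by
    construction. A neural fuzzer would learn this structure from data."""
    if depth > 2 or rng.random() < 0.4:
        return rng.choice([0, 1.5, True, None, "fuzz"])
    if rng.random() < 0.5:
        return [gen_value(rng, depth + 1) for _ in range(rng.randrange(3))]
    return {f"k{i}": gen_value(rng, depth + 1) for i in range(rng.randrange(3))}

def parses(text):
    """Does the input survive the format's parser?"""
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

rng = random.Random(0)
seed = '{"name": "test", "items": [1, 2, 3]}'

mutants = [mutate(seed, rng) for _ in range(200)]
grammar_inputs = [json.dumps(gen_value(rng)) for _ in range(200)]

mutant_valid = sum(parses(t) for t in mutants)
grammar_valid = sum(parses(t) for t in grammar_inputs)
```

Counting how many generated inputs a strict parser accepts shows the gap: every grammar-generated input is valid, while random byte flips mostly produce files the parser rejects before any deeper code paths are exercised.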

[music]

Host: Talk a little bit about your path to Microsoft Research from your beginnings in India. What got you into this and how did you end up here of all places?

Rishabh Singh: Oh, that’s – that’s quite deep. Yeah, so, growing up, I was in India, and I did my undergraduate degree at IIT Kharagpur in computer science. I really liked mathematics, but at the same time, this notion of being able to automate things by writing programs was quite fascinating, and I joined MIT after finishing my undergraduate degree to do research in software verification. That was my starting point. But then, slowly, I got more interested in program synthesis: instead of writing code and then doing verification, what if we write code that is correct by construction? After graduating, I was considering academic positions and Microsoft Research. And I really liked the atmosphere here – we have great sets of researchers – but also, I think MSR has some unique advantages compared to academic places, which I really liked. So I decided to be here, and it’s been almost three and a half, four years now. It’s been quite a lot of fun.

Host: Yeah. So, as we think of your future, what excites you right now? What are you working on? I know your, sort of, nugget is program synthesis, but is there any new thread coming off of that that’s got you really excited?

Rishabh Singh: Yeah, that’s something I found quite unique about Microsoft Research compared to other places. When we formed this group, the Cognition group, we were able to bring in researchers from many different backgrounds to come together and work on this larger vision of doing neural program synthesis. For a long time, we were using more algorithmic, more rule-based systems to perform search. Whereas, I think the fascinating thing has been, in the last few years, we’ve been building these architectures and learning search algorithms from the data itself. That’s something I’m quite excited about. And also this notion of combining these neural approaches with more symbolic approaches – this combination of symbolic AI and neural AI is something quite exciting, and that’s something we in the group are hoping to explore even more going forward.

Host: Yeah, yeah. If you had to say, at the end of your career, “This is what I would define success as, in my field,” what would be the definition of success?

Rishabh Singh: So, I think one of my dreams… I don’t know when we will ever get there, or if we will ever get there, but… right now we are able to synthesize small snippets, a few lines. If we can build a system that can actually assist developers in writing sizable amounts of code from high-level specifications, or high-level intent, that would be quite magical. That’s one of my dreams – if at maybe the end of my career I can see that happening, that would be super fantastic, yeah.

Host: Rishabh Singh, it’s been great having you.

Rishabh Singh: Thanks a lot, Gretchen, for having me, yeah. It was great to talk to you.

To learn more about Dr. Rishabh Singh’s work, and the fascinating advances in machine learning and neural program synthesis, visit Microsoft.com/research.

SAP offers extra help on HR cloud migrations

SAP recently launched a program that offers services and tools to help with an HR cloud migration. The intent is to help HR managers make a business case and to ease some of the initial integration steps.

SAP has seen rapid growth of its SuccessFactors cloud human capital management platform. But the firm has some 14,000 users of its older on-premises HCM suite, mostly in Europe, who have not fully migrated. Some are in a hybrid model and have been using parts of SuccessFactors.

Customers may feel “a lot of trepidation” over the initial HR cloud migration steps, said Stephen Spears, chief revenue officer at SAP. He said SAP is trying to prove with its new Upgrade2Success program “that it’s not difficult to go from their existing HR, on-premises environment to the cloud.”

The problems that stand in the way of an HR cloud migration may be complicated, especially in Europe.

HR investment remains strong

The time may be right for SAP to accelerate its cloud adoption efforts. HR spending remains strong, said analysts, and users are shifting work to HR cloud platforms.

IDC said HCM applications are forecast to generate just over $15 billion in revenues globally this year, up 8% over 2017. This does not include payroll, just HCM applications, which address core HR functions such as personnel records, benefits administration and workforce management.

The estimated 2018 growth rate is a bit below prior year-over-year growth, which was 9% to 10%, “but still quite strong versus other back office application areas,” said Lisa Rowan, an IDC analyst. Growth is being driven in part by strong interest in replacing older on-premises core HR systems with SaaS-based systems, she said.

Cloud adoption for HR is strong in U.S.

In terms of organizational spending priorities, HR is “right down the middle” of the 14 technologies tracked by Computer Economics, a research and consulting firm, said David Wagner, its vice president of research. The firm surveyed 220 companies ranging from $50 million to multibillion-dollar firms.

“Investment is higher in everything this year,” Wagner said, but IT operational budgets are not going up very fast and the reason is the cloud transition. Organizations are converting legacy systems to cloud systems and investing the savings back into the IT budget. “They’re converting to the cloud as fast as is reasonable in organizations right now,” he said.

“If I were a cloud HR systems provider, I would be very excited for the future, at least in North America,” Wagner said.

Cloud adoption different story in Europe

But Europe, where SAP has about 80% of its on-premises users, may be a different story.

Wagner, speaking generally and not specific to SAP, said the problem with cloud adoption in Europe is that there are much more stringent compliance rules around data in the cloud. There’s a lot of concern about data crossing borders and where it’s stored, and how it’s stored and encrypted. “Cloud adoption in general in Europe is behind North America because of those rules,” he said.

SAP’s new cloud adoption program brings together services and updated tools that help customers make a business case, demonstrate ROI and handle data integration. It takes on some of the work that a systems integrator might do.

Charles King, an analyst at Pund-IT, said SAP is aiming to reduce the risk and uncertainties involved in a sizable project. 

“That’s a wise move since cost, risk and uncertainty is the unholy trinity of bugaboos that plague organizations contemplating such substantial changes,” King said.

The December release of SQL Operations Studio is now available

This post is authored by Alan Yu, Program Manager, SQL Server.

We are excited to announce the December release of SQL Operations Studio is now available.

Download SQL Operations Studio and review the Release Notes to get started.

SQL Operations Studio is a data management tool that enables you to work with SQL Server, Azure SQL DB and SQL DW from Windows, macOS and Linux. To learn more, visit our GitHub.

SQL Operations Studio was announced for Public Preview on November 15th at Connect(), and this December release is the first major update since the announcement.

The December release includes several major repo updates and feature releases, including:

  • Migrating SQL Ops Studio Engineering to public GitHub repo
  • Azure Integration with Create Firewall Rule
  • Windows Setup and Linux DEB/RPM installation packages
  • Manage Dashboard visual layout editor
  • “Run Current Query with Actual Plan” command

For complete updates, refer to the Release Notes.

Migrating SQL Ops Studio Engineering to public GitHub repo

To provide better transparency to the SQL Operations Studio community, we have decided to migrate the internal GitHub branch to the public repo. This means any bug fixes, feature developments, or even test builds can be publicly viewed before an update is officially announced.

We made this move because we want to collaborate with the community to continually deliver features that our users want. This gives you the opportunity to see our changes in action to address your top voted issues. Visit our GitHub page and give us your feedback.

Azure Integration with Create Firewall Rule

Now let’s get into new features. A common issue when connecting to Azure SQL DB instances is that the connection can fail due to server firewall rules. This would require loading Azure Portal to configure firewall rules so that you can connect to your database, which can be inconvenient.

To speed up this process, we have enabled Azure Integration with Create Firewall Rule dialog. When your connection to an Azure SQL DB instance fails because of firewall settings, this dialog will appear, allowing the user to use their Azure subscription account to automatically configure the client IP address with the server. This retains the same experience as configuration on Azure Portal, except you can do it all through SQL Operations Studio.

Windows Setup installation and Linux DEB/RPM installation packages

We are always looking for new ways to improve the installation experience. With the December release, we have added a Windows Setup wizard to simplify installation on Windows. The wizard lets you:

  • Select the installation location
  • Select the Start Menu folder
  • Optionally add SQL Operations Studio to your PATH

In addition to Windows Setup, we have also added Linux DEB/RPM installation packages. These will add new ways for Linux users to download SQL Operations Studio for their choice of installation.

Feel free to try out these new installation experiences on our download page.

Manage Dashboard visual layout editor

In the initial release, there were not many options to customize the visual layout of the dashboards. With the December release, you can now resize and move your widgets by enabling the visual layout editor mode by clicking the pencil on the top right of the Manage Dashboard screen. This gives users greater control of their dashboard in addition to building their own custom insight widgets.

Run Current Query with Actual Plan command

Another new feature we have enabled is Run Current Query with Actual Plan, a command that executes the current query and returns the actual execution plan with the query results. This feature area is still in progress as we work through the best UX for integrating the command directly into the query editor. While that design work continues, the functionality is available via the Command Palette, and you can define a keyboard shortcut if you use the feature frequently.

Contact us

If you have any feature requests or issues, please submit to our GitHub issues page. For any questions, feel free to comment below or tweet us @sqlopsstudio.