Tag Archives: Cloud

Announcing Microsoft Build Tour 2017

Figure 1 Sign-up at http://buildtour.microsoft.com

On the heels of the Build conferences these last few years, we have had the pleasure of meeting thousands of developers around the world. Their feedback and technical insight have helped us continue the tradition and explore topics in greater technical depth.

Today, I’m excited to announce the Microsoft Build Tour 2017, coming to cities around the globe this June! The Microsoft Build Tour is an excellent way to experience Microsoft Build news first-hand, and also work directly with Microsoft teams from Redmond and your local area.

This year, we’re bringing the Microsoft Build Tour to these locations:

Dates | City

June 5-6 | Shanghai, China
June 8-9 | Beijing, China
June 12 | Munich, Germany *
June 13-14 | Seoul, Republic of Korea
June 14-15 | Helsinki, Finland
June 19-20 | Warsaw, Poland
June 21-22 | Hyderabad, India
June 29-30 | Sydney, Australia

The Microsoft Build Tour is for all developers using the Microsoft platform and tools. We will cover a breadth of topics across Windows, Cloud, AI, and cross-platform development. We will look at the latest news around .NET, web apps, the Universal Windows Platform, Win32 apps, Mixed Reality, Visual Studio, Xamarin, Microsoft Azure, Cognitive Services, and much more.

We also want to go deeper into code, so this year we’re bringing the tour as a two-day* event. You can decide to attend just the sessions on the first day, or sign up for a deep (and fun!) hands-on experience on the second day.

  • Day 1: Full day of fast-paced, demo-driven sessions, focusing primarily on new technology that you can start using immediately in your projects, with a bit of forward-looking awesomeness for inspiration.
  • Day 2: Full-day hackathon where you’ll use the latest technologies to build a fun client, cloud and mobile solution that meets the requirements of a business case given at the beginning of the day. Seating is limited for Day 2, so be sure to register soon!

In most locations, on both days, we’ll also have a Mixed Reality experience booth where you’ll be able to sign up for scheduled hands-on time with Microsoft HoloLens and our latest Windows Mixed Reality devices.

To learn more and register, visit http://buildtour.microsoft.com. Can’t make it in person? Select cities will be live-streamed regionally to give you a chance to view the sessions and participate in the Q&A.

We can’t wait to see you on the tour!

*Munich is a single day, session-focused event.

Read More

Introducing Microsoft’s new Ad Monetization Platform

Earlier this week, we kicked off our annual developer conference, Microsoft Build, in front of over 5,000 developers at the Washington State Convention Center, with viewers around the world watching the event live. During the keynote and other live sessions, we made several announcements featuring tools and services that help PC, tablet and web developers monetize their apps and acquire users through Microsoft’s new ad monetization platform.

Microsoft’s new ad monetization platform brings together innovative consumer ad experiences, a federated cloud-based smart mediation service, and a best-in-class universal user acquisition and engagement service in a single platform, so you can maximize your ad revenue and grow your app business by deeply engaging with your users.

Let’s take a closer look at the major announcements.

Maximize your ad revenue

Monetize your app with ads using Microsoft’s new ad mediation platform

Microsoft ad mediation platform

In-app advertising continues to represent more than one-third of the revenue that developers make from writing apps for the Microsoft Windows platform. As part of our continuous commitment to maximize developer monetization through ads, we are excited to announce Microsoft’s ad mediation service, a federated cloud-based ad mediation service designed to help app developers maximize their ad revenue. The ad mediation service and the Microsoft Advertising SDK are the two key components of Microsoft’s new ad monetization platform, which dynamically optimizes ad network configurations to drive the highest yield for developers and deliver innovative ad experiences for consumers.

Microsoft’s ad mediation service is now available to all developers through the Dev Center. Read more about this announcement here.

Choose the right ad experiences

Drive app advertising revenue by seamlessly integrating the right ad treatments into your app through our rich set of ad formats. A wide variety of formats means you have lots of choices for providing the best user experience.

Interstitial ads

We are also excited to announce the launch of interstitial banner ads support in the Microsoft Advertising SDK. Interstitial banner ads have been one of the top Windows Dev Center feature requests since we introduced support for interstitial video ads. Read more about this announcement here.

Native ads

The Microsoft Advertising SDK now supports native ads. Native ads allow developers to create and implement highly immersive ads that fit their app experiences. Developers can now stitch together beautiful ad experiences completely native to their apps using the new capability provided by the SDK. This feature is currently available as a limited preview to participating developers/publishers.

Please contact aiacare@microsoft.com to get started.

Empowering app growth for Windows developers

Ad campaign to acquire & re-engage with users

The Microsoft Universal User Acquisition Platform offers unparalleled self-service tools in Windows Dev Center and REST APIs to help developers promote their apps and re-engage with users. Dev Center App Install Ad Campaigns underwent a major usability improvement. The new workflow helps developers automate campaign targeting and creative generation to find the most valuable users based on in-app conversions. These experiences are currently available to Insider users and will become generally available to everyone along with the overall Dev Center changes. Developers who haven’t used ad campaigns can read about the full suite of capabilities here.

Auto designed interstitial, native and playable ads

We have enabled adding banner interstitial ads to the creative mix without the developer having to create the artwork. Just select one checkbox in the creative section and a banner interstitial that scales across different device resolutions is created for you. The best part of this capability is that you can also use the same ad for your house and community campaigns, which are completely free of charge. Just use the latest SDK to enable interstitial ads in your app, and then opt in to house and/or community ad campaigns.

You can now promote your app through native ad experiences. All the elements used to create an auto-generated ad, such as the app title, logo, tagline and price, will be sent as assets to the developer of the app showing the ads so that they can be stitched into an experience native to that app. All new ad campaigns will have these ads automatically created in the backend using the same elements that you select when creating ads in the campaign.

Developers can create a three-minute interactive version of the app as an ad using this capability. This feature is in developer preview, and developers can sign up to get their apps streamed by contacting aiacare@microsoft.com. Read more about this capability here.

What’s next

We look forward to helping many more developers build and grow successful businesses. At Microsoft //Build 2017, we shared essential insights and best practices on how to drive growth with Microsoft’s new ad monetization platform. You can take part too by viewing the live-streamed and pre-recorded sessions on ad monetization and user acquisition.

Check out the new website for more details, then give the features a try! If you’ve got suggestions to make these features even more useful, please let us know at Windows Developer Feedback.

Read More

The “Internet of Stranger Things” Wall, Part 3 – Voice Recognition and Intelligence

Overview

I called this project the “Internet of Stranger Things,” but so far, there hasn’t been an internet piece. In addition, there really hasn’t been anything that couldn’t be easily accomplished on an Arduino or a Raspberry Pi. I wanted this demo to have more moving parts to improve the experience and also demonstrate some cool technology.

First is voice recognition. Proper voice recognition typically takes a pretty decent computer and a good OS. This isn’t something you’d generally do on an Arduino alone; it’s simply not designed for that kind of workload.

Next, I wanted to wire it up to the cloud, specifically to a bot. The interaction in the show is a conversation between two people, so this was a natural fit. Speaking of “natural,” I wanted the bot to understand many different forms of the questions, not just a few hard-coded questions. For that, I wanted to use the Language Understanding Intelligent Service (LUIS) to handle the parsing.

This third and final post covers:

  • Adding Windows Voice Recognition to the UWP app
  • Creating the natural language model in LUIS
  • Building the Bot Framework Bot
  • Tying it all together

You can find the other posts here:

If you’re not familiar with the wall, please go back and read part one now. In that post, I describe the inspiration for this project, as well as the electronics required.

Adding Voice Recognition

In the TV show, Joyce doesn’t type her queries into a 1980s era terminal to speak with her son; she speaks aloud in her living room. I wanted to have something similar for this app, and the built-in voice recognition was a natural fit.

Voice recognition in Windows 10 UWP apps is super-simple to use. You have the option of using the built-in UI, which is nice but may not fit your app style, or simply letting the recognition happen while you handle events.

There are good samples for this in the Windows 10 UWP Samples repo, so I won’t go into great detail here. But I do want to show you the code.

To keep the code simple, I used two recognizers. One is for basic local echo testing, especially useful if connectivity in a venue is unreliable. The second is for sending to the bot. You could use a single recognizer and then just check some sort of app state in the events to decide if you were doing something for local echo or for the bot.

First, I initialized the two recognizers and wired up the two events that I care about in this scenario.


SpeechRecognizer _echoSpeechRecognizer;
SpeechRecognizer _questionSpeechRecognizer;

private async void SetupSpeechRecognizer()
{
    _echoSpeechRecognizer = new SpeechRecognizer();
    _questionSpeechRecognizer = new SpeechRecognizer();

    await _echoSpeechRecognizer.CompileConstraintsAsync();
    await _questionSpeechRecognizer.CompileConstraintsAsync();

    _echoSpeechRecognizer.HypothesisGenerated +=
                   OnEchoSpeechRecognizerHypothesisGenerated;
    _echoSpeechRecognizer.StateChanged += 
                   OnEchoSpeechRecognizerStateChanged;

    _questionSpeechRecognizer.HypothesisGenerated +=
                   OnQuestionSpeechRecognizerHypothesisGenerated;
    _questionSpeechRecognizer.StateChanged += 
                   OnQuestionSpeechRecognizerStateChanged;

}

The HypothesisGenerated event lets me show real-time recognition results, much like when you use Cortana voice recognition on your PC or phone. In that event handler, I just display the results. The only real purpose of this is to show that some recognition is happening in a way similar to how Cortana shows that she’s listening and parsing your words. Note that the hypothesis and the state events come back on a non-UI thread, so you’ll need to dispatch them like I did here.


private async void OnEchoSpeechRecognizerHypothesisGenerated(
        SpeechRecognizer sender,
        SpeechRecognitionHypothesisGeneratedEventArgs args)
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        EchoText.Text = args.Hypothesis.Text;
    });
}

The next is the StateChanged event. This lets me alter the UI based on what is happening. There are lots of good practices here, but I took an expedient route and simply changed the background color of the text box. You might consider running an animation on the microphone or something when recognition is happening.


private SolidColorBrush _micListeningBrush = 
                     new SolidColorBrush(Colors.SkyBlue);
private SolidColorBrush _micIdleBrush = 
                     new SolidColorBrush(Colors.White);

private async void OnEchoSpeechRecognizerStateChanged(
        SpeechRecognizer sender, 
        SpeechRecognizerStateChangedEventArgs args)
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        switch (args.State)
        {
            case SpeechRecognizerState.Idle:
                EchoText.Background = _micIdleBrush;
                break;

            default:
                EchoText.Background = _micListeningBrush;
                break;
        }
    });
}

I have equivalent handlers for the two events for the “ask a question” speech recognizer as well.
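
As a sketch, the question recognizer’s hypothesis handler is nearly identical; it just writes to the question text box (assumed here to be the QuestionText control that appears later in the app):


private async void OnQuestionSpeechRecognizerHypothesisGenerated(
        SpeechRecognizer sender,
        SpeechRecognitionHypothesisGeneratedEventArgs args)
{
    // Show partial recognition results as they arrive, on the UI thread
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        QuestionText.Text = args.Hypothesis.Text;
    });
}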

Finally, some easy code in the button click handler kicks off recognition.


private async void DictateEcho_Click(object sender, RoutedEventArgs e)
{
    var result = await _echoSpeechRecognizer.RecognizeAsync();

    EchoText.Text = result.Text;
}

The end result looks and behaves well. The voice recognition is really good.

gif1

So now we can talk to the board from the UWP PC app, and we can talk to the app using voice. Time to add just a little intelligence behind it all.

Creating the Natural Language Model in LUIS

The backing for the wall is a bot in the cloud. I wanted the bot to be able to answer questions, but I didn’t want to have the exact text of the question hard-coded in the bot. If I wanted to hard-code them, a simple web service or even local code would do.

What I really want is the ability to ask questions using natural language and map those questions (Utterances, as they’re called in LUIS) to specific master questions (Intents in LUIS). That way, I can ask a question a few different ways, but still get back an answer that makes sense. My colleague Ryan Volum helped me figure out how LUIS worked. You should check out his Getting Started with Bots Microsoft Virtual Academy course.

So I started thinking about the types of questions I wanted answered, and the various ways I might ask them.

For example, when I want to know where Will is, I could ask, “Where are you hiding?” or “Tell me where you are!” or “Where can I find you?” When checking to see if someone is listening, I might ask, “Are you there?” or “Can you hear me?” As you can imagine, hard-coding all these variations would be tedious, and would certainly miss out on ways someone else might ask the question.

I then created those in LUIS with each master question as an Intent, and each way I could think of asking that question then trained as an utterance mapped to that intent. Generally, the more utterances I add, the better the model becomes.

picture1

The above screen shot is not the entire list of Intents; I added a number of other Intents and continued to train the model.

For a scenario such as this, training LUIS is straightforward. My particular requirements didn’t include any entities or Regex, or any connections to a document database or Azure search. If you have a more complex dialog, there’s a ton of power in LUIS to make the model as robust as you need, and to train it with errors and utterances found in actual use. If you want to learn more about LUIS, I recommend watching Module 5 in the Getting Started with Bots MVA.

Once my LUIS model was set up and working, I needed to connect it to the bot.

Building the Bot Framework Bot

The bot itself was the last thing I added to the wall. In fact, in my first demo of the wall, I had to type the messages into the app instead of sending them out to a bot. Interesting, but not exactly what I was looking for.

I used the generic Bot Framework template and instructions from the Bot Framework developer site. This creates a generic bot, a simple C# web service controller, which echoes back anything you send it.

Next, following the Bot Framework documentation, I integrated LUIS into the bot. First, I created the class which derived from LuisDialog, and added in code to handle the different intents. Note that this model is changing over time; there are other ways to handle the intents using recognizers. For my use, however, this approach worked just fine.

The answers from the bot are very short, and I keep no context. Responses from the Upside Down need to be short enough to light up on the wall without putting everyone to sleep reading a long dissertation letter by letter.


namespace TheUpsideDown
{
    // Reference: 
    // https://docs.botframework.com/en-us/csharp/builder/sdkreference/dialogs.html

    // Partial class is excluded from project. It contains keys:
    // 
    // [Serializable]
    // [LuisModel("model id", "subscription key")]
    // public partial class UpsideDownDialog
    // {
    // }
    // 
    public partial class UpsideDownDialog : LuisDialog<object>
    {
        // None
        [LuisIntent("")]
        public async Task None(IDialogContext context, LuisResult result)
        {
            string message = $"Eh";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }


        [LuisIntent("CheckPresence")]
        public async Task CheckPresence(IDialogContext context, LuisResult result)
        {
            string message = $"Yes";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        [LuisIntent("AskName")]
        public async Task AskName(IDialogContext context, LuisResult result)
        {
            string message = $"Will";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        [LuisIntent("FavoriteColor")]
        public async Task FavoriteColor(IDialogContext context, LuisResult result)
        {
            string message = $"Blue ... no Gr..ahhhhh";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        [LuisIntent("WhatIShouldDoNow")]
        public async Task WhatIShouldDoNow(IDialogContext context, LuisResult result)
        {
            string message = $"Run";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        ...

    }
}

Once I had that in place, it was time to test. The easiest way to test before deployment is to use the Bot Framework Channel Emulator.

First, I started the bot in my browser from Visual Studio. Then, I opened the emulator and plugged in the URL from the project properties, and cleared out the credentials fields. Next, I started typing in questions that I figured the bot should be able to handle.

picture2

It worked great! I was pretty excited, because this was the first bot I had ever created, and not only did it work, but it also had natural language processing. Very cool stuff.

Now, if you notice in the picture, there are red circles on every reply. It took a while to figure out what was up. As it turns out, the template for the bot includes an older version of the NuGet bot builder library. Once I updated that to the latest version (3.3 at this time), the “Invalid Token” error that local IIS was throwing went away.

Be sure to update the bot builder library NuGet package to the latest version.

Publishing and Registering the Bot

Next, it was time to publish it to my Azure account so I could use the Direct Line API from my client app, and also so I could make the bot available via other channels. I used the built-in Visual Studio publish (right click the project, click “Publish”) to put it up there. I had created the Azure Web App in advance.

picture3

Next, I registered the bot on the Bot Framework site. This step is necessary to be able to use the Direct Line API and make the bot visible to other channels. I had some issues getting it to work at first, because I didn’t realize I needed to update the credential information in the web.config of the bot service. The BotId field in the web.config can be most anything. Most tutorials skip telling you what to put in that field, and it doesn’t match up with anything on the portal.

picture4

As you can see, there are a few steps involved in getting the bot published and registered. For the Azure piece, follow the same steps as you would for any Web App. For the bot registration, be sure to follow the instructions carefully, and keep track of your keys, app IDs, and passwords. Take your time the first time you go through the process.

You can see in the previous screen shot that I have a number of errors shown. Those errors were because of that NuGet package version issue mentioned previously. It wasn’t until I had the bot published that I realized there was an error, and went back and debugged it locally.

Testing the Published Bot in Skype

I published and registered the bot primarily to be able to use the Direct Line channel. But it’s a bot, so it makes sense to test it using a few different channels. Skype is a pretty obvious one, and is enabled by default, so I hit that first.

picture5

Through Skype, I was able to verify that it was published and worked as expected.

Using the Direct Line API

When you want to communicate to a bot from code, a good way to do it is using the Direct Line API. This REST API provides an additional layer of authentication and keeps everything within a structured bot framework. Without it, you might as well just make direct REST calls.

First, I needed to enable the Direct Line channel in the bot framework portal. Once I did that, I was able to configure it and get the super-secret key which enables me to connect to the bot. (The disabled field was a pain to try and copy/paste, so I just did a view source, and grabbed the key from the HTML.)

picture6

That’s all I needed to do in the portal. Next, I needed to set up the client to speak to the Direct Line API.

First, I added the Microsoft.Bot.Connector.DirectLine NuGet package to the UWP app. After that, I wrote a pretty small amount of code for the actual communication. Thanks to my colleague, Shen Chauhan (@shenchauhan on Twitter), for providing the boilerplate in his Hunt the Wumpus app.


private const string _botBaseUrl = "(the url to the bot /api/messages)";
private const string _directLineSecret = "(secret from direct line config)";


private DirectLineClient _directLine;
private string _conversationId;


public async Task ConnectAsync()
{
    _directLine = new DirectLineClient(_directLineSecret);

    var conversation = await _directLine.Conversations
            .NewConversationWithHttpMessagesAsync();
    _conversationId = conversation.Body.ConversationId;

    System.Diagnostics.Debug.WriteLine("Bot connection set up.");
}

private async Task<string> GetResponse()
{
    var httpMessages = await _directLine.Conversations
                  .GetMessagesWithHttpMessagesAsync(_conversationId);

    var messages = httpMessages.Body.Messages;

    // our bot only returns a single response, so we won't loop through
    // First message is the question, second message is the response
    if (messages?.Count > 1)
    {
        // select latest message -- the response
        var text = messages[messages.Count-1].Text;
        System.Diagnostics.Debug.WriteLine("Response from bot was: " + text);

        return text;
    }
    else
    {
        System.Diagnostics.Debug.WriteLine("Response from bot was empty.");
        return string.Empty;
    }
}


public async Task<string> TalkToTheUpsideDownAsync(string message)
{
    System.Diagnostics.Debug.WriteLine("Sending bot message");

    var msg = new Message();
    msg.Text = message;


    await _directLine.Conversations.PostMessageAsync(_conversationId, msg);

    return await GetResponse();
}

The client code calls the TalkToTheUpsideDownAsync method, passing in the question. That method fires off the message to the bot, via the Direct Line connection, and then waits for a response.

Because the bot sends only a single message, and only in response to a question, the response comes back as two messages: the first is the message sent from the client, the second is the response from the service. This helps to provide context.

Finally, I wired it to the SendQuestion button on the UI. I also wrapped it in calls to start and stop the MIDI clock, giving us a bit of Stranger Things thinking music while the call is being made and the result displayed on the LEDs.


private async void SendQuestion_Click(object sender, RoutedEventArgs e)
{
    // start music
    StartMidiClock();

    // send question to service
    var response = await _botInterface.TalkToTheUpsideDownAsync(QuestionText.Text);

    // display answer
    await RenderTextAsync(response);

    // stop music
    StopMidiClock();
}

With that, it is 100% complete and ready for demos!

What would I change?

If I were to start this project anew today and had a bit more time, there are a few things I might change.

I like the voice recognition, Bot Framework, and LUIS stuff. Although I could certainly make the conversation more interactive, there’s really nothing I would change there.

On the electronics, I would use a breadboard-friendly Arduino, not hot-glue an Arduino to the back. It pains me to have hot-glued the Arduino to the board, but I was in a hurry and had the glue gun at hand.

I would also use a separate power supply for LEDs. This is especially important if you wish to light more than one LED at a time, as eventually, the Arduino will not be able to support the current draw required by many LED lights.

If I had several weeks, I would have my friends at DF Robot spin a board that I design, rather than use a regular breadboard, or even a solder breadboard. I generally prefer to get boards spun for projects, as they are more robust, and DF Robot can do this for very little cost.

Finally, I would spend more time to find even uglier wallpaper <g>.

Here’s a photo of the wall, packaged up and ready for shipment to Las Vegas (at the time of this writing, it’s in transit), waiting in my driveway. The box was 55” tall, around 42” wide and 7” thick, but only about 25 lbs. It has ¼” plywood on both faces, as well as narrower pieces along the sides. In between the plywood is 2” thick rigid insulating foam. Finally, the corners are protected with the spongier corner foam that came with the box.

It costs a stupid amount of money to ship something like that around, but it’s worth it for events. 🙂

picture7

After this, it’s going to Redmond where I’ll record a video walkthrough with Channel 9 during the second week of November.

What Next?

Windows Remote Wiring made this project quite simple to do. I was able to use the tools and languages I love to use (like Visual Studio and C#), but still get the IO of a device like the Arduino Uno. I was also able to use facilities available to a UWP app, and call into a simple bot of my own design. In addition to all that, I was able to use voice recognition and MIDI all in the same app, in a way that made sense.

The Bot Framework and LUIS stuff was all brand new to me, but was really fun to do. Now that I know how to connect app logic to a bot, there will certainly be more interactive projects in the future.

This was a fun project for me. It’s probably my last real maker project of the fall/winter, as I settle into the fall home renovation work and also gear up for the NAMM music event in January. But luckily, there have been many other posts here about Windows 10 IoT Core and our maker and IoT-focused technology. If this topic is interesting to you, I encourage you to take a spin through the archives and check them out.

Whatever gift-giving and receiving holiday you celebrate this winter, be sure to add a few Raspberry Pi 3 devices and some Arduino Uno boards on your list, because there are few things more enjoyable than cozying up to a microcontroller or some IoT code on a cold winter’s day. Oh, and if you steal a strand or two of lights from the tree, I won’t tell. 🙂

Resources

Questions or comments? Have your own version of the wall, or used the technology described here to help rid the universe of evil? Post below and follow me on Twitter at @pete_brown.

Most of all, thanks for reading!

Read More

Camera APIs with a dash of cloud intelligence in a UWP app (App Dev on Xbox series)

Apps should be able to see, and with that, they should be able to understand the world. In the sixth blog post in the series, we will cover exactly that: how to build UWP apps that take advantage of the camera found on the majority of devices (including the Xbox One with Kinect) and create a compelling, intelligent experience for the phone, desktop, and Xbox One. As with the previous blog posts, we are also open sourcing Adventure Works, a photo capture UWP sample app that uses native and cloud APIs to capture, modify and understand images. The source code is available on GitHub right now, so make sure to check it out.

If you missed the previous blog post from last week on Internet of Things, make sure to check it out. We covered how to build a cross-device IoT fitness experience that shines on all device form factors and how to use client and cloud APIs to make a real time connected IoT experience. To read the other blog posts and watch the recordings from the App Dev on Xbox live event that started it all, visit the App Dev on Xbox landing page.

Adventure Works

image1

Adventure Works is a photo capture UWP sample app that takes advantage of the built-in UWP camera APIs for capturing and previewing the camera stream. Using Win2D, an open source library for 2D graphics rendering with GPU acceleration, the app can enhance any photo by applying rich effects or filters, and by using the intelligent Cognitive Services APIs it can analyze any photo to auto-tag and caption it appropriately and, more importantly, detect people and emotion.

Camera APIs

Camera and MediaCapture API

The first thing we need to implement is a way to get images into the app. This can be done via a variety of devices: a phone’s front-facing camera, a laptop’s integrated webcam, a USB webcam and even the Kinect’s camera. Fortunately, when using the Universal Windows Platform we don’t have to worry about the low-level details of a camera because of the MediaCapture API. Let’s dig into some code on how to get the live camera stream regardless of the Windows 10 device you’re using.

To get started, we’ll need to check what cameras are available to the application and check if any of them are front facing cameras:


var allVideoDevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);

var desiredDevice = allVideoDevices.FirstOrDefault(device => device.EnclosureLocation != null && device.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Front);

var cameraDevice = desiredDevice ?? allVideoDevices.FirstOrDefault();

We can query the device using DeviceInformation.FindAllAsync to get a list of all devices that support video capture. What you get back from that Task is a DeviceInformationCollection object. From there you can use LINQ to get the first device in the list that reports being in the front panel.

The next line of code covers the scenario where the device doesn’t have a front-facing camera; in that case it just gets the first camera in the list. This is a good fallback for cameras that don’t report which panel they are in, or for devices that simply don’t have a front-facing camera.

Now it’s time to initialize MediaCapture APIs using the selected camera.


_mediaCapture = new MediaCapture();

var settings = new MediaCaptureInitializationSettings { VideoDeviceId = _cameraDevice.Id };
await _mediaCapture.InitializeAsync(settings);

To start this stage, instantiate a MediaCapture object (be sure to keep the MediaCapture reference as a class field, because you must Dispose of it when you’re done using it later on). Now we create a MediaCaptureInitializationSettings object and use the camera’s Id to set the VideoDeviceId property. Finally, we can initialize the MediaCapture by passing the settings to the InitializeAsync method.

At this point we can start previewing the camera, but before we do, we’ll need a place for the video stream to be shown in the UI. This is done with a CaptureElement:


<CaptureElement Name="PreviewControl" Stretch="UniformToFill"></CaptureElement>

The CaptureElement has a Source property; we set that using the MediaCapture and then start the preview:


PreviewControl.Source = _mediaCapture;

await _mediaCapture.StartPreviewAsync();

There are other considerations, like device rotation and resolution; the MediaCapture API makes it easy to access and modify those properties of the device and stream. Take a look at the Camera class in Adventure Works for a full implementation.
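
As a rough illustration (this isn’t the exact Adventure Works code), picking a preview resolution and applying a rotation might look like this:


// Pick the highest-resolution preview format the camera driver reports
var previewProperties = _mediaCapture.VideoDeviceController
    .GetAvailableMediaStreamProperties(MediaStreamType.VideoPreview)
    .OfType<VideoEncodingProperties>()
    .OrderByDescending(p => p.Width * p.Height)
    .FirstOrDefault();

if (previewProperties != null)
{
    await _mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(
        MediaStreamType.VideoPreview, previewProperties);
}

// Rotate the preview if the camera is mounted at an angle to the display
_mediaCapture.SetPreviewRotation(VideoRotation.Clockwise90Degrees);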

Effects

Now that we have a video stream, we can do a number of things above and beyond just taking a photo or recording video. Today, we’ll discuss a few possibilities: applying a photo effect with Win2D, applying a real-time video effect using Win2D, and real-time face detection.

Win2D

Win2D is an easy-to-use Windows Runtime API for immediate mode 2D graphics rendering with GPU acceleration. It can be used to apply effects to photos, which is what we do in the Adventure Works demo application after a photo is taken. Let’s take a look at how we accomplish this.

At this point in the app, the user has already taken a photo, the photo is saved in the app’s LocalFolder, and the PhotoPreviewView is shown. The user has chosen to apply some filters by clicking the “Filters” AppBarButton, which shows a GridView with a list of photo effects they can apply.

Okay, now let’s get to the code (note that the code is summarized; check out the sample app for the full code in context). The PhotoPreviewView has a Win2D CanvasControl in the main section of the view:


<win2d:CanvasControl x:Name="ImageCanvas" Draw="ImageCanvas_Draw"/>

When the preview is initially shown, we load the image from the file into that canvas. Take note that Invalidate() forces the bitmap to be redrawn:


_file = await StorageFile.GetFileFromPathAsync(photo.Uri);

var stream = await _file.OpenReadAsync();
_canvasImage = await CanvasBitmap.LoadAsync(ImageCanvas, stream);

ImageCanvas.Invalidate();

Now that the UI shows the photo, the user can select an effect from the list. This fires the GridView’s SelectionChanged event and in the event handler we take the user’s selection and set it to a _selectedEffectType field:


private void Collection_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    _selectedEffectType = (EffectType)e.AddedItems.FirstOrDefault();
    ImageCanvas.Invalidate();
}

Since calling Invalidate forces a redraw, it will hit the following event handler and use the selected effect:


private void ImageCanvas_Draw(CanvasControl sender, CanvasDrawEventArgs args)
{
    var ds = args.DrawingSession;
    var size = sender.Size;
    ds.DrawImageWithEffect(_canvasImage, new Rect(0, 0, size.Width, size.Height),
                           _canvasImage.GetBounds(sender), _selectedEffectType);
}

The DrawImageWithEffect method is an extension method found in EffectsGenerator.cs that takes in a specific EffectType (also defined in EffectsGenerator.cs) and draws the image to the canvas with that effect.


public static void DrawImageWithEffect(this CanvasDrawingSession ds, 
                                       ICanvasImage canvasImage, 
                                       Rect destinationRect, 
                                       Rect sourceRect, 
                                       EffectType effectType)
{
    ICanvasImage effect = canvasImage;

    switch (effectType)
    {
        case EffectType.none:
            effect = canvasImage;
            break;
        case EffectType.amet:
            effect = CreateGrayscaleEffect(canvasImage);
            break;
	 // ...
    }

    ds.DrawImage(effect, destinationRect, sourceRect);
}
private static ICanvasImage CreateGrayscaleEffect(ICanvasImage canvasImage)
{
    var ef = new GrayscaleEffect();
    ef.Source = canvasImage;
    return ef;
}

Win2D provides many different effects that can be applied as input to the built-in Draw methods. A simple example is the GrayscaleEffect, which simply changes the color of each pixel, but there are also effects that can do transforms and much more.
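
As a rough sketch (not from the sample app), effects can also be chained by feeding one effect in as the Source of another before drawing the result:


private static ICanvasImage CreateBlurredSepiaEffect(ICanvasImage canvasImage)
{
    // First apply a sepia tone, then soften the result with a gaussian blur
    var sepia = new SepiaEffect { Source = canvasImage, Intensity = 0.8f };
    return new GaussianBlurEffect { Source = sepia, BlurAmount = 2.0f };
}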

Win2D Video Effects

You can do a lot with Win2D and the camera. One more advanced scenario is to use Win2D to apply real-time video effects to any video stream, including the camera preview stream, so that the user can see what the effect looks like before they take the photo. We don’t do this in Adventure Works, but it’s worth touching on. Let’s take a quick look.

Applying a video effect on a video stream starts with a VideoEffectDefinition object. This is passed to the MediaCapture by calling mediaCapture.AddVideoEffectAsync() and passing in that VideoEffectDefinition. Let’s take a simple example, applying a grayscale effect.

First, create a class in a UWP Windows Runtime Component project and add a public sealed class GrayscaleVideoEffect that implements IBasicVideoEffect.


public sealed class GrayscaleVideoEffect : IBasicVideoEffect

The interface requires several methods (you can see all of them here); the one we’ll focus on now is ProcessFrame(), where each input frame is passed in and an output frame is expected. This is where you can use Win2D to apply the same effects to each frame (or analyze the frame for information).

Here’s the code:


public void ProcessFrame(ProcessVideoFrameContext context)
{
    using (CanvasBitmap inputBitmap = CanvasBitmap.CreateFromDirect3D11Surface(_canvasDevice, context.InputFrame.Direct3DSurface))
    using (CanvasRenderTarget renderTarget = CanvasRenderTarget.CreateFromDirect3D11Surface(_canvasDevice, context.OutputFrame.Direct3DSurface))
    using (CanvasDrawingSession ds = renderTarget.CreateDrawingSession())
    {
        var grayscale = new GrayscaleEffect() { Source = inputBitmap };
        ds.DrawImage(grayscale);
    }
}
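
The other interface members are mostly boilerplate. Here’s a rough sketch of a minimal implementation (not the exact sample code); SetEncodingProperties is where the _canvasDevice used above is created from the Direct3D device handed to the effect:


private CanvasDevice _canvasDevice;

public void SetEncodingProperties(VideoEncodingProperties encodingProperties, IDirect3DDevice device)
{
    // Wrap the pipeline's Direct3D device so Win2D can draw onto its surfaces
    _canvasDevice = CanvasDevice.CreateFromDirect3D11Device(device);
}

public bool IsReadOnly => false;
public MediaMemoryTypes SupportedMemoryTypes => MediaMemoryTypes.Gpu;
public bool TimeIndependent => false;

// An empty list tells the pipeline the effect supports any encoding
public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties =>
    new List<VideoEncodingProperties>();

public void SetProperties(IPropertySet configuration) { }
public void DiscardQueuedFrames() { }
public void Close(MediaEffectClosedReason reason) { }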

Back in the MediaCapture code, to add this effect to the camera preview stream, you need to call AddVideoEffectAsync:


await _mediaCapture.AddVideoEffectAsync(
    new VideoEffectDefinition(typeof(GrayscaleVideoEffect).FullName),
    MediaStreamType.VideoPreview);

That’s all there is to the effect. You can see a more complete demo of applying a Win2D video effect here in the official Win2D samples on GitHub, and you can install the Win2D demo app from the Windows Store here.

Face Detection

The VideoEffectDefinition can be used for much more than just applying beautiful image effects. You can also use it to process the frame for information. You can even detect faces using one! Luckily, this VideoEffectDefinition has already been created for you: the FaceDetectionEffectDefinition!

Here’s how to use it (see the full implementation here):


var definition = new Windows.Media.Core.FaceDetectionEffectDefinition();
definition.SynchronousDetectionEnabled = false;
definition.DetectionMode = FaceDetectionMode.HighPerformance;

_faceDetectionEffect = (await _mediaCapture.AddVideoEffectAsync(definition, MediaStreamType.VideoPreview)) as FaceDetectionEffect;

You only need to instantiate the FaceDetectionEffectDefinition, set its properties to suit your needs and then add it to the initialized MediaCapture. The reason we’re taking the extra step of setting the _faceDetectionEffect private field is so that we can spice it up a little more by hooking into the FaceDetected event:


_faceDetectionEffect.FaceDetected += FaceDetectionEffect_FaceDetected;
_faceDetectionEffect.DesiredDetectionInterval = TimeSpan.FromMilliseconds(100);
_faceDetectionEffect.Enabled = true;

Now, whenever that event handler is fired, we can, for example, snap a photo, start recording, or even process the video for more information, like detecting when someone is smiling! We can use the Microsoft Cognitive Services Emotion API to detect a smile; let’s take a look at this a little further.

Cognitive Services

Microsoft Cognitive Services let you build apps with powerful algorithms based on Machine Learning using just a few lines of code. To use these APIs, you could use the official NuGet packages, or call the REST endpoints directly. In the Adventure Works demo we use three of these to analyze photos: the Emotion API, Face API and Computer Vision API.

Emotion API

Let’s take a look at how we can detect a smile using the Microsoft Cognitive Services Emotion API. As mentioned above, where we showed how to use the FaceDetectionEffectDefinition, we hooked into the FaceDetected event. This is a good spot to check whether the people in the preview are smiling in real time, and then take the photo at just the right moment.

When the FaceDetected event is fired, it is passed two parameters: a FaceDetectionEffect sender and a FaceDetectedEventArgs args. We can determine whether a face is available by checking the ResultFrame.DetectedFaces property in the args.
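
A minimal sketch of such a handler (the FaceCountText control is hypothetical, not from the Adventure Works source); note that the event fires off the UI thread, so any UI update has to be dispatched:


private async void FaceDetectionEffect_FaceDetected(
        FaceDetectionEffect sender, FaceDetectedEventArgs args)
{
    var faces = args.ResultFrame.DetectedFaces;

    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        FaceCountText.Text = faces.Count > 0
            ? $"{faces.Count} face(s) in view"
            : "No faces detected";
    });
}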

In Adventure Works, when the handler is called (see here for the full event handler), first we check if there are any DetectedFaces in the image, and if so, we grab the location of each face within the frame and call the Emotion API through our custom method, CheckIfEveryoneIsSmiling:


public async Task<bool> CheckIfEveryoneIsSmiling(IRandomAccessStream stream, 
    IEnumerable<DetectedFace> faces, double scale)
{
    List<Rectangle> rectangles = new List<Rectangle>();

    foreach (var face in faces)
    {
        var box = face.FaceBox;
        rectangles.Add(new Rectangle()
        {
            Top = (int)((double)box.Y * scale),
            Left = (int)((double)box.X * scale),
            Height = (int)((double)box.Height * scale),
            Width = (int)((double)box.Width * scale)
        });
    }

    var emotions = await _client.RecognizeAsync(stream.AsStream(), rectangles.ToArray());

    return emotions.Where(emotion => GetEmotionType(emotion) == EmotionType.Happiness).Count() == emotions.Count();
}

We use the RecognizeAsync method of the EmotionServiceClient to analyze the emotion of each face in the preview frame. We make the assumption that if everyone is happy in the photo they must be smiling.

Face API

Microsoft Cognitive Services Face API allows you to detect, identify, analyze, organize, and tag faces in photos. More specifically, it allows you to detect one or more human faces in an image and get back face rectangles for where in the image the faces are.

We use the API to identify faces in the photo so we can tag each person. When the photo is captured, we analyze the faces by calling our own FindPeople method and passing it the photo file stream:


public async Task<IEnumerable<PhotoFace>> FindPeople(IRandomAccessStream stream)
{
    Face[] faces = null;
    IdentifyResult[] results = null;
    List<PhotoFace> photoFaces = new List<PhotoFace>();

    try
    {
        // find all faces
        faces = await _client.DetectAsync(stream.AsStream());

        results = await _client.IdentifyAsync(_groupId, faces.Select(f => f.FaceId).ToArray());

        for (var i = 0; i < faces.Length; i++)
        {
            var face = faces[i];
            var photoFace = new PhotoFace()
            {
                Rect = face.FaceRectangle,
                Identified = false
            };

            if (results != null)
            {
                var result = results[i];
                if (result.Candidates.Length > 0)
                {
                    photoFace.PersonId = result.Candidates[0].PersonId;
                    photoFace.Name = _personList.Where(p => p.PersonId == result.Candidates[0].PersonId).FirstOrDefault()?.Name;
                    photoFace.Identified = true;
                }
            }

            photoFaces.Add(photoFace);
        }
    }
    catch (FaceAPIException ex)
    {
    }

    return photoFaces;
}

The FaceServiceClient API contains several methods that allow us to easily call into the Face API in Cognitive Services. DetectAsync allows us to see if there are any faces in the captured frame, as well as their bounding box within the image. This is great for locating the face of a person in the image so you can draw their name (or something else more fun). The IdentifyAsync method can use the faces found in the DetectAsync method to identify known faces and get their name (or id for more unique identification).

Not shown here is the AddPersonFaceAsync method of the FaceServiceClient API, which can be used to improve the recognition of a specific person by sending another image of that person to better train the model. To create a new person when that person has not yet been added to the model, we can use the CreatePersonAsync method. To see how all of these methods work together in the Adventure Works sample, take a look at FaceAPI.cs on GitHub.
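
As a rough sketch (a hypothetical helper, not the Adventure Works code), registering a new person and retraining the group might look like this:


public async Task RegisterPersonAsync(string name, IRandomAccessStream faceImage)
{
    // Create the person in the existing person group
    var person = await _client.CreatePersonAsync(_groupId, name);

    // Attach a face image to that person; more images improve recognition
    await _client.AddPersonFaceAsync(_groupId, person.PersonId, faceImage.AsStream());

    // The group must be retrained before IdentifyAsync can find the new person
    await _client.TrainPersonGroupAsync(_groupId);
}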

Computer Vision API

You can take this much further by using the Microsoft Cognitive Services Computer Vision API to get information from the photo. Again, let’s go back to PhotoPreviewView in the Adventure Works demo app. If the user clicks on the Details button, we call the AnalyzeImage method, where we pass the photo’s file stream to the VisionServiceClient AnalyzeImageAsync method and specify the VisualFeatures that we expect in return. It will analyze the image and return a list of tags describing what the API detected in the photo, a short description of the image, detected faces, and more (see the full implementation on GitHub).


private async Task AnalyzeImage()
{
    var stream = await _file.OpenReadAsync();

    var imageResults = await _visionServiceClient.AnalyzeImageAsync(stream.AsStream(),
                new[] { VisualFeature.Tags, VisualFeature.Description,
                        VisualFeature.Faces, VisualFeature.ImageType }); 
    foreach (var tag in imageResults.Tags)
    {
        // Take first item and use it as the main photo description
        // and add the rest to a list to show in the UI
    }
}

Wrap up

Now that you are familiar with the general use of the APIs, make sure to check out the app source on our official GitHub repository, read through some of the resources provided, watch the event if you missed it, and let us know what you think through the comments below or on Twitter.

And come back next week for another blog post in the series, where we will extend the Adventure Works example with some social features by enabling Facebook and Twitter login and sharing, integrating Project Rome, and adding Maps and location.

Until then, happy coding!

Resources

Previous Xbox Series Posts

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

Read More

Building Secure Apps for Windows IoT Core

Secure communication in the cloud normally involves the following:

  • Data encryption: hiding what is sent
  • Data integrity: protecting data from being tampered with
  • Authentication: validating the identity of the parties in the communication

Using a cryptographic protocol such as TLS takes care of data encryption and integrity, and also allows the client to validate the identity of the server by checking its digital certificate.

For IoT devices, validating the identity of the client presents a unique challenge. Unlike traditional consumer devices such as PCs and phones, IoT devices are typically not operated by humans who can enter a password, recognize a picture or solve a CAPTCHA.

In this post, we will look at how to write apps for Windows IoT Core that can authenticate to Azure, while protecting the security-sensitive information on the device.

TPM: Enterprise-Grade Security for Small Devices

Storing secure information, such as a password or a certificate, on a device could make it vulnerable to exposure. A leaked password is a surefire way to compromise the security of a device or an entire system. Human operators take pains to avoid divulging secret information and IoT devices should too, but they must do it better than humans.

In the Windows family, the technology that underpins the security of the OS – the Trusted Platform Module (TPM) – is also available on Windows IoT Core and can be used to secure IoT devices.

At a very high level, a TPM device is a microcontroller that can store data and perform computations. It can be either a discrete chip soldered to a computer’s motherboard, or a module integrated into the SoC by the manufacturer – an approach particularly well suited for small devices.

Inside the TPM

A key capability of the TPM is its write-only memory. If the data cannot be read once written, you might question how it can be useful. This is where the TPM’s compute capability comes in: even though the security-sensitive data cannot be read back, the TPM can compute a cryptographic hash, such as an HMAC, based on that data. It’s impossible to uncover the secret given the hash, but if the secret is known to both parties of the communication, it is possible to determine whether the hash received from the other party was produced from that secret.

This is the basic idea behind using cryptographic keys: the secret – called the shared access key – is established and shared between the IoT device and the cloud during the device provisioning process. From that point on, an HMAC derived from the secret will be used to authenticate the IoT device.
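
To illustrate just the math (a hypothetical helper, not Azure or TPM code; on a real device the signing happens inside the TPM, so the key never leaves the chip), both sides can compute the same HMAC over a message and compare the results:


// Requires System, System.Security.Cryptography and System.Text
static string ComputeHmac(string sharedAccessKey, string message)
{
    // HMAC-SHA256 over the message, keyed with the shared access key
    using (var hmac = new HMACSHA256(Convert.FromBase64String(sharedAccessKey)))
    {
        byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(message));
        return Convert.ToBase64String(hash);
    }
}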

Device Provisioning

The tool that provisions Windows IoT Core devices is called the IoT Core Dashboard, and can be downloaded here.

The dashboard produces an image of the OS and securely connects your device to Azure by associating the physical device with the device Id in the Azure IoT Hub, and imprinting the device-specific shared access key into the device’s TPM.

For devices that don’t have a TPM chip, the tool can install a software-emulated TPM that, while providing no security, allows you to use the same programming model as the one used for the hardware TPM. This way you can develop your app using a maker device (such as a Raspberry Pi 2 or 3) and have security “light up” on a device with a hardware TPM, without having to change the app.

To connect your device to Azure, click on the “Connect to Azure” tab:

image 1

You will be asked to log in to your Azure account (you can get a free trial subscription here if you don’t have one already), pick the desired instance of Azure IoT Hub and associate your physical device with it.

If you don’t have any IoT Hub instances in your Azure subscription, the tool will let you create a free instance. You shouldn’t worry about accidentally running up a bill in your subscription – unless you explicitly ask for it, the dashboard will not create any paid services on your behalf.

Once you have selected the IoT Hub and the device ID to associate your device with, you can imprint the shared access key of that device on your TPM:

image 2

Reconfiguring the Device

Normally you would only use the dashboard for configuring your device for the first time. What if you need to reconfigure your device later? In that case, connect to your device using the Windows Device Portal and open the “TPM configuration” tab.

The Portal allows you to configure additional TPM properties, such as the logical device Id – this way your device can have several identities, which can be useful if you are running different apps that connect to Azure on behalf of different device Ids.

You can find more information on configuring TPM through the Windows Device Portal here.

Show Me The Code!

We have previously described how to connect to Azure using the device credentials retrieved from the IoT Hub. All of this continues to work, but now you can also use a new approach that doesn’t involve keeping the connection string in the app. This means that the same app can work on any provisioned device. You don’t have to insert the connection string in the code after checking it out from source control. No need to remember to remove it before checking the code back in.

When your app runs on a provisioned device, it can extract the device-specific connectivity information at runtime from the TPM.

To connect to Azure IoT Hub from a provisioned device, use the TpmDevice class from the Microsoft.Devices.Tpm library (available as a NuGet package). Get the device information stored in the desired slot (typically slot 0), then retrieve the name of the IoT Hub, the device Id and the SAS token (the string containing the HMAC produced from the shared access key) and use that to create the DeviceClient:

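A minimal sketch of that sequence (assuming the Microsoft.Devices.Tpm and Microsoft.Azure.Devices.Client packages are referenced; see the full sample linked below for the exact code):


TpmDevice myDevice = new TpmDevice(0); // use logical device 0, the dashboard default

// Values imprinted into the TPM by the IoT Core Dashboard during provisioning
string hubUri = myDevice.GetHostName();
string deviceId = myDevice.GetDeviceId();
string sasToken = myDevice.GetSASToken(); // the HMAC-signed token

// Create the client using token-based authentication
var deviceClient = DeviceClient.Create(
    hubUri,
    AuthenticationMethodFactory.CreateAuthenticationWithToken(deviceId, sasToken),
    TransportType.Amqp);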

At this point, you have a connected deviceClient object that you can use to send and receive messages.

The full working sample is available here.

Where Do We Go from Here?

Clearly, security doesn’t end here. Having the token stored in the TPM means that it cannot be extracted or cloned; a bad guy cannot simply post it on the internet. However, if physical access to the device is unprotected, an attacker can install a malicious app on it. Likewise, a discrete TPM can be removed and plugged into another device running an unauthorized app.

If an important process decision depends on the data coming from a sensor (say, a thermometer) attached to the device, an attacker with a blow torch can make it produce wrong data.

Finally, if the health of your system depends on the condition of a particular device, it can be compromised by a low-tech attacker with a sledgehammer.

With all that in mind, using the TPM technology for storing device credentials is an important step towards productizing your IoT solution. TPM allows you to write secure code, so you can be assured that your cloud resources are only consumed by authorized devices and you can trust the data coming from these devices – if access to them has been physically protected.

Download Visual Studio to get started!

Post written by Artur Laksberg, a principal software engineer in Windows

Read More

Microsoft grants help kids learn computer science, Earth Day is celebrated and influential engineer is honored — Weekend Reading: April 22 edition

From a huge effort to help kids realize their potential to a celebration of our dear old planet, this week brought plenty of interesting and inspiring news around Microsoft. We’ve rounded up some of the highlights in this latest edition of Weekend Reading.

Earlier this week, Microsoft announced grants to 100 nonprofit partners in 55 countries as part of YouthSpark, a global initiative to increase access for young people to learn computer science. In turn, these nonprofit partners — such as Laboratoria, CoderDojo and City Year — will use the power of local schools, businesses and community organizations to empower students to achieve more for themselves, their families and their communities.

The nonprofits will build upon the work that Microsoft already has underway through programs like Hour of Code with Code.org, BBC micro:bit and TEALS.

“Every young person should have an opportunity, a spark, to realize a more promising future,” Mary Snapp, corporate vice president and head of Microsoft Philanthropies, wrote in a blog post on Wednesday. “Together with our nonprofit partners, we are excited to take a bold step toward that goal today.”

WR Youthspark image

Wondering what the next wave of breakthrough technology will be? Harry Shum, executive vice president of Microsoft Technology and Research, calls it an “invisible revolution,” and it’s transforming farming, allowing people from different cultures to communicate, helping people breathe healthier air, preventing disease outbreaks and much more.

“We are on the cusp of creating a world in which technology is increasingly pervasive but is also increasingly invisible,” Shum said.

This week on the Microsoft Facebook page, we joined the invisible revolution to preview the latest, most cutting-edge developments in artificial intelligence, machine learning and cloud computing. The possibilities are endless.


Computer industry luminaries honored Dave Cutler, a Microsoft senior technical fellow whose impressive body of work spans five decades, as a Computer History Museum Fellow. The 74-year-old has shaped entire eras. He worked to develop the VMS operating system for Digital Equipment Corporation in the late 1970s, had a central role in the development of Windows NT — the basis for all major versions of Windows since 1993 — and helped develop the Microsoft Azure cloud operating system and the hypervisor for Xbox One that allows the console to be more than just for gaming.

“The Fellow awards recognize people who’ve had a tremendous impact on our lives, on our culture, on the way we work, exchange information and live,” said John Hollar, the museum’s president and CEO. “People like Dave Cutler, who probably influences the computing experiences of more than 2 billion people, yet isn’t known in a way he deserves to be, in proportion to the impact he’s had on the world.”


Microsoft Philanthropies sponsored the annual We Day, supporting exciting events Wednesday in Seattle and earlier this month in Los Angeles. Nearly 30,000 attended the shows, which celebrate young people who are making a difference.

In supporting We Day, Microsoft aims to help young people drive the change they would like to see in their neighborhoods, schools and communities. Our photo gallery captures the highlights, famous faces and young people who were involved in this year’s events.


In advance of Earth Day on Friday, Microsoft kicked off this week with inspiration and information about the company’s sustainability programs and initiatives, including ways you can take part in the efforts. The brand-new Environmental Sustainability at Microsoft website details how Microsoft’s company-wide carbon fee has financed significant investments in renewable energy to power its data centers, improved building efficiency and reached more than 6 million people through the purchase of carbon offsets from community projects around the world.

Microsoft, which has been a carbon-neutral company since 2012, is continually finding ways to make its products and their lifecycles more earth-friendly. Learn more about how Microsoft is commemorating Earth Day on the Microsoft Green Blog.


Microsoft is also constantly working to help students achieve more. Some all-new education features coming in the Windows 10 Anniversary Update are specifically inspired by teachers and focused on students. A “Set Up School PCs” app lets teachers set up a device themselves in mere minutes, and a new “Take a Test” app provides simple and secure standardized testing for classrooms or entire schools.

Learning will also get a big boost with Microsoft Classroom, Microsoft Forms, OneNote Class Notebook (which now has Learning Management System integration) and, perhaps most exciting to students, the dawn of “Minecraft: Education Edition.” Educators will be able to give it a test run in the summer months and provide feedback and suggestions.

In apps this week, the powerful mobile photo-editing app PicsArt is marking Earth Day by offering a series of green- and outdoorsy-themed photo frame and clip art packages. Several are exclusive to Windows customers. The PicsArt app is free in the Windows Store.

Need a little help juggling projects, priorities and other moving parts in your busy life? The Todoist Windows 10 app can help you stay organized, collaborate with colleagues and even empty your inbox by turning important emails into tasks.

Or for a little fun this weekend, go way beyond retro to prehistoric days in “Age of Cavemen.” In this multiplayer strategy game, you’re the village chief in a dangerous world, and you need to keep your people safe. Build an army, create alliances and destroy your opponents in a wild and woolly free-for-all.


And that’s a wrap for this edition of Weekend Reading. See you here next week for the latest roundup.

Posted by Tracy Ith
Microsoft News Center Staff


Read More

In the cloud we trust

Jan. 21, 2015 was an interesting day.

It was the day Microsoft invited a small group of tech journalists to its Redmond, Washington, headquarters for a big announcement. In the past, news this momentous might have been accompanied by a famous rock band or fireworks, but when Microsoft unveiled Windows 10 and HoloLens, the journalists were in an intimate, coffeehouse-style room, sipping coffee with house music playing in the background.

Elsewhere in the world, Jan. 21 was an altogether different kind of day. As the sun rose over São Paulo, it promised to be a beautiful day – a day like any other. And then the police arrived at the apartment of a Microsoft executive in Brazil, bursting past the gates to his door, demanding he be produced so he could appear before a court.

But Jan. 21 wasn’t the weightiest day that month – not by far. Two weeks earlier, on Jan. 7, the world was transfixed as a series of horrifying events unfolded in Paris. That is the day a pair of brothers launched an attack on Charlie Hebdo, a weekly French satirical magazine, that left 11 of its employees dead and as many others injured – all solely because they had expressed their views.

These two extraordinary days in January are connected by an increasingly crucial issue in our world: information security.

Read the full story.

Read More


Microsoft expands IT training for active-duty US service members, ‘Halo 5: Guardians’ breaks records – Weekend Reading: Nov. 6 edition

It was a good week for Master Chief, and for U.S. service members seeking to master IT skills to help them transition from military to civilian life. Let’s get to it!

The Microsoft Software & Systems Academy (MSSA) is expanding from three locations to nine and will serve 12 military installations. The MSSA program uses a service member’s time prior to transitioning out of the service to train him or her in specialized technology management areas like server cloud/database, business intelligence and software development. After successfully completing the program, participants have an interview for a full-time job at Microsoft or one of its hiring partners. “On this Veterans Day 2015, it’s the responsibility of the IT industry to honor those who have served with more than an artillery salute and a brief word of thanks,” says Chris Cortez, vice president of Military Affairs at Microsoft and a retired U.S. Marine Corps major general. “We are compelled to set an example of what it can look like to dig in with our transitioning service members as they prepare to cross the bridge to the civilian world.”

A week after launching worldwide, “Halo 5: Guardians” broke records as the biggest Halo launch ever and the fastest-selling Xbox One exclusive game to date, with more than $400 million in global sales of “Halo 5: Guardians” games and hardware. The “Halo 5: Live” launch celebration also earned a Guinness World Records title for the most-watched video game launch broadcast, with more than 330,000 unique streams on the evening of the broadcast.


In China, millions of people are carrying on casual conversations with a Microsoft technology called XiaoIce. Hsiao-Wuen Hon, corporate vice president in charge of Microsoft Research Asia, sees XiaoIce as an example of the vast potential that artificial intelligence holds — not to replace human tasks and experiences, but rather to augment them, writes Allison Linn. Hon recently joined some of the world’s leading computer scientists at the 21st Century Computing Conference in Beijing, an annual meeting of researchers and computer science students, to discuss some emerging trends.


Microsoft and Red Hat announced a partnership that will help customers embrace hybrid cloud computing by providing greater choice and flexibility in deploying Red Hat solutions on Microsoft Azure. Also announced: Microsoft acquired Mobile Data Labs, creator of the popular MileIQ app, which takes advantage of sensors in modern mobile devices to automatically and contextually capture, log and calculate business miles, allowing users to confidently claim tax deductions. The acquisition is the latest example of Microsoft’s ambition to reinvent productivity and business processes in a mobile-first, cloud-first world, says Rajesh Jha, corporate vice president for Outlook and Office 365.


We got to know some pretty cool people doing really cool things. Among them: the team members of Loop, who created the Arrow and Next Lock Screen apps through the Microsoft Garage. We were also introduced to Scott McBride, a Navy vet whose internship at Microsoft led to a full-time job; he’s now a business program manager for Microsoft’s Cloud and Enterprise group. McBride will be helping Microsoft recruit new hires this fall.


Microsoft Loop team photographed in their new workspace, under construction in Bellevue, Washington. (Photography by Scott Eklund/Red Box Pictures)

A game with a deceptively simple, one-word title, “Prune,” is the App of the Week. In it, you give life to a forgotten landscape, and uncover a story that’s hidden deep beneath the soil. You’ll cultivate a sapling into a full-grown tree, and watch it evolve in an elegant but sparse environment. It’s up to you to bring the tree toward the sunlight, or shield it from the dangers of a hostile world. You can install “Prune” for $3.99 from the Windows Store.


This week on the Microsoft Instagram channel, we met Thavius Beck. Beyond being a musician, Thavius is a performer, producer and teacher. He uses his Surface Book to spread his love of music and perform in completely new ways.


Thanks for reading! Have a good weekend, and we’ll see you back here next Friday!

Posted by Suzanne Choney
Microsoft News Center Staff

Read More