Tag Archives: Azure

Powering the industry 4.0 revolution in manufacturing with Windows 10 and Microsoft Cloud

Since the beginning of the industrial age, manufacturers have been updating and improving the products we use every day. As those product lifecycles get shorter, manufacturers rely on companies like Microsoft to keep pace. That’s why Microsoft is focused on providing the technology solutions and services, infrastructure and robust security offerings that allow manufacturers to keep up with their end customers’ needs.

I’m excited to be at Hannover Messe 2017 in Germany this week, showing partner and customer solutions which empower manufacturers to be more efficient, secure, and collaborative using new intelligent and IoT technologies.

Microsoft and its partners and customers will showcase four solutions including how humans and machines work together to help alleviate risks in potentially hazardous work environments. The four solutions use Microsoft technologies such as Windows 10, Azure, Microsoft HoloLens and Surface.

Refining manufacturing with Mixed Reality

Taqtile, a leading 3D app development firm and Microsoft partner, is building cutting-edge mixed reality solutions on Windows 10 and Microsoft HoloLens to improve productivity and security in the workplace. Today, many industries such as field service, manufacturing, and warehousing are experiencing a plateau in productivity growth as the pool of skilled manufacturing workers shrinks while manufacturing job openings rise. As a result, we’re seeing a widening gap between the current labor force and the requirements to work in these fields, which mixed reality technology can help address.

Together with Microsoft, Taqtile has developed a new Field Inspection app, built on Windows 10, HoloLens and the Microsoft Cloud, that delivers integrated 3D modeling, predictive maintenance, and inspection for the manufacturing, utilities, transportation and oil and gas industries. These industries can now better train less experienced workers out in the field, as a remote expert can visually communicate step-by-step instructions.

For instance, in the oil and gas industry, a worker can quickly put on a HoloLens and search through some of the steps to fix an issue while in the field as provided by their more experienced colleagues via the Field Inspection Windows 10 app. A manager can then get an up-to-date view of their rig environment on a Windows 10 dashboard.

Together with Microsoft, Taqtile has developed a new Field Inspection app which is delivered through the power of Windows 10, HoloLens and Microsoft Cloud.


In another innovative use of mixed reality, global electronics manufacturing services leader, Jabil, and its consulting arm, Radius Innovation & Development, are showing how Windows 10 is improving the health care industry. The interactive display will highlight how IoT-enabled product design is giving providers a better way to design infusion devices, which doctors rely on to administer dosages of vital fluids like nutrients and medications to patients.

Using design software on Microsoft Surface Studio and the visualization capabilities of Microsoft HoloLens, the companies are armed with tools that improve collaboration and innovation in the manufacturing process. Holograms and 3D capabilities allow for product manipulation and accelerate group collaboration and design refinement prior to a product’s introduction on the factory floor.

Using design software on Microsoft Surface Studio and the visualization capabilities of Microsoft HoloLens, companies are armed with tools that improve collaboration and innovation in the manufacturing process.


Intuitive and transparent communication for a more productive and hands-free work day

Trekstor, a leading OEM offering a wide range of products in the areas of electronic entertainment and information technology, is introducing a new Windows 10 IoT Core based B2B commercial-grade wearable for industry. The TrekStor IoT wearable is cloud connected and can run Universal Windows Platform applications. It is as secure and manageable as any other Windows device, and it leverages Microsoft Azure cloud services such as Microsoft Cognitive Services.

The compact and intuitive wearable can replace a larger hand-held device in multiple line-of-business scenarios including:

  • Inventory management in retail
  • Building automation for guest services in hospitality
  • Industrial automation in manufacturing
  • Patient care in healthcare
  • And several cross-industry scenarios like asset management, fleet management, and others.

The TrekStor IoT Wearable allows status updates to be reported to the right place without communication detours, and messages can be transmitted silently in real time or verbally via voice messages.

The 1.54 inch TrekStor IoT wearable is both Wi-Fi and Bluetooth enabled, has plenty of storage space and has the processing power and battery life to perform its line-of-business functions. The wearable can also survive a hard day at work without scratches thanks to the soft casing and Gorilla Glass 3.

More information on this product will be available in the coming months.

The TrekStor IoT wearable is powerful and persistent with battery life that survives a long working day.


Robots helping make work safer

With more than 8 million people injured on the job each year in the U.S. alone, there is demand for robots that can carry out tasks that are too dangerous or difficult for humans. These robots are expected to make critical contributions in industries, including construction, manufacturing, oil and gas, mining, infrastructure inspection, logistics, public safety, and military.

Sarcos Robotics is a global leader in the development of dexterous industrial robots for use in unstructured and unpredictable environments that augment humans, not replace them. Its Guardian line of robots significantly reduces the risk of injury and the cost of performing many of the world’s most dangerous and difficult tasks. Sarcos is collaborating with Microsoft on a Robot-as-a-Service offering that uses Microsoft Cognitive Services and the Azure IoT Suite, and leverages Windows 10 for the tablet controller, creating a unique opportunity to fundamentally transform the safety and efficiency of many industrial tasks around the world.

For example, the Sarcos Guardian S robot is intuitive to use and can be tele-operated from miles away using a Windows 10 tablet, reliably traversing challenging terrain, passing through narrow culverts and pipes, or scaling the inside and outside of storage tanks, pipes, maritime vessels, vehicles and other vertical surfaces. The robot uses a full suite of sensors and facilitates two-way real-time video and voice communications with Azure ML and Windows 10 to detect anomalies within the tanks and learn from service technicians over time.

The Sarcos Guardian S can be tele-operated from miles away using a Windows 10 tablet to reliably traverse challenging terrain.


Microsoft offers end-to-end secure IoT solutions, from Windows 10 devices to the Azure cloud. Trekstor is showcasing this potential with its Windows 10 IoT Core based wearable, while Sarcos takes advantage of Azure IoT to offer innovative solutions that solve manufacturing business challenges. Examples include receiving real-time KPI tracking while both hands are occupied in production work, and executing dangerous tasks while keeping employees safe.

Visit the Microsoft booth at Hannover Messe 2017 to check out these and other great customer and partner solutions which are revolutionizing the industry and enabling manufacturers to be more efficient, secure and collaborative. To learn more about all the customer and partner solutions being showcased at Hannover Messe 2017, check out the Official Microsoft Blog.

Read More

The “Internet of Stranger Things” Wall, Part 3 – Voice Recognition and Intelligence


I called this project the “Internet of Stranger Things,” but so far, there hasn’t been an internet piece. In addition, there really hasn’t been anything that couldn’t be easily accomplished on an Arduino or a Raspberry Pi. I wanted this demo to have more moving parts to improve the experience and also demonstrate some cool technology.

First is voice recognition. Proper voice recognition typically takes a pretty decent computer and a good OS. This isn’t something you’d generally do on an Arduino alone; it’s simply not designed for that kind of workload.

Next, I wanted to wire it up to the cloud, specifically to a bot. The interaction in the show is a conversation between two people, so this was a natural fit. Speaking of “natural,” I wanted the bot to understand many different forms of the questions, not just a few hard-coded questions. For that, I wanted to use the Language Understanding Intelligent Service (LUIS) to handle the parsing.

This third and final post covers:

  • Adding Windows Voice Recognition to the UWP app
  • Creating the natural language model in LUIS
  • Building the Bot Framework Bot
  • Tying it all together

You can find the other posts here:

If you’re not familiar with the wall, please go back and read part one now. In that, I describe the inspiration for this project, as well as the electronics required.

Adding Voice Recognition

In the TV show, Joyce doesn’t type her queries into a 1980s era terminal to speak with her son; she speaks aloud in her living room. I wanted to have something similar for this app, and the built-in voice recognition was a natural fit.

Voice recognition in Windows 10 UWP apps is super-simple to use. You have the option of using the built-in UI, which is nice but may not fit your app style, or simply letting the recognition happen while you handle events.

There are good samples for this in the Windows 10 UWP Samples repo, so I won’t go into great detail here. But I do want to show you the code.

To keep the code simple, I used two recognizers. One is for basic local echo testing, especially useful if connectivity in a venue is unreliable. The second is for sending to the bot. You could use a single recognizer and then just check some sort of app state in the events to decide if you were doing something for local echo or for the bot.
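If you prefer the single-recognizer approach, the app-state check can be sketched like this (the mode enum and routing method below are hypothetical illustrations, not part of the actual app):

```csharp
using System;

public enum RecognitionMode
{
    LocalEcho,  // show the recognized text in the echo box
    AskBot      // send the recognized text to the bot
}

public static class RecognitionRouter
{
    // Decide what to do with a final recognition result based on app state.
    // In the real app this decision would live in the recognizer's event handler.
    public static string Route(RecognitionMode mode, string recognizedText)
    {
        switch (mode)
        {
            case RecognitionMode.LocalEcho:
                return "echo:" + recognizedText;
            case RecognitionMode.AskBot:
                return "bot:" + recognizedText;
            default:
                throw new ArgumentOutOfRangeException(nameof(mode));
        }
    }
}
```

Two separate recognizers avoid this bookkeeping entirely, which is why the simpler route was taken here.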

First, I initialized the two recognizers and wired up the two events that I care about in this scenario.

SpeechRecognizer _echoSpeechRecognizer;
SpeechRecognizer _questionSpeechRecognizer;

private async void SetupSpeechRecognizer()
{
    _echoSpeechRecognizer = new SpeechRecognizer();
    _questionSpeechRecognizer = new SpeechRecognizer();

    await _echoSpeechRecognizer.CompileConstraintsAsync();
    await _questionSpeechRecognizer.CompileConstraintsAsync();

    _echoSpeechRecognizer.HypothesisGenerated +=
        OnEchoSpeechRecognizerHypothesisGenerated;
    _echoSpeechRecognizer.StateChanged +=
        OnEchoSpeechRecognizerStateChanged;

    _questionSpeechRecognizer.HypothesisGenerated +=
        OnQuestionSpeechRecognizerHypothesisGenerated;
    _questionSpeechRecognizer.StateChanged +=
        OnQuestionSpeechRecognizerStateChanged;
}
The HypothesisGenerated event lets me show real-time recognition results, much like when you use Cortana voice recognition on your PC or phone. In that event handler, I just display the results. The only real purpose of this is to show that some recognition is happening in a way similar to how Cortana shows that she’s listening and parsing your words. Note that the hypothesis and the state events come back on a non-UI thread, so you’ll need to dispatch them like I did here.

private async void OnEchoSpeechRecognizerHypothesisGenerated(
        SpeechRecognizer sender,
        SpeechRecognitionHypothesisGeneratedEventArgs args)
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        EchoText.Text = args.Hypothesis.Text;
    });
}

The next is the StateChanged event. This lets me alter the UI based on what is happening. There are lots of good practices here, but I took an expedient route and simply changed the background color of the text box. You might consider running an animation on the microphone or something when recognition is happening.

private SolidColorBrush _micListeningBrush = 
                     new SolidColorBrush(Colors.SkyBlue);
private SolidColorBrush _micIdleBrush = 
                     new SolidColorBrush(Colors.White);

private async void OnEchoSpeechRecognizerStateChanged(
        SpeechRecognizer sender,
        SpeechRecognizerStateChangedEventArgs args)
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        switch (args.State)
        {
            case SpeechRecognizerState.Idle:
                EchoText.Background = _micIdleBrush;
                break;

            default:
                EchoText.Background = _micListeningBrush;
                break;
        }
    });
}

I have equivalent handlers for the two events for the “ask a question” speech recognizer as well.

Finally, some easy code in the button click handler kicks off recognition.

private async void DictateEcho_Click(object sender, RoutedEventArgs e)
{
    var result = await _echoSpeechRecognizer.RecognizeAsync();

    EchoText.Text = result.Text;
}

The end result looks and behaves well. The voice recognition is really good.


So now we can talk to the board from the UWP PC app, and we can talk to the app using voice. Time to add just a little intelligence behind it all.

Creating the Natural Language Model in LUIS

The backing for the wall is a bot in the cloud. I wanted the bot to be able to answer questions, but I didn’t want to have the exact text of the question hard-coded in the bot. If I wanted to hard-code them, a simple web service or even local code would do.

What I really want is the ability to ask questions using natural language, and map those questions (or Utterances, as they’re called in LUIS) to specific master questions (or Intents in LUIS). That way, I can ask the questions a few different ways, but still get back an answer that makes sense. My colleague, Ryan Volum, helped me figure out how LUIS worked. You should check out his Getting Started with Bots Microsoft Virtual Academy course.

So I started thinking about the types of questions I wanted answered, and the various ways I might ask them.

For example, when I want to know where Will is, I could ask, “Where are you hiding?” or “Tell me where you are!” or “Where can I find you?” When checking to see if someone is listening, I might ask, “Are you there?” or “Can you hear me?” As you can imagine, hard-coding all these variations would be tedious, and would certainly miss out on ways someone else might ask the question.

I then created those in LUIS, with each master question as an Intent and each way I could think of asking that question trained as an utterance mapped to that intent. Generally, the more utterances I add, the better the model becomes.
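To make the mapping concrete, a heavily trimmed-down sketch of such a model might look like this (the intent names and utterances below are illustrative examples, not an export of the actual trained model):

```json
{
  "intents": [
    { "name": "CheckPresence" },
    { "name": "AskLocation" }
  ],
  "utterances": [
    { "text": "are you there",        "intent": "CheckPresence" },
    { "text": "can you hear me",      "intent": "CheckPresence" },
    { "text": "where are you hiding", "intent": "AskLocation" },
    { "text": "tell me where you are","intent": "AskLocation" }
  ]
}
```

Each utterance is just one phrasing of its master question; LUIS generalizes from these examples to phrasings it hasn’t seen.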


The above screen shot is not the entire list of Intents; I added a number of other Intents and continued to train the model.

For a scenario such as this, training LUIS is straightforward. My particular requirements didn’t include any entities or regex, or any connections to a document database or Azure Search. If you have a more complex dialog, there’s a ton of power in LUIS to make the model as robust as you need, and to train it with errors and utterances found in actual use. If you want to learn more about LUIS, I recommend watching Module 5 in the Getting Started with Bots MVA.

Once my LUIS model was set up and working, I needed to connect it to the bot.

Building the Bot Framework Bot

The bot itself was the last thing I added to the wall. In fact, in my first demo of the wall, I had to type the messages into the app instead of sending them out to a bot. Interesting, but not exactly what I was looking for.

I used the generic Bot Framework template and instructions from the Bot Framework developer site. This creates a generic bot, a simple C# web service controller, which echoes back anything you send it.

Next, following the Bot Framework documentation, I integrated LUIS into the bot. First, I created the class which derived from LuisDialog, and added in code to handle the different intents. Note that this model is changing over time; there are other ways to handle the intents using recognizers. For my use, however, this approach worked just fine.

The answers from the bot are very short, and I keep no context. Responses from the Upside Down need to be short enough to light up on the wall without putting everyone to sleep reading a long dissertation letter by letter.

namespace TheUpsideDown
{
    // Reference:
    // https://docs.botframework.com/en-us/csharp/builder/sdkreference/dialogs.html

    // Partial class is excluded from project. It contains keys:
    //
    // [Serializable]
    // [LuisModel("model id", "subscription key")]
    // public partial class UpsideDownDialog
    // {
    // }

    public partial class UpsideDownDialog : LuisDialog<object>
    {
        [LuisIntent("")]
        public async Task None(IDialogContext context, LuisResult result)
        {
            string message = $"Eh";
            await context.PostAsync(message);
        }

        [LuisIntent("CheckPresence")]
        public async Task CheckPresence(IDialogContext context, LuisResult result)
        {
            string message = $"Yes";
            await context.PostAsync(message);
        }

        [LuisIntent("AskName")]
        public async Task AskName(IDialogContext context, LuisResult result)
        {
            string message = $"Will";
            await context.PostAsync(message);
        }

        [LuisIntent("FavoriteColor")]
        public async Task FavoriteColor(IDialogContext context, LuisResult result)
        {
            string message = $"Blue ... no Gr..ahhhhh";
            await context.PostAsync(message);
        }

        [LuisIntent("WhatIShouldDoNow")]
        public async Task WhatIShouldDoNow(IDialogContext context, LuisResult result)
        {
            string message = $"Run";
            await context.PostAsync(message);
        }
    }
}

Once I had that in place, it was time to test. The easiest way to test before deployment is to use the Bot Framework Channel Emulator.

First, I started the bot in my browser from Visual Studio. Then, I opened the emulator and plugged in the URL from the project properties, and cleared out the credentials fields. Next, I started typing in questions that I figured the bot should be able to handle.


It worked great! I was pretty excited, because this was the first bot I had ever created, and not only did it work, but it also had natural language processing. Very cool stuff.

Now, if you notice in the picture, there are red circles on every reply. It took a while to figure out what was up. As it turns out, the template for the bot includes an older version of the NuGet bot builder library. Once I updated that to the latest version (3.3 at this time), the “Invalid Token” error that local IIS was throwing went away.

Be sure to update the bot builder library NuGet package to the latest version.

Publishing and Registering the Bot

Next, it was time to publish it to my Azure account so I could use the Direct Line API from my client app, and also so I could make the bot available via other channels. I used the built-in Visual Studio publish (right click the project, click “Publish”) to put it up there. I had created the Azure Web App in advance.


Next, I registered the bot on the Bot Framework site. This step is necessary to be able to use the Direct Line API and make the bot visible to other channels. I had some issues getting it to work at first, because I didn’t realize I needed to update the credential information in the web.config of the bot service. The BotId field in the web.config can be almost anything. Most tutorials skip telling you what to put in that field, and it doesn’t match up with anything on the portal.
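For reference, the relevant appSettings section of the bot service’s web.config looks something like this (the values shown are placeholders; the free-form BotId value here is just an example):

```xml
<appSettings>
  <!-- BotId is free-form; it does not need to match anything in the portal -->
  <add key="BotId" value="MyBotId" />
  <!-- These two must match the values from the bot registration portal -->
  <add key="MicrosoftAppId" value="(app id from registration)" />
  <add key="MicrosoftAppPassword" value="(password from registration)" />
</appSettings>
```

If the app ID and password don’t match the registration, you’ll see authentication failures like the ones described above.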


As you can see, there are a few steps involved in getting the bot published and registered. For the Azure piece, follow the same steps as you would for any Web App. For the bot registration, be sure to follow the instructions carefully, and keep track of your keys, app IDs, and passwords. Take your time the first time you go through the process.

You can see in the previous screen shot that I have a number of errors shown. Those errors were because of that NuGet package version issue mentioned previously. It wasn’t until I had the bot published that I realized there was an error, and went back and debugged it locally.

Testing the Published Bot in Skype

I published and registered the bot primarily to be able to use the Direct Line channel. But it’s a bot, so it makes sense to test it using a few different channels. Skype is a pretty obvious one, and is enabled by default, so I hit that first.


Through Skype, I was able to verify that it was published and worked as expected.

Using the Direct Line API

When you want to communicate to a bot from code, a good way to do it is using the Direct Line API. This REST API provides an additional layer of authentication and keeps everything within a structured bot framework. Without it, you might as well just make direct REST calls.

First, I needed to enable the Direct Line channel in the bot framework portal. Once I did that, I was able to configure it and get the super-secret key which enables me to connect to the bot. (The disabled field was a pain to try and copy/paste, so I just did a view source, and grabbed the key from the HTML.)


That’s all I needed to do in the portal. Next, I needed to set up the client to speak to the Direct Line API.

First, I added the Microsoft.Bot.Connector.DirectLine NuGet package to the UWP app. After that, I wrote a pretty small amount of code for the actual communication. Thanks to my colleague, Shen Chauhan (@shenchauhan on Twitter), for providing the boilerplate in his Hunt the Wumpus app.

private const string _botBaseUrl = "(the url to the bot /api/messages)";
private const string _directLineSecret = "(secret from direct line config)";

private DirectLineClient _directLine;
private string _conversationId;

public async Task ConnectAsync()
{
    _directLine = new DirectLineClient(_directLineSecret);

    var conversation = await _directLine.Conversations
                                .NewConversationWithHttpMessagesAsync();
    _conversationId = conversation.Body.ConversationId;

    System.Diagnostics.Debug.WriteLine("Bot connection set up.");
}

private async Task<string> GetResponse()
{
    var httpMessages = await _directLine.Conversations
                                .GetMessagesWithHttpMessagesAsync(_conversationId);

    var messages = httpMessages.Body.Messages;

    // our bot only returns a single response, so we won't loop through
    // First message is the question, second message is the response
    if (messages?.Count > 1)
    {
        // select latest message -- the response
        var text = messages[messages.Count - 1].Text;
        System.Diagnostics.Debug.WriteLine("Response from bot was: " + text);

        return text;
    }
    else
    {
        System.Diagnostics.Debug.WriteLine("Response from bot was empty.");
        return string.Empty;
    }
}

public async Task<string> TalkToTheUpsideDownAsync(string message)
{
    System.Diagnostics.Debug.WriteLine("Sending bot message");

    var msg = new Message();
    msg.Text = message;

    await _directLine.Conversations.PostMessageAsync(_conversationId, msg);

    return await GetResponse();
}

The client code calls the TalkToTheUpsideDownAsync method, passing in the question. That method fires off the message to the bot, via the Direct Line connection, and then waits for a response.

Because the bot sends only a single message, and only in response to a question, the response comes back as two messages: the first is the message sent from the client, the second is the response from the service. This helps to provide context.

Finally, I wired it to the SendQuestion button on the UI. I also wrapped it in calls to start and stop the MIDI clock, giving us a bit of Stranger Things thinking music while the call is being made and the result displayed on the LEDs.

private async void SendQuestion_Click(object sender, RoutedEventArgs e)
{
    // start music

    // send question to service
    var response = await _botInterface.TalkToTheUpsideDownAsync(QuestionText.Text);

    // display answer
    await RenderTextAsync(response);

    // stop music
}
With that, it is 100% complete and ready for demos!

What would I change?

If I were to start this project anew today and had a bit more time, there are a few things I might change.

I like the voice recognition, Bot Framework, and LUIS stuff. Although I could certainly make the conversation more interactive, there’s really nothing I would change there.

On the electronics, I would use a breadboard-friendly Arduino, not hot-glue an Arduino to the back. It pains me to have hot-glued the Arduino to the board, but I was in a hurry and had the glue gun at hand.

I would also use a separate power supply for LEDs. This is especially important if you wish to light more than one LED at a time, as eventually, the Arduino will not be able to support the current draw required by many LED lights.

If I had several weeks, I would have my friends at DF Robot spin a board that I design, rather than use a regular breadboard, or even a solder breadboard. I generally prefer to get boards spun for projects, as they are more robust, and DF Robot can do this for very little cost.

Finally, I would spend more time to find even uglier wallpaper <g>.

Here’s a photo of the wall, packaged up and ready for shipment to Las Vegas (at the time of this writing, it’s in transit), waiting in my driveway. The box was 55” tall, around 42” wide and 7” thick, but only about 25 lbs. It has ¼” plywood on both faces, as well as narrower pieces along the sides. In between the plywood is 2” thick rigid insulating foam. Finally, the corners are protected with the spongier corner foam that came with that box.

It costs a stupid amount of money to ship something like that around, but it’s worth it for events. 🙂


After this, it’s going to Redmond where I’ll record a video walkthrough with Channel 9 during the second week of November.

What Next?

Windows Remote Wiring made this project quite simple to do. I was able to use the tools and languages I love to use (like Visual Studio and C#), but still get the IO of a device like the Arduino Uno. I was also able to use facilities available to a UWP app, and call into a simple bot of my own design. In addition to all that, I was able to use voice recognition and MIDI all in the same app, in a way that made sense.

The Bot Framework and LUIS stuff was all brand new to me, but was really fun to do. Now that I know how to connect app logic to a bot, there will certainly be more interactive projects in the future.

This was a fun project for me. It’s probably my last real maker project of the fall/winter, as I settle into the fall home renovation work and also gear up for the NAMM music event in January. But luckily, there have been many other posts here about Windows 10 IoT Core and our maker and IoT-focused technology. If this topic is interesting to you, I encourage you to take a spin through the archives and check them out.

Whatever gift-giving and receiving holiday you celebrate this winter, be sure to add a few Raspberry Pi 3 devices and some Arduino Uno boards on your list, because there are few things more enjoyable than cozying up to a microcontroller or some IoT code on a cold winter’s day. Oh, and if you steal a strand or two of lights from the tree, I won’t tell. 🙂


Questions or comments? Have your own version of the wall, or used the technology described here to help rid the universe of evil? Post below and follow me on Twitter @pete_brown

Most of all, thanks for reading!

Read More

New and updated Microsoft IoT Kits

Earlier this month, we released to customers around the world a new Windows Insider version of Windows 10 IoT Core that supports the brand new Intel® Joule™. We’ve been working hard on Windows 10 IoT Core, we’re proud of the quality and capability of IoT Core Insider releases and we’re humbled by the enthusiasm that you’ve shown in using it to build innovative devices and downright cool Maker projects.

We’ve spoken to thousands of you around the world at both commercial IoT events and Maker Faires, and in many of these conversations you have asked for better ways to get started: how to find the quickest path to device experimentation using Windows 10 and Azure. We’ve heard your feedback, and today I’d like to talk about how this is manifesting in two new IoT starter kits from our partners: the Microsoft Internet of Things Pack for Raspberry Pi 3 by Adafruit, and the brand new Seeed Grove Starter Kit for IoT based on Raspberry Pi by Seeed Studio.

Back in September of 2015, we partnered with Adafruit to make a Raspberry Pi 2 based Windows 10 IoT Core Starter Kit available. This kit was designed to get you started quickly and easily on your path to learning electronics, Windows 10 IoT Core, and the Raspberry Pi 2. Adafruit had tremendous success with this kit, and we’re happy to announce that they are releasing a new version of it.


This new kit keeps its focus on helping you get started quickly and easily in the world of IoT, but includes an upgrade to the new Raspberry Pi 3.

The best thing about this update? The price is the same as before.

The newest kit, the Grove Starter Kit for IoT based on Raspberry Pi from Seeed Studio, builds on the great design work that Seeed and their partner Dexter Industries have done around the Grove connector. It utilizes a common connector from the large array of available sensors to simplify the task of connecting to the device platform. This helps you focus on being creative instead of worrying about soldering electrical connections.


The selection of compatible modular devices extends way beyond those that are included in the kit, making this applicable to starters, Makers and Maker Pros. The Seeed Kit can be ordered from the Microsoft Store, Seeed Studio or you can also acquire the kit from Digi-Key.

We’re excited about how these kits help enable everyone, from those with no experience to those who prototype for a living, to quickly get started making new devices with Windows 10 IoT Core, Azure IoT and the Raspberry Pi 3.

We can’t wait to see what you make!

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

Read More

Device to Device Communication with Azure IoT Hub

We’ve already seen how to get your devices to send data to the cloud. In most cases, the data is then processed by various Azure services. However, in some cases the cloud is nothing more than a means of communication between two devices.

Imagine a Raspberry Pi connected to a garage door. Telling the Raspberry Pi to open or close the door from a phone should only require a minimal amount of cloud programming. There is no data processing; the cloud simply relays a message from the phone to the Pi. Or perhaps the Raspberry Pi is connected to a motion sensor and you’d like an alert on your phone when it detects movement.
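As a concrete sketch of the garage-door scenario, the phone could send a small text command in the message body, and the Pi would parse it into an action before driving the door. The command format, enum, and method below are made up for illustration, not part of the sample:

```csharp
using System;

public enum DoorAction { Open, Close, Unknown }

public static class GarageDoor
{
    // Parse the body of a cloud-to-device message into a door action.
    // Unknown commands are ignored rather than acted on.
    public static DoorAction ParseCommand(string messageBody)
    {
        if (string.IsNullOrWhiteSpace(messageBody))
            return DoorAction.Unknown;

        switch (messageBody.Trim().ToLowerInvariant())
        {
            case "open":  return DoorAction.Open;
            case "close": return DoorAction.Close;
            default:      return DoorAction.Unknown;
        }
    }
}
```

The cloud never needs to understand this payload; it just relays the bytes from the phone to the Pi.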

In this blog post, we will experiment with implementing device-to-device communication with as little cloud-side programming as possible. A common pipeline for device-to-device communication involves device A sending a message to the cloud, the cloud processing the message and sending it to device B, and device B receiving this message. By minimizing that middle step, you can create a functional app that only requires the free tier of Azure IoT Hub. It’s a cheap and effective way to design device-to-device communication. So, can two devices talk to each other with almost no cloud-based programming?

The answer, of course, is yes. To make this work, it is important to understand how Azure IoT Hub and the Azure IoT messaging APIs work. Currently, Azure IoT messaging involves two different APIs: Microsoft.Azure.Devices.Client is used in the app running on the device (it can send device-to-cloud and receive cloud-to-device messages), while the Microsoft.Azure.Devices SDK and the ServiceBus SDK are used on the service side (they can send cloud-to-device messages and receive device-to-cloud messages). However, our design proposes something slightly unorthodox: we will run the service SDK on the device receiving messages, so less code goes into the cloud.

To take advantage of the latest advances in security, we will provision our device to securely connect to Azure with the help of the TPM (see our earlier blog post that introduced TPM).

This approach uses a many-to-one messaging model. It allows for a simple design, but limits our capabilities. While many devices can send messages, only one can receive. In order to only accept messages from a specific device, the receiver will filter the messages by the device id.
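IoT Hub stamps each device-to-cloud event with the sender's identity (the `iothub-connection-device-id` system property), which is what the receiver can key its filter on. A minimal Python sketch of that filter, using a plain dict to stand in for a received event (the event shape here is illustrative, not the SDK's message type):

```python
DEVICE_ID_PROP = "iothub-connection-device-id"

def from_device(event, allowed_device_id):
    """Accept only events whose system annotations show they were
    sent by the expected device."""
    return event.get("annotations", {}).get(DEVICE_ID_PROP) == allowed_device_id

# Simulated event stream: one event from our garage Pi, one from a stranger.
events = [
    {"annotations": {DEVICE_ID_PROP: "garage-pi"}, "body": b"door-open"},
    {"annotations": {DEVICE_ID_PROP: "some-other-device"}, "body": b"noise"},
]
accepted = [e for e in events if from_device(e, "garage-pi")]
```

With this filter in place, the single receiver in the many-to-one model simply drops anything that didn't come from the device it cares about.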

How does all this work?

For a full sample, see the code here. There are two solutions within this project, as described above. The use of the SDKs in each solution remains mostly unchanged from the standard design outlined here. We decided to run the service-side SDK on the receiving device – however, there is one roadblock. One of the two service-side SDKs, ServiceBus, does not support UWP. Fortunately, another library called AMQPNetLite offers a UWP-compatible alternative that can be used to send and receive messages on the service side. This requires a little more work: we needed to connect to the Event Hubs-compatible endpoint that IoT Hub exposes, create a session and build the receiver link.

All the connection information needed to set up a receiver with AMQPNetLite can be found in your instance of IoT Hub. You can also use this library to filter incoming messages by device id. See this sample for further details.
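As an illustration of the connection details involved, the receiver link attaches to a partition-specific node on the hub's Event Hubs-compatible endpoint. A small helper sketch of how that node address is typically composed (shown in Python for brevity; the sample itself is C#, and you should verify the exact format against your own IoT Hub's settings):

```python
def receiver_address(event_hub_name, consumer_group, partition_id):
    """Compose the AMQP node address a receiver link attaches to on the
    Event Hubs-compatible endpoint (Event Hubs AMQP convention)."""
    return f"{event_hub_name}/ConsumerGroups/{consumer_group}/Partitions/{partition_id}"

addr = receiver_address("myhub", "$Default", 0)
```

A full receiver would open one link per partition; the connection string, event hub name and partition count all come from the IoT Hub portal, as noted above.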

What next?

This experiment intentionally keeps the amount of cloud-based programming to a minimum (zero, really). Even so, it opens up a set of new opportunities: with this, an IoT device can be remote-controlled by any Windows device. However, the system has limitations. Complex message filtering is currently not supported, and extending the solution to be cross-platform (using Android or iOS devices) also proves difficult, as AMQPNetLite is not compatible with Xamarin.

If you’re willing to do more and use more cloud services (including paid ones), advanced messaging patterns, sophisticated data analysis and long-term storage become possible. In particular, Azure Functions allows you to run the receiving code in the cloud, which obviates the need to run AMQPNetLite on the client device.

This blog post focused on the simplest pipeline for device-to-device communication, but what we have built is by no means the only solution. We’re eager to hear your feedback and welcome your ideas on what more can be done.

Divya Mahadevan, a software engineer intern, contributed to this piece (thanks, Divya!)

Read More

Announcing Intel® Joule™ support in Windows 10 IoT Core Anniversary Edition

Two weeks ago I was excited to see us unveil Windows 10 IoT Core Anniversary Edition running on the new Intel® Joule™, built on the new Intel® Atom™ T5700 SoC. It took us a little over six weeks to bring up Windows 10 IoT Core and build Bamboo, the first Windows 10 IoT Core and Intel® Joule™ powered robot. We look forward to seeing what you think of Windows 10 IoT Core for the Intel® Joule™ when it becomes available as part of the Windows Insider Program release scheduled for September.


Bamboo, the first Windows 10 IoT Core robot running on the Intel® Joule™

To give you a peek into what you can do with this, we created Bamboo, a companion robotic panda. Running Windows 10 IoT Core on the Intel® Joule™ compute platform, Bamboo connects to cloud-based Microsoft Azure and Cognitive Services, which provide sentiment analysis and translation: you can speak to Bamboo in any language and she can translate it into her native English, and she can gauge your mood via Twitter. Bamboo can move about and build an understanding of her environment with the compute platform and an Intel® RealSense™ camera. Additionally, she is animated via the EZ-Robot EZ-B control system.

All of this happened at the Intel Developer Forum 2016 in San Francisco. At that event, we also had a number of partners showing off what Windows 10 IoT Core can do.

EZ-Robot integrated the Intel® Joule™ with Windows 10 IoT Core into their EZ-B v5, making EZ-Robot the first custom integrator of the Intel® Joule™ using Windows 10 IoT Core. EZ-Robot displayed the new control module and multiple robots running on this new platform. Using the EZ-Builder software for Windows and a newly released EZ-Robot UWP library, you can design and animate your robots on a Windows PC, as well as auto-generate and export UWP control code to run on the EZ-B v5. We used this functionality to animate Bamboo. The bring-up and integration of the EZ-B v5 took just under six weeks.


Design Mill highlighted Torch, a mixed reality interactive gaming table. Built on the Intel® Joule™, the Intel RealSense camera and Windows 10 IoT Core, Torch enables interactive gaming that mixes perception and projection, blending the physical and digital worlds.


Great partnerships like the ones we have with Intel, EZ-Robot and Design Mill, which produce boards and systems, join our other outstanding partners like Raspberry Pi, Arduino, Adafruit and Seeed Studio in helping make Windows 10 IoT Core the best platform to build your IoT solutions on. When you add in Microsoft Azure IoT and Microsoft Cognitive Services, you get a cloud-connected, manageable, intuitive and, above all, human-like platform to build upon.

The Windows Insider Program makes it very easy for you to get access to all of the above, as well as the latest pre-releases of Windows 10 IoT Core (which will soon support the Intel® Joule™ module). Once you’ve installed a pre-release, the OS automatically upgrades, so you’ll be able to try out the latest features in each new pre-release of Windows 10 IoT Core as we ship it – no more looking for updates on web pages or developer centers. We’re excited by this, and we hope you’ll take the opportunity to join the many developers who are already using the Windows Insider Program.

We can’t wait to see what you make!

Read More

Hyper-V Hot Topics – July

Hello everyone! July is behind us, and that means it’s time for another edition of our Hyper-V Hot Topics series! As a reminder, this series rounds up interesting links and news about Hyper-V from the previous month that I’ve found helpful and useful. I also like to post my Hyper-V Monday Minute recordings from the past month. For those who aren’t aware, I put on what I call the Hyper-V Monday Minute every Monday at 2:00 PM Eastern time, where I talk about some topic from the Hyper-V world. I’ve used a number of different formats for this, but have now settled on Facebook Live. If you’re interested in subscribing to that segment, you can do so by liking the Altaro Software Facebook page HERE. The idea here is to serve as your one-stop-shop information source for… Read More»

Read the post here: Hyper-V Hot Topics – July

Read More

Microsoft grants help kids learn computer science, Earth Day is celebrated and influential engineer is honored — Weekend Reading: April 22 edition

From a huge effort to help kids realize their potential to a celebration of our dear old planet, this week brought plenty of interesting and inspiring news around Microsoft. We’ve rounded up some of the highlights in this latest edition of Weekend Reading.

Earlier this week, Microsoft announced grants to 100 nonprofit partners in 55 countries as part of YouthSpark, a global initiative to increase access for young people to learn computer science. In turn, these nonprofit partners — such as Laboratoria, CoderDojo and City Year — will use the power of local schools, businesses and community organizations to empower students to achieve more for themselves, their families and their communities.

The nonprofits will build upon the work that Microsoft already has underway through programs like Hour of Code with Code.org, BBC micro:bit and TEALS.

“Every young person should have an opportunity, a spark, to realize a more promising future,” Mary Snapp, corporate vice president and head of Microsoft Philanthropies, wrote in a blog post on Wednesday. “Together with our nonprofit partners, we are excited to take a bold step toward that goal today.”


Wondering what the next wave of breakthrough technology will be? Harry Shum, executive vice president of Microsoft Technology and Research, calls it an “invisible revolution,” and it’s transforming farming, allowing people from different cultures to communicate, helping people breathe healthier air, preventing disease outbreaks and much more.

“We are on the cusp of creating a world in which technology is increasingly pervasive but is also increasingly invisible,” Shum said.

This week on the Microsoft Facebook page, we joined the invisible revolution to preview the latest, most cutting-edge developments in artificial intelligence, machine learning and cloud computing. The possibilities are endless.


Computer industry luminaries honored Dave Cutler, a Microsoft senior technical fellow whose impressive body of work spans five decades, as a Computer History Museum Fellow. The 74-year-old has shaped entire eras. He worked to develop the VMS operating system for Digital Equipment Corporation in the late 1970s, had a central role in the development of Windows NT — the basis for all major versions of Windows since 1993 — and helped develop the Microsoft Azure cloud operating system and the hypervisor for Xbox One that allows the console to be more than just for gaming.

“The Fellow awards recognize people who’ve had a tremendous impact on our lives, on our culture, on the way we work, exchange information and live,” said John Hollar, the museum’s president and CEO. “People like Dave Cutler, who probably influences the computing experiences of more than 2 billion people, yet isn’t known in a way he deserves to be, in proportion to the impact he’s had on the world.”


Microsoft Philanthropies sponsored the annual We Day, supporting exciting events Wednesday in Seattle and earlier this month in Los Angeles. Nearly 30,000 attended the shows, which celebrate young people who are making a difference.

In supporting We Day, Microsoft aims to help young people drive the change they would like to see in their neighborhoods, schools and communities. Our photo gallery captures the highlights, famous faces and young people who were involved in this year’s events.


In advance of Earth Day on Friday, Microsoft kicked off this week with inspiration and information about the company’s sustainability programs and initiatives, including ways you can take part in the efforts. The brand-new Environmental Sustainability at Microsoft website details how Microsoft’s company-wide carbon fee has financed significant investments in renewable energy to power its data centers, improved building efficiency and reached more than 6 million people through the purchase of carbon offsets from community projects around the world.

Microsoft, which has been a carbon-neutral company since 2012, is continually finding ways to make its products and their lifecycles more earth-friendly. Learn more about how Microsoft is commemorating Earth Day on the Microsoft Green Blog.


Microsoft is also constantly working to help students achieve more. Some all-new education features coming in the Windows 10 Anniversary Update are specifically inspired by teachers and focused on students. A “Set Up School PCs” app lets teachers set up a device themselves in mere minutes, and a new “Take a Test” app provides simple and secure standardized testing for classrooms or entire schools.

Learning will also get a big boost with Microsoft Classroom and Microsoft Forms, a OneNote Class Notebook that now has Learning Management System (LMS) integration and — perhaps most exciting to students — the dawn of “Minecraft: Education Edition.” Educators will be able to give it a test run in the summer months and provide feedback and suggestions.

In apps this week, the powerful mobile photo-editing app PicsArt is marking Earth Day by offering a series of green- and outdoorsy-themed photo frame and clip art packages. Several are exclusive to Windows customers. The PicsArt app is free in the Windows Store.

Need a little help juggling projects, priorities and other moving parts in your busy life? The Todoist Windows 10 app can help you stay organized, collaborate with colleagues and even empty your inbox by turning important emails into tasks.

Or for a little fun this weekend, go way beyond retro to prehistoric days in “Age of Cavemen.” In this multiplayer strategy game, you’re the village chief in a dangerous world, and you need to keep your people safe. Build an army, create alliances and destroy your opponents in a wild and woolly free-for-all.


And that’s a wrap for this edition of Weekend Reading. See you here next week for the latest roundup.

Posted by Tracy Ith
Microsoft News Center Staff

The post Microsoft grants help kids learn computer science, Earth Day is celebrated and influential engineer is honored — Weekend Reading: April 22 edition appeared first on The Official Microsoft Blog.

Read More