Tag Archives: Internet of things

Node-ChakraCore Update: N-API, Node.js on iOS and more

Today, we are happy to announce a new preview release of ChakraCore, based on Node.js 8, available for you to try on Windows, macOS, and Linux.

We started our Node-ChakraCore journey with a focus on extending the reach of Node.js to a new platform, Windows 10 IoT Core. From the beginning, it’s been clear that in addition to growing the reach of the Node.js ecosystem, there’s a need to address real problems facing developers and the Node.js ecosystem through innovation, openness and community collaboration.

As we continue our journey to bring fresh new ideas and enable the community to imagine new scenarios, we want to take a moment to reflect on some key milestones we’ve achieved in the last year.

Full cross-platform support

While ChakraCore was born on Windows, we’ve always aspired to make it cross-platform. At NodeSummit 2016, we announced experimental support for the Node-ChakraCore interpreter and runtime on Linux and macOS.

In the year since that announcement, we’ve brought support for full JIT compilation and concurrent and partial GC on x64 to both macOS and Ubuntu Linux 14.04 and higher. This has been a massive undertaking that brings Node-ChakraCore features to parity across all major desktop operating systems. We are actively working on cross-platform internationalization to complete this support.

Support for Node.js API (N-API)

This year, our team was part of the community effort to design and develop the next-generation Node.js API (N-API) in Node.js 8, which is fully supported in Node-ChakraCore. N-API is a stable Node API layer for native modules that provides ABI compatibility guarantees across different Node versions and flavors. This allows N-API-enabled native modules to just work across different versions and flavors of Node.js, without recompilation.

According to some estimates, 30% of the module ecosystem is impacted every time there is a new Node.js release, due to the lack of ABI stability. This causes friction in Node.js upgrades in production deployments, and adds cost for native module maintainers, who must maintain several supported versions of their modules.

Node.js on iOS

We are always delighted to see the community build and extend Node-ChakraCore in novel and interesting ways. Janea Systems recently announced their experimental port of Node.js to run on iOS, powered by Node-ChakraCore. This takes Node.js to iOS for the first time, expanding the reach of the Node.js ecosystem to an entire new category of devices.

Node.js on iOS would not be possible without Node-ChakraCore. Because of the JIT restrictions on iOS, stock Node.js cannot run there. However, Node-ChakraCore can be built to use the interpreter only, with the JIT completely turned off.

This is particularly useful for scenarios like offline-first mobile apps designed with the expectation of unreliable connectivity or limited bandwidth. These apps rely primarily on a local cache on the device, and use store-and-forward techniques to opportunistically use data connectivity when available. These kinds of apps are common in scenarios like large factory floors, remote oil rigs, disaster zones, and more.

Time-Travel Debugging

This year also brought the debut of Time-Travel Debugging with Node-ChakraCore on all supported platforms, as originally demoed using VSCode at NodeSummit 2016. This innovation directly addresses one of the biggest pain points developers have with Node.js – debugging! Since its introduction, Time-Travel Debugging has improved in stability and functionality, and with this release it is available with Node-ChakraCore on Linux and macOS as well.

And much more …

This is just the start – our team has also made major investments in infrastructure automation, which have resulted in faster turnaround of Node-ChakraCore updates following the Node.js 8 release. Both stable Node-ChakraCore builds and nightlies are now available from the Node.js Foundation build system.

We recently started measuring module compatibility using CITGM, and have improved compatibility with a wide variety of modules. Maintainers of popular Node modules like node-sass, express and body-parser are considering using Node-ChakraCore in their CI systems to ensure ongoing compatibility. Node-ChakraCore’s ACMEAir performance on Linux has also improved 15% in the last two months, and we’ve identified areas for further improvement in the near future.

With our initial priority of full cross-platform support behind us, we are moving our focus to new priorities, led by performance and module compatibility. These are our primary focus for the immediate future, and we look forward to sharing progress with the community as it happens!

Get involved

As with any open source project, community participation is key to the health of Node-ChakraCore. We could not have come this far in our journey without the reviews and guidance of everyone who is active on our GitHub repo and in the broader Node community. We are humbled by your enthusiasm and wish to thank you for everything you do. We will be counting on your continued support as we make progress in our journey together.

For those looking to get involved outside of directly contributing code, there are several ways to help advance the Node-ChakraCore project. If you are a …

  1. Node.js Developer – Try testing Node-ChakraCore in your project, use Time-Travel Debugging with VSCode, and let us know how it goes.
  2. Node.js module maintainer – Try testing your module with Node-ChakraCore. Use these instructions to add Node-ChakraCore to your own CI to ensure ongoing compatibility. If you run into issues, please let us know on our repo or our Gitter channel.
  3. Native module maintainer – Consider porting your module to N-API. This will help insulate your module from breakage due to new Node releases, and will also let it work with Node-ChakraCore.

As always, we are eager to hear your feedback, so please keep it coming. Find us on Twitter @ChakraCore, on our Gitter channel, or open an issue on our GitHub repo to start a conversation.

Arunesh Chandra, Senior Program Manager, Chakra

Dev Projects for the Long Weekend

Find your favorite chair, kick your feet up and grab yourself a cup of coffee. It’s finally time to pick up some of the dev projects you’ve been eyeballing from behind the hazy fog of pre-vacation deadlines.

For the long weekend, we’ve assembled a quick list of three projects we’ll be working on between family time and scouring Stack Overflow for answers to life’s questions. Take a look below to get started!

IoT for the Whole Family

IoT projects can be both fun and practical – take a look at a few of our favorite selections below. After securing your home with an IoT Security Camera and automating the rest of your house, you might really need to think about rewarding yourself with that automated Kegerator. Just saying.

Explore the Devices and Sensors GitHub Library

Thirty-five samples to work with right here, folks! With these samples, you can get familiar with the API usage patterns that make the UWP unique. This section has code samples for accelerometers, relative inclinometers and pretty much everything in between.

Dive into Microsoft Open Source Code

We’re still buzzing with excitement about our partnership with the Linux Foundation and the steps we’re taking to make the UWP an increasingly open platform. Take a look at our existing open source projects over on GitHub to see how you can get started.

And that’s all! Have a great long weekend and remember to tweet us if you have any questions or comments!

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

In Case You Missed it – This Week in Windows Developer

Happy (belated) Halloween, Windows Devs! This past week gave 80s kids, pop culture fans and Windows Devs alike a chance to celebrate the Internet of Things.

Our very own IoT master, Pete Brown, created a series on IoT, remote wiring, voice recognition and AI inspired by the Netflix hit, Stranger Things. Check it out below!

2016-10-31_strangerthings

Internet of Stranger Things Part 1

TL;DR – go ahead and binge watch the series before getting started.

Internet of Stranger Things Part 2

Pete Brown builds a wall. But it’s more than that – Pete adds to the Internet of Stranger Things project by constructing a wall that integrates music and UWP MIDI capabilities. Learn how to cue up your very own haunting 80s synth soundtrack with part 2!

Internet of Stranger Things Part 3

The final installment of the series covers voice recognition and intelligence – two things most IoT devices don’t necessarily support. Lo and behold, Pete Brown works his IoT magic in this post.

#XboxAppDev – Adding natural inputs

This post gets personal (with input methods). Learn how to add natural, intuitive input methods to your Xbox and UWP apps.

2016-11-04_speechandink

UWP Integrations for Kinect

Grab your demo hat and get ready for the new drivers and integrations now available for Kinect and UWP. Read more in this blog post.

Windows 10 Insider Preview Build 14959 for Mobile and PC

Last, but certainly not least, we released a new build for Windows Insiders in the Fast Ring. There are quite a few updates here, most notably the new ‘Unified Update Platform,’ which helps streamline updates across your Windows 10 devices.

And that’s the week in Windows Dev! Feel free to tweet us with any questions, comments or suggestions for Pete Brown’s next example of IoT wizardry.

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.


The “Internet of Stranger Things” Wall, Part 2 – Wall Construction and Music

Overview

I do a lot of woodworking and carpentry at home. Much to my family’s chagrin, our house is in a constant state of flux. I tend to subscribe to the Norm Abram school of woodworking, where there are tools and jigs for everything. Because of this, I have a lot of woodworking and carpentry tools around. It’s not often I get to use them for my day job, but I found just the way to do it.

In part one of this series, I covered how to use Windows Remote Wiring to wire up LEDs to an Arduino, and how to control them from a Windows 10 UWP app. In this post, we’ll get to constructing the actual wall.

This post covers:

  • Constructing the Wall
  • Music and UWP MIDI
  • Fonts and Title Style

The remainder of the series will be posted this week. Once they are up, you’ll be able to find the other posts here:

  • Part 1 – Introduction and Remote Wiring
  • Part 2 – Constructing the wall and adding Music (this post)
  • Part 3 – Adding voice recognition and intelligence

If you’re not familiar with the wall, please go back and read part 1 now. In that, I described the inspiration for this project, as well as the electronics required.

Constructing the Wall

In the show Stranger Things, “the wall” that’s talked about is an actual wall in a living room. For this version, I considered a few different sizes. The wall had to be large enough to be easily visible during a keynote and other large-room presentations, but small enough that I could fit it in the back of the van, or pack it in a special box to (expensively) ship across the country. That meant it couldn’t be completely spread out like the one in the TV show. But at the same time, the letters still had to be large enough to look ok next to the full-size Christmas lights.

Finally, I didn’t want any visible seams in the letter field, or anything that would need to be rewired or otherwise modified to set it up. Seams are almost impossible to hide well once a board has traveled a bit. Plus, demo and device-heavy keynote setup is always very time-constrained, so I needed to make sure I could have the whole thing set up in just a few minutes. Whenever I come to an event, the people running it are stunned by the amount of stuff I put on a table. I typically fill a 2×8 table with laptops, devices, cameras, and more.

I settled on using a 4’ x 4’ sheet of ½” plywood as the base, with poplar from the local home store as reinforcement around the edges. I bisected the plywood sheet into 32” and 16” pieces to make it easier to ship, and so it would easily fit in the back of the family van for the first event we drove to.

The wallpapered portion of the wall ended up being 48” wide and 32” tall. The remaining paneled portion is just under 16” tall. The removable bottom part turned out to be quite heavy, so I left it off when shipping to Las Vegas for DEVintersection.

To build the bottom panel, I considered getting a classic faux wood panel from the local Home Depot and cutting it to size. But I really didn’t want a whole 4×8 sheet of fake wood paneling lying around an already messy shop. So instead, I used left-over laminate flooring from my laundry room remodel project and cut it to length. Rather than snap the pieces tight together, I left a gap, and then painted the gaps black to give it that old 70s/80s paneling look.

picture1

picture2

The size of this version of the wall does constrain the design a bit. I didn’t try to match the same layout that the letters had in the show, except for having the right letters on the right row. The wall in the show is spaced out enough that you could easily fill a full 4×8 sheet and still look a bit cramped.

The most time-consuming part of constructing the wall was finding appropriately ugly wallpaper. Not surprisingly, a search for “ugly wallpaper” doesn’t generally bring up items for sale :). In the end, I settled for something that was in roughly the same ugliness class as the show wallpaper, but nowhere near an actual match. If you use the wallpaper I did, I suggest darkening it a bit with a tea stain or something similar. As-is, it’s a bit too bright.

Note that the price has gone up significantly since I bought it (perhaps I started an ugly wallpaper demand trend?), so I encourage you to look for other sources. If you find a source for the exact wallpaper, please do post it in the comments below!

Another option, of course, is to use your art skills and paint the “wallpaper” manually. It might actually be easier than hanging wallpaper on plywood, which, as it turns out, is not as easy as it sounds. In any case, do the hanging in your basement or some other place that will be ok with getting wet and glued up.

Here it is with my non-professional wallpaper job. It may look like I’m hanging some ugly sheets out to dry, but this is wallpaper on plywood.

picture3

When painting the letters on the board, I divided the area into three sections vertically, and used a leftover piece of flooring as a straight edge. That helped keep the rows straight, but didn’t do anything for my letter spacing / kerning.

To keep the paint looking messy, I used a cheap 1” chip brush as the paint brush. I dabbed on a bit extra in a few places to add drips, and went back over any areas that didn’t come out quite the way I wanted, like the letter “G.”

picture4

Despite measuring things out, I ran out of room when I got to WXYZ and had to squish things together a bit. I blame all the white space around the “V”. There’s a bit of a “Castle of uuggggggh” thing going on at the end of the painted alphabet.

picture5

Once the painting was complete, I used some pre-colored corner and edge trim to cover the top and bottom and make it look a bit more like the show. I attached most trim with construction glue and narrow crown staples (and cleaned up the glue after I took the above photo). If you want to be more accurate and have the time, use dark stained pine chair rail on the bottom edge, between the wallpapered section and the paneled section.

Here you can see the poplar one-by support around the edges of the plywood. I used a combination of 1×3 and 1×4 that I had around my shop. Plywood, especially plywood soaked with wallpaper paste, doesn’t like to stay flat. For that reason, as well as for shipping reasons, the addition of the poplar was necessary.

picture6

You can see some of the wiring in this photo, so let’s talk about that.

Preparing and Wiring the Christmas Lights

There are two important things to know about the Christmas lights:

  1. They are LEDs, not incandescent lamps.
  2. They are not actually wired in a string, but are instead individually wired to the control board.

I used normal 120v AC LED lights. LEDs, like regular incandescent lamps, don’t really care about AC or DC, so it’s easy enough to find LEDs to repurpose for this project. I just had to pick ones which didn’t have a separate transformer or anything odd like that. Direct 120v plug-in only.

The LED lights I sacrificed for this project are Sylvania Stay-Lit Platinum LED Indoor/Outdoor C9 Multi-Colored Christmas Lights. They had the right general size and look. I purchased two packs for this because I was only going to use the colors actually used on the show and also because I wanted to have some spares for when the C9 housings were damaged in transit, or when I blew out an LED or two.

There are almost certainly other brands that will work, as long as they are LED C9 lamps and the wires are wrapped in a way that you can unravel.

When preparing the lamps, I cut the wires approximately halfway between adjacent lamps. I also discarded any lamps which had three wires going into them, as I didn’t want to bother trying to wire those up. Additionally, I discarded the lumps in the wires where fuses or resistors were housed.

picture7

For one evening, my desk was completely covered in severed LED Christmas lamps.

Next, I figured out the polarity of the LED leads and marked them with a black marker. It’s important to know the anode from the cathode here, because wiring in reverse will both fail to work and likely burn out the LED, making subsequent attempts fail as well. Through trial and error, I found that the little notch on the inside of the lamp always pointed the same way, and that it was in the same position relative to the outside clip.

Once marked, I took note of the colors used on the show and, following the same letter/color pairings, drilled an approximately ¼” hole above each letter and inserted both wires for the appropriately colored lamp through to the back. Friction held them in place until I could come through with the hot glue gun and permanently stick them there.

From there, I linked each positive (anode) wire on the LEDs together by twisting the wires together with additional lengths of wire and taping over them with electrical tape. The wire I used here was spare wire from the light string. This formed one continuous string connecting all the LED anodes together.

Next, I connected the end of that string to the +3.3v output on the Arduino. 3.3v is plenty to run these LEDs. The connection is not obvious in the photos, but I used a screw on the side of the electronics board and wired one end to the Arduino and the other end to the light string.

Finally, I wired the negative (cathode) wires to their individual terminals on the electronics board. I used a spool of heavier stranded wire here that would hold up to twisting around the screw terminals. For speed, I used wire nuts to connect those wires to the cathode wire on the LED. That’s all the black wire you see in this photo.

picture8

To make it look like one string of lights, I ran a twisted length of the Christmas light wire pairs (from the same light kit) through the clips on each lamp. I didn’t use hot glue here, but just let it go where it wanted. The effect is such that it looks like one continuous strand of Christmas lights; you only see the wires going into the wall if you look closely.

picture9

I attached the top and bottom together using 1×3 maple boards that I simply screwed to both the top and bottom, and then disassembled when I wanted to tear it down.

gif1

The visuals were all done at that point. I could have stopped there, but one of my favorite things about Stranger Things is the soundtrack. Given that a big part of my job at Microsoft is working with musicians and music app developers, and with the team which created the UWP MIDI API, I knew I had to incorporate that into this project.

Music / MIDI

A big part of the appeal of Stranger Things is the John Carpenter-style, mostly analog synthesizer soundtrack by the band Survive (with some cameos by Tangerine Dream). John Carpenter, Klaus Schulze and Tangerine Dream have always been favorites of mine, and I can’t help but feel a shiver when I hear a good fat synth-driven soundtrack. They have remained my inspiration when recording my own music.

So, it would have been just wrong of me to do the demo of the wall without at least some synthesizer work in the background. Playing it live was not an option and I wasn’t about to bring a huge rig, so I sequenced the main arpeggio and kick drum in my very portable Elektron Analog Four using some reasonable stand-ins for the sounds.

At the events, I would start and clock the Analog Four using a button on the app and my Windows 10 UWP MIDI Library clock generator. The only lengthy part of this code is where I check for the Analog Four each time. That’s a workaround because my MIDI library, at the time of this writing, doesn’t expose the hardware add/remove event. I will fix that soon.


private void StartMidiClock()
{
    // I do this every time rather than listen for device add/remove
    // because my library didn't raise the add/remove event in this version
    SelectMidiOutputDevices();

    _midiClock.Start();

    System.Diagnostics.Debug.WriteLine("MIDI started");
}

private void StopMidiClock()
{
    _midiClock.Stop();

    System.Diagnostics.Debug.WriteLine("MIDI stopped");
}


private const string _midiDeviceName = "Analog Four";
private async void SelectMidiOutputDevices()
{
    _midiClock.OutputPorts.Clear();

    IMidiOutPort port = null;

    foreach (var descriptor in _midiWatcher.OutputPortDescriptors)
    {
        if (descriptor.Name.Contains(_midiDeviceName))
        {
            port = await MidiOutPort.FromIdAsync(descriptor.Id);

            break;
        }
    }

    if (port != null)
    {
        _midiClock.OutputPorts.Add(port);
    }
}

For this code to work, I just set the Analog Four to receive MIDI clock and MIDI start/stop messages on the USB port. The sequence itself was already programmed in, so all I needed to do was kick it off.

If you want to create a version of the sequence yourself, the main riff is a super simple up/down arpeggio of these notes:

picture10

You can vamp on top of that to bring in more of the sound from what S U R V I V E made. I left it as it was and simply played the filter knob a bit to bring it in. A short version of that may be found on my personal SoundCloud profile here.

There are many other components to the music, including a muted kick drum type of sound, a bass line, some additional melody and some other interesting effects, but I hope this helps get you started.

If you’re interested in the synthesizers behind the music, and a place to hear the music itself, check out this tour of S U R V I V E ’s studio.

The final thing that I needed to include here was a nod to the visual style of the opening sequence of the show.

Fonts and Title Style

If you want to create your own title card in a style similar to the show, the font ITC Benguiat is either the same one used, or a very close match. It’s readily available to anyone who wants to license it. I licensed it from Fonts.com for $35 for my own project. The version I ended up using was the regular book font, but I think the Condensed Bold is probably a closer fit.

There are tons of pages, sites, videos, etc. using the title style, but be careful about what you do here, as you don’t want to infringe on the show’s trademarks or other IP. When in doubt, consult your lawyer. I did.

picture11

That’s using just the outline and glow text effects. You can do even better in Adobe Photoshop, especially if you add in some lighting effects, adjust the character spacing and height, and use large descending capital letters, like I did at the first event. But I was able to quickly do the above mockup in PowerPoint using the ITC Benguiat font.

If you don’t want to license a font and then work with the red glow in Adobe Photoshop, you can also create simple versions of the title card at http://makeitstranger.com/

None of that is required for the wall itself, but can help tie things together if you are presenting several related and themed demos like I did. Consider it a bit of polish.

With that, we have the visuals and sound all wrapped up. You could use the wall as-is at this point, simply giving it text to display. That’s not quite enough for what I wanted to show, though. Next up, we need to give the bot a little intelligence, and save on some typing.

Resources

Questions or comments? Have your own version of the wall, or used the technology described here to help rid the universe of evil? Post below and follow me on twitter @pete_brown

Most of all, thanks for reading!

The “Internet of Stranger Things” Wall, Part 1 – Introduction and Remote Wiring

Overview

I am a child of the 80s. Raiders of the Lost Ark was the first movie I saw by myself in the movie theater. The original Star Wars trilogy was an obsession. And WarGames, more than anything else, is what inspired me to become a programmer.

But it was movies like The Goonies that I would watch over and over again because they spoke to me in a language that reflected what it was like to be a kid at that time. They took kids on a grand adventure, while still allowing them to be kids in a way that so few movies can pull off.

So, of course, when a friend pointed out the Netflix series Stranger Things, I dove right in, sitting down at my PC and binge-watching every episode over a weekend. It had a treatment of 80s childhood that was recognizable, without being a painful cliché. It referenced movies like The Goonies, ET, and The X-Files in a really fun way.

If you haven’t yet watched the series, go ahead and watch it now. This blog post will still be here when you finish up. 🙂

One of the most iconic scenes in the series is when Winona Ryder, herself a star of some of my favorite 80s and 90s movies, uses an alphabet wall made of Christmas lights to communicate with her son Will, who is stuck in the Upside Down.

While not physically there, Will could still hear her. So, she would ask him a question and he would respond by lighting up the individual Christmas light associated with each letter on the wall. In the show, the alphabet wall takes up one whole wall in her living room.

I won’t go into more detail than that because I don’t want to spoil the show for those who have not yet seen it or for those who didn’t take my advice to stop and watch it now.

Here’s my smaller (approximately 4’ x 4’) version of the alphabet wall as used during my keynote at the TechBash 2016 conference in Pennsylvania:

image1

“Will? Will? Are you there?”

At the events where I used it, I put on a wig that sort of resembled Winona’s frazzled hair in the series (but also made me look like part of a Cure cover band), and had my version of the theme/opening music playing on an Elektron Analog Four synthesizer/sequencer in the background. I then triggered the wall with a question and let it spell out the answer with the Christmas lights on the board.

Here’s a block diagram of the demo structure. You can see it involves a few different pieces, all of which are things I enjoy playing with.

image2

In this three-part series, I’ll describe how I built the wall, what products I used, how I built the app, how I built and communicated with the bot framework-based service, and how I made the music. In the end, you should have enough information to be able to create your own version of the wall. You’ll learn about:

  • Windows Remote Wiring
  • LED Sink ICs
  • Constructing the Wall
  • Wiring the LED Christmas lights
  • Adding UWP voice recognition
  • Setting up a natural language model in LUIS
  • Building a Bot Framework-based bot
  • Music and MIDI
  • And more

There will be plenty of code and both maker and developer-focused technical details along the way.

This first post will cover:

  • Creating the UWP app
  • Windows Remote Wiring
  • Using the MBI5026 LED sink driver

If you’re unfamiliar with the show or the wall, and want to see a quick online-only version of a Stranger Things alphabet wall you can see one at http://StrangerThingsGIFGenerator.com. Example:

new-gif

The remainder of the series will be posted this week. Once they are up, you’ll be able to find the other posts here:

  • Part 1 – Introduction and Remote Wiring (this post)
  • Part 2 – Constructing the wall and adding music
  • Part 3 – Adding voice recognition and intelligence

Creating the basic UWP app

This app is something I used for demonstrating at a couple conferences. As such, it has an event-optimized UI — meaning big text that will show up well even on low contrast projectors. Additionally, it means I need a button to test the board (“Send Alphabet”), test MIDI (“Toggle MIDI”), echo back in case the network is down, and also submit some canned questions in case the network or bot service can’t be reached. When you do live demos, it’s always good to have backups and alternate paths so that a single point of failure doesn’t kill the entire demo. From experience, I can tell you that networks at venues, even speaker and keynote networks, are the single most common killer of cool demos.

This is the UI I put together.

image3

The microphone button starts voice recognition. In case of microphone failure (backups!), I can simply type in the text box — the message icon to the right submits the message. In the echo case, the app simply lights up the text on the wall, bypassing the online portion of the demo. In the case of the “Ask a question” field, it sends the message to a Bot Framework bot to be processed.
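
Voice recognition itself is covered in detail in part 3 of this series, but as a rough idea of what the microphone button kicks off, here’s a minimal dictation sketch using the standard Windows.Media.SpeechRecognition API. Treat it as a simplified stand-in, not necessarily the exact code from this app:

using Windows.Media.SpeechRecognition;

private async Task<string> RecognizeSpeechAsync()
{
    using (var recognizer = new SpeechRecognizer())
    {
        // Compile the default (dictation) constraints before recognizing
        await recognizer.CompileConstraintsAsync();

        // Listen for a single utterance and return the recognized text
        SpeechRecognitionResult result = await recognizer.RecognizeAsync();
        return result.Text;
    }
}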

Despite the technologies I’m using, everything here starts with the standard C#/XAML UWP Blank App template in Visual Studio. I don’t need to use any specific IoT or bot-centric templates for the Windows 10 app.

I am on the latest public SDK version at the time of this post. This is important to note, because the NuGet MIDI library only supports that version (or higher) of the Windows 10 Anniversary Update SDK. (If you need to use an earlier version like 10586, you can compile the library from source.)

I use the Segoe MDL2 Assets font for the icons on the screen. That font is the current Windows standard iconography font. There are a few ways to do this in XAML. In this case, I just set the font and pasted in the correct Unicode value for the icon (you can use Character Map or another app if you wish). One very helpful resource that I use when working with this font is the ModernIcons.io Segoe MDL2 Assets – Cheatsheet site. It gives you the Unicode values in a markup-ready format, making it super easy to use in your XAML or HTML app.

image4

There’s also a free app which you may prefer over the site.
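
If you’d rather create an icon from code-behind instead of markup, it’s just a couple of lines. A small sketch (the glyph shown here is the microphone icon per the cheatsheet; double-check the value for the icon you actually want):

using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

var micIcon = new TextBlock
{
    FontFamily = new FontFamily("Segoe MDL2 Assets"),
    Text = "\uE720"   // microphone glyph; verify against the cheatsheet
};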

The rest of the UI is standard C# and XAML stuff (I’m not doing anything fancy). In fact, when it comes to program structure, you’ll find this demo wanting. Why? When I share this source code, I want you to focus on what’s required to use any of these technologies rather than taking a cognitive hit trying to grok whatever design pattern I used to structure the app. Unless I’m specifically trying to demonstrate a design pattern, I find over-engineered demo apps cumbersome to wade through when looking for a chunk of code to solve a specific problem.

Windows Remote Wiring Basics

When I built this, I wanted to use it as a way to demonstrate how to use Windows Remote Wiring (also called Windows Remote Arduino). Windows Remote Wiring makes it possible to use the IO on an Arduino from a Windows Store app. It does this by connecting to the Arduino through a USB or Bluetooth serial connection, and then using the Firmata protocol (which is itself built on MIDI) to transfer the pin values and other commands back and forth.

Typically used with a PC or phone, you can even use this approach with a Windows 10 IoT Core device and an Arduino. That’s a quick way to add additional IO or other capabilities to an IoT project.

For a primer on Remote Wiring, check the link above, or take a look at this video to learn a bit more about why we decided to make this possible:

Remoting in this way has slower IO than doing the work directly on the Arduino, but for an example like this, it’s just fine. If you were going to do something production-ready using this approach, I’d recommend bringing the calls up to a higher level and remoting commands (like “Show A”) to the Arduino, instead of remoting the pin values and states.
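
As a sketch of what that higher-level protocol could look like (hypothetical, not part of this project): the app sends a short text command over the serial port, and a sketch running on the Arduino parses it and does the shift-register work locally. The PC side, using the standard Windows.Devices.SerialCommunication API, might look like this:

using Windows.Devices.SerialCommunication;
using Windows.Storage.Streams;

// Hypothetical higher-level protocol: send "SHOW A\n" and let the
// Arduino sketch do the bit-banging locally at full speed.
private async Task SendShowCommandAsync(SerialDevice arduino, char letter)
{
    var writer = new DataWriter(arduino.OutputStream);
    writer.WriteString($"SHOW {letter}\n");
    await writer.StoreAsync();
    writer.DetachStream();
}

The Arduino would then own the timing-sensitive bit-banging, which it can do far faster than round-tripping each pin state over USB.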

The reason the PC is involved at all is because we need the higher-level capabilities offered by a Windows 10 PC to communicate with the bot, do voice recognition, etc. You could also do these on a higher level IoT Core device like the Intel Joule.

Remote wiring is an excellent way to prototype a solution from the comfort of your PC. It’s also very useful when you’re trying to decide what capabilities you’ll ultimately need in the final target IoT board. The API is very similar to the Windows.Devices.Gpio APIs, so moving to Windows 10 IoT Core when moving to production is not very difficult at all.
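
To illustrate just how similar: here’s roughly what the same digital write looks like in both worlds (the on-device version is a sketch using the Windows.Devices.Gpio types; the pin number is arbitrary):

using Windows.Devices.Gpio;

// Remote wiring:   _arduino.pinMode(5, PinMode.OUTPUT);
//                  _arduino.digitalWrite(5, PinState.HIGH);

// On-device GPIO on Windows 10 IoT Core:
GpioController controller = GpioController.GetDefault();
GpioPin pin = controller.OpenPin(5);
pin.SetDriveMode(GpioPinDriveMode.Output);
pin.Write(GpioPinValue.High);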

For my project, I used a very long USB cable. I didn’t want to mess around with Bluetooth at a live event.

To initialize the Arduino connection in this project, I used this code in my C# standard Windows 10 UWP app:


RemoteDevice _arduino;
UsbSerial _serial;

private const string _vid = "VID_2341";
private const string _pid = "PID_0043";


private void InitializeWiring()
{
    _serial = new UsbSerial(_vid, _pid);
    _arduino = new RemoteDevice(_serial);

    _serial.ConnectionEstablished += OnSerialConnectionEstablished;

    _serial.begin(57600, SerialConfig.SERIAL_8N1);
}

I got the VID and PID from looking in the Device Manager properties for the connected Arduino. Super simple, right? I found everything I needed in our tutorial files and documentation.

The final step for Arduino setup is to set the pin modes. This is done in the handler for the ConnectionEstablished event.


private void OnSerialConnectionEstablished()
{

    //_arduino.pinMode(_sdiPin, PinMode.I2C);
    _arduino.pinMode(_sdiPin, PinMode.OUTPUT);
    _arduino.pinMode(_clockPin, PinMode.OUTPUT);
    _arduino.pinMode(_latchPin, PinMode.OUTPUT);
    _arduino.pinMode(_outputEnablePin, PinMode.OUTPUT);

    _arduino.digitalWrite(_outputEnablePin, PinState.HIGH); // turn off all LEDs

    ClearBoard(); // clear out the registers
}

private const UInt32 _clearValue = 0x0;        
private async void ClearBoard()
{
    // clear it out
    await SendUInt32Async(_clearValue, 0);

}

The SendUInt32Async method will be explained in a bit. For now, it’s sufficient to know that it is what lights up the LEDs. Now to work on the electronics part of the project.

Arduino connection to the LED sink ICs

There are a number of good ways to drive the LEDs using everything from specialized drivers to transistors to various types of two dimensional arrays (a 5×6 array would do it, and require 11 IO pins). I decided to make it super simple and dev board-agnostic and use the MBI5026GN LED driver chip, purchased from Evil Mad Scientist. A single MBI5026 will sink current from 16 LEDs. To do a full alphabet of 26 letters, I used two of these.

The MBI5026 is very simple to use. It’s basically a souped-up shift register with above-average constant current sinking abilities. I connected the LED cathodes (negative side) to the pins and the anode (positive side) to positive voltage. To turn on an LED, just send a high value (1) for that pin.

So for 16 pins, with pins 0 through 5, 12, and 15 turned on, we would send a set of high/low values that looks like this:

image6

The MBI5026 data sheet explains how to pulse the clock signal so it knows when to read each value. There are a couple other pins involved in the transfer, which are also documented in the data sheet.

The IC also includes a pin for shifting out bits that are overflowing from its 16 positions. In this way, you can chain as many of these together as you want. In my case, I chained together two and always passed in 32 bits of data. That’s why I used a UInt32 in the above code.

In this app, I’ll only ever turn on a single LED at a time. So every value sent over will be a single bit turned on with the other thirty-one bits turned off. (This also makes it easier to get away with not worrying about the amp draw from the LEDs.)

To make mapping letters to the 32-bit value easier, I created an array of 32-bit numbers in the app and stored them as the character table for the wall. Although I followed alphabetical order when connecting them, this table approach also supports arbitrary wiring of the LEDs, as long as the values in the array stay in alphabetical order by letter.


private UInt32[] _letterTable = new UInt32[]
{
    0x80000000, // A 10000000000000000000000000000000 binary
    0x40000000, // B 01000000000000000000000000000000
    0x20000000, // C 00100000000000000000000000000000
    0x10000000, // D ...
    0x08000000, // E
    0x04000000, // F
    0x02000000, // G
    0x01000000, // H
    0x00800000, // I
    0x00400000, // J
    0x00200000, // K
    0x00100000, // L
    0x00080000, // M
    0x00040000, // N
    0x00020000, // O
    0x00010000, // P
    0x00008000, // Q
    0x00004000, // R
    0x00002000, // S
    0x00001000, // T
    0x00000800, // U
    0x00000400, // V
    0x00000200, // W ...
    0x00000100, // X 00000000000000000000000100000000
    0x00000080, // Y 00000000000000000000000010000000
    0x00000040, // Z 00000000000000000000000001000000
};

These numbers will be sent to the LED sink ICs, LSB (Least Significant Bit) first. In the case of the letter A, that means the bit to turn on the letter A will be the very last bit sent over in the message. That bit maps to the first pin on the first IC.
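
Since I wired the letters in alphabetical order anyway, the table could equally well be computed on the fly; the array just makes arbitrary wiring easier to support later. The computed equivalent (assuming strictly alphabetical wiring) is a one-liner:

// 'A' maps to bit 31 (the MSB), 'B' to bit 30, and so on down to 'Z' at bit 6
UInt32 bitmap = 0x80000000u >> (ch - 'A');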

LEDs require resistors to limit current and keep from burning out. There are a number of scientifically valid approaches to testing the LED lights and figuring out which resistor size to use. I didn’t use any of them, and instead opted to burn out LEDs until I found a reasonable value. 🙂

In reality, with the low voltage we’re using, you can get close using any online resistor value calculator and the default values. We’re not trying to maximize output here and the values would normally be different from color to color (especially blue and white vs. orange and red), in any case. A few hundred ohms works well enough.
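
To make “close enough” concrete, here’s the standard series-resistor arithmetic with assumed values: a red LED with a forward voltage around 2.0v, targeting 10mA from the 3.3v supply:

R = (Vsupply - Vforward) / Iforward = (3.3v - 2.0v) / 0.010A = 130Ω

Anything from there up to a few hundred ohms just trades a little brightness for safety margin.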

Do note that the way the MBI5026 handles the resistor and sets the constant current is slightly different from what you might normally use. One resistor is shared by all 16 LEDs, and the driver is a constant-current driver. The formula is given on page 9 of the datasheet.

IOUT = (VR-EXT / Rext) × 15

But again, we’re only lighting one LED at a time, and we’re not looking to maximize performance or brightness here. Additionally, we’re not using 16 LEDs at once. And, as said above, we also don’t know the actual forward current or forward voltage of the LEDs we’re using. If you want to be completely correct, you could use a different sink driver for each unique LED color, figure out the forward voltage and the correct resistor value, and then plug that into the appropriate driver.

With that information at hand, it’s time to wire up the breadboard. Assuming I didn’t forget any, here’s the list of all the connections.

image7

Or if you prefer something more visual:

image8

I handled the wiring in two stages. In stage one, I wired the MBI5026 breadboard to the individual posts for each letter. This let me do all that fiddly work at my desk instead of directly on the wall. I used simple construction screws (which I had tested for conductivity) as posts to wire to.

You can see the result here, mounted on the back of the wall.

image9

You can see the individual brown wires going from each of the output pins on the pair of MBI5026 ICs directly to the letter posts. I simply wrapped the wire around the post; there is no solder or hot glue involved there. If you decide to solder the wires, use caution: the screws will sink a lot of the heat, and you’ll likely end up scorching the paper label and burning down all your hard work. The wire-wrapped approach is easier and also easily repaired. It also avoids fire. Fire = bad.

The board I put everything on ended up being a bit large to fit between the rows on the back of the wall, so I took the whole thing over to the table saw. I’m the first person I know to take an Arduino, breadboard and wired circuit, and run it across a saw. It survived. 🙂

image10

In the Windows app, I wanted to make sure the code would allow taking an arbitrary string as input and would light up the LEDs in the right order. First, the code that processes the string:


public async Task RenderTextAsync(string message, 
             int onDurationMs = 500, int delayMs = 0, 
             int whitespacePauseMs = 500)
{
    message = message.ToUpper().Trim();

    byte[] asciiValues = Encoding.ASCII.GetBytes(message);

    int asciiA = Encoding.ASCII.GetBytes("A")[0];

    for (int i = 0; i < message.Length; i++)
    {
        char ch = message[i];

        if (char.IsWhiteSpace(ch))
        {
            // pause
            if (whitespacePauseMs > 0)
                await Task.Delay(whitespacePauseMs);
        }
        else if (char.IsLetter(ch))
        {
            byte val = asciiValues[i];
            int ledIndex = val - asciiA;

            UInt32 bitmap = _letterTable[ledIndex];

            // send the letter
            await SendUInt32Async(bitmap, onDurationMs);

            // clear it out
            await SendUInt32Async(_clearValue, 0);

            if (delayMs > 0)
                await Task.Delay(delayMs);

        }
        else
        {
            // unsupported character. Ignore
        }
    }
}

The code first gets the ASCII value for each character in the string. Then, for each character in the string, it checks to see if it’s whitespace or a letter. If neither, it is ignored. If whitespace, we delay for a specified period of time. If a letter, we look up the appropriate letter 32-bit value (a bitmap with a single bit turned on), and then send that bitmap to the LEDs, LSB first.

The code to send the 32-bit map is shown here:


private const int _latchPin = 7;            // LE
private const int _outputEnablePin = 8;     // OE
private const int _sdiPin = 3;              // SDI
private const int _clockPin = 4;            // CLK

// send 32 bits out by bit-banging them with a software clock
private async Task SendUInt32Async(UInt32 bitmap, int outputDurationMs)
{
    for (int i = 0; i < 32; i++)
    {
        // clock low
        _arduino.digitalWrite(_clockPin, PinState.LOW);

        // get the next bit to send
        var b = bitmap & 0x01;

        if (b > 0)
        {
            // send 1 value

            _arduino.digitalWrite(_sdiPin, PinState.HIGH);
        }
        else
        {
            // send 0 value
            _arduino.digitalWrite(_sdiPin, PinState.LOW);
        }

        // clock high
        _arduino.digitalWrite(_clockPin, PinState.HIGH);

        await Task.Delay(1);    // this is an enormous amount of time, 
                                // of course. There are faster timers/delays 
                                // you can use.

        // shift the bitmap to prep for getting the next bit
        bitmap >>= 1;
    }

    // latch
    _arduino.digitalWrite(_latchPin, PinState.HIGH);
    await Task.Delay(1);
    _arduino.digitalWrite(_latchPin, PinState.LOW);
            
    // turn on LEDs
    _arduino.digitalWrite(_outputEnablePin, PinState.LOW);

    // keep the LEDs on for the specified duration
    if (outputDurationMs > 0)
        await Task.Delay(outputDurationMs);

    // turn the LEDs off
    _arduino.digitalWrite(_outputEnablePin, PinState.HIGH);
}

This is bit-banging a shift register over USB, to an Arduino. No, it’s not fast, but it doesn’t matter at all for our use here.

The MBI5026 data sheet includes the timing diagram I used when figuring out how to send the clock signals and data. Note that the actual period of these clock pulses isn’t important; it’s the relative timing/order of the signals that counts. The MBI5026 can be clocked at up to 25MHz.

image11

Using that information, I was able to prototype using regular old LEDs on a breadboard. I didn’t do all 26, but I did a couple at the beginning and a couple at the end to ensure I didn’t have any off-by-one errors or similar.

Next, I needed to scale it up to a real wall. We’ll cover that in the next post, before we finish with some speech recognition and natural language processing.

Resources

Questions or comments? Have your own version of the wall, or used the technology described here to help rid the universe of evil? Post below and follow me on twitter @pete_brown

Most of all, thanks for reading!

In Case You Missed It – This Week in Windows Developer

You’ve got the power this week with Windows Developer. From flight control and IoT kit updates to the magic of camera APIs, read on to learn about new capabilities that deliver more control over your apps.

Package rollout power

You might have streamlined your app management with the rollout of the Windows Submission API earlier this year. Now, we’ve released two new features for the API that give you more power over package rollouts. Click below to learn more.

IoT for you and me

Adafruit and Seeed are our latest partners bringing IoT to all developers with easy-to-use kits. Adafruit’s Windows 10 IoT Core Starter Kit not only gets you started quickly, but the new version also includes an upgrade to the new Raspberry Pi 3. Seeed Studio’s Grove Starter Kit for IoT is also based on Raspberry Pi, and it builds on the great design work that Seeed and their partner Dexter Industries have done around the Grove connector. Click through to get the latest on the new kits.

Magic with camera APIs

Almost any device you pick up these days has a camera in it, unlocking real-time opportunities like never before. So take advantage! Open up a world of opportunities for your app and end users with the magic of camera API development skills, outlined in our latest App Dev on Xbox blog post. Click through for the walk-through.

Download Visual Studio to get started!

The Windows team would love to hear your feedback. Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

In Case You Missed It – This Week in Windows Developer

Before we get into all of last week’s updates, check out this highlight. (It’s a big deal):

Cross device experiences and Project Rome

We read the morning news on our tablets, check email during the morning commute on our phones and use our desktop PCs when at work. At night, we watch movies on our home media consoles. The Windows platform targets devices ranging from desktop PCs, laptops and smartphones, to large-screen hubs, HoloLens, wearable devices, IoT and Xbox.

With all of these devices playing important roles in the daily lives of users, it’s easy to think about apps in a device-centric bubble. This blog explains how to make your apps human-centric instead of device-centric, to create the best possible UX.

Building Augmented Reality Apps in Five Steps

Augmented reality is really cool (and surprisingly easy). We outline how to create an augmented reality app in five steps. Take a look at the boat!

Build 14946 for PC and Mobile

In our latest build, we’ve got customized precision touchpad experiences, separated screen time-out settings, updated Wi-Fi settings for PC and Mobile, and an important note about a change to automatic updates.

Announcing Windows 10 Insider Preview Build 14946 for PC and Mobile

IoT on Xbox One: Best for You Sample Fitness App

Best For You is a sample fitness UWP app focused on collecting data from fictional IoT-enabled yoga wear and presenting it to the user in a meaningful and helpful way on all of their devices, to track health and exercise progress. In this post, we will be focusing on the IoT side of the Universal Windows Platform, as well as Azure IoT Hub, and how they work together to create an end-to-end IoT solution.

Also, the sample code is on GitHub. Click below to get started!

Narwhals (and AppURI Handlers)

‘Nuf said.

Download Visual Studio to get started!

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

Introducing New Remote Sensing Features

In a previous blog, we discussed the new contextual sensing features and innovations we added to the sensor platform. In today’s blog, we will take a moment to understand remote sensing, and then take a peek at the new and upcoming projects related to it: namely, OpenT2T and the Windows IoT Remote Client.

What is a remote sensor?

A remote sensor is a sensor that’s usually not on the device running the application. This device may or may not be running Windows and may not even have a display or a user interface.

There are primarily two flavors when it comes to remote sensors:

  1. Remote sensors that are on Things with their apps on Windows devices. For example, we have a multitude of Things in smart homes and fitness gadgets all around us, and these devices have sensors for things like temperature, humidity, air quality, heart rate and blood pressure, just to name a few. In this case, the sensors are on these Things, but the app itself may be on your phone or desktop device.
  2. Remote sensors that are on Windows devices with their apps on Things. If you have a Thing that does not run an operating system or does not have sensors, then in this case, you are remotely using the sensors that are on your phone or your tablet to provide inputs/data to your Things.

OpenT2T for remote sensing

One of the problems we have been attempting to solve over the last several months is how to connect to the plethora of Things appearing everywhere. Sensors are part of many of these Things, especially in the health and fitness and smart home areas. Because these Things use multiple communication protocols—like Bluetooth, Zigbee, Z-Wave, et cetera—it has been challenging for both sensor manufacturers and app developers to build a solution for interaction.

With that in mind, I’m very excited about the OpenT2T initiative, an open source project announced very recently at Build. It allows application developers to build apps that easily interact with the Things around them, like heart rate monitors or light bulbs, without worrying about communication protocols or interoperability issues.

We will take a moment to look specifically at the translator for a heart rate monitor, developed using OpenT2T, as well as the experience of working on it.

Note: You can check out other translators in the GitHub repository.

Below, you will find some of the key learnings we encountered while building the heart rate monitor translator using OpenT2T. All software components of OpenT2T are written in JavaScript using the Node.js framework.

Schema

A schema is used to define the basic methods of your Thing. How you define your own schema is entirely up to you. In the case of this project, we defined the schema to be for a heart rate monitor.

AllJoyn exposes a sensor object on the bus with the same schema as the translator itself. AllJoyn learns the translator’s schema from the schema.xml file, which mirrors the methods that ThingsTranslator.js implements in JavaScript. Check out the sample code below; you’ll see we defined the method to get the current beats per minute. As you can see, this is standard introspection XML and very easy to define:

<node>
   <interface name="com.CrowdSourced.SuperPopular.HeartRateSensor">    
   <!-- Get the current beats per minute -->
      <method name="getBeatsPerMinute">    
         <arg name="beatsPerMinute" type="d" direction="out" />  
      </method>  
   </interface>
</node>

ThingsTranslator.js

Translators connect a schema to a Thing, such as (in this case) a heart rate monitor. The schema is exposed to developers, who then no longer need to worry about how to communicate with the specific Thing for which the schema is defined. A translator is a cross-platform Node module that implements the schema. Sample code showing the two methods defined for the heart rate monitor is below:

module.exports = {
   initDevice: function(device) {
      console.log('Heart Rate Translator initialized.');
      // additional work
   },
   getBeatsPerMinute: function() {
      console.log('getBeatsPerMinute called.');
      // additional work
   }
};

Manifest.xml

The manifest.xml file defines the key features of a translator, such as what schema it implements and what onboarding mechanism should be used to set up the sensor. This is essential information that the OpenT2T framework needs to know in order to operate the translator. Sample code for a heart rate sensor of the brand Polar H7 is shown below:


<?xml version="1.0" encoding="utf-8"?>
<manifest>
   <schema id="org.OpenT2T.Sample.SuperPopular.HeartRate" /> 
   <onboarding id="org.OpenT2T.Onboarding.BluetoothLE">
      <arg name="name" value="Polar H7" />  
      <arg name="advertisementLocalNameFilter" value="^Polar H7*" />
   </onboarding>
</manifest>

Package.json

When a particular translator is loaded to communicate with a sensor, it may require other software libraries to which it can delegate some work. For example, a Bluetooth heart rate sensor translator may use a Bluetooth library to handle all Bluetooth communication. Package.json is used to define the other software libraries that may be needed.
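
For illustration only, a minimal package.json for a Bluetooth LE translator might look something like the below. The package name and dependency here are placeholders (noble is one commonly used Node.js Bluetooth LE library), not the actual dependencies of the Polar H7 translator:

{
  "name": "opent2t-translator-com-sample-heartrate",
  "version": "1.0.0",
  "description": "Sample OpenT2T translator for a Bluetooth LE heart rate sensor",
  "main": "thingsTranslator.js",
  "dependencies": {
    "noble": "^1.6.0"
  }
}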

Overall, it was very easy to get started and develop the heart rate translator using OpenT2T, so we recommend taking a look and contributing to this new open source project if you want to create apps that interact with Things.

Note: We would also like to thank Victor Vuong, our intern from the University of Waterloo, for his contributions to the remote sensing investigations and the heart rate translator.

Windows IoT Remote Client

The Windows IoT Remote Client was recently introduced to remotely control Universal Windows Platform (UWP) apps running on Windows 10 IoT Core devices. Remote control can be established from any Windows 10 desktop PC, tablet, or phone, putting a display on a device without one.

Check out this video demonstrating the technology. As you can see, an accelerometer on the tablet is used to remotely control a Thing, in this case a Windows IoT Core device. The code you have to write for accessing the sensors remotely is exactly the same code used to implement an on-device local sensor. Code samples can be found here.
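
As a reference point, here’s a minimal sketch of that sensor code using the standard Windows.Devices.Sensors API. The same code, running in a UWP app on a Windows 10 IoT Core device under the remote client, receives its readings from the companion device’s accelerometer:

using Windows.Devices.Sensors;

Accelerometer accelerometer = Accelerometer.GetDefault();

if (accelerometer != null)
{
    // Ask for readings no faster than the sensor supports
    accelerometer.ReportInterval = Math.Max(accelerometer.MinimumReportInterval, 100u);

    accelerometer.ReadingChanged += (sender, args) =>
    {
        AccelerometerReading reading = args.Reading;
        System.Diagnostics.Debug.WriteLine(
            $"X={reading.AccelerationX:F2} Y={reading.AccelerationY:F2} Z={reading.AccelerationZ:F2}");
    };
}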

Users connect to their Windows 10 IoT Core devices through a Microsoft Store application installed on their Windows 10 companion device of choice. The UI generated by the UWP application running on the Windows 10 IoT Core device is remoted to the display of the companion device, while input and sensor data are sent in the opposite direction.

Image1

The functionality is easy to use and included out of the box in the latest Insider build of Windows 10 IoT Core. For more information, please see here. You can also get the Windows IoT Remote Client app here.

In addition to these different ways of sensing remotely, check out the Build 2016 demo on “How to Train your Robot,” which uses activity sensors on the phone to move a Lego robot directly using Bluetooth LE.

If you have any questions, bugs, or issues, please let me know, or use the Windows Feedback tool or the MSDN forums. If you would like us to add any new features, please submit them at UserVoice or leave them as comments below.

Download Visual Studio to get started.

Written by Rinku Sreedhar, Program Manager for Windows Contextual Sensing.

Building an IoT Magic Mirror with Hosted Web Apps and Windows 10

At Build 2016, we demoed a Magic Mirror project powered by a Hosted Web App on Windows 10 IoT Core. This project enhances the basic concept of a “smart” magic mirror by personalizing the experience with relevant information and facial recognition powered by Microsoft’s Cognitive Services APIs.

This demo illustrates how Hosted Web Apps in Windows 10 can leverage familiar web technologies to deliver powerful app experiences to all devices, including the Internet of Things. In this post, we’ll walk you through how we went about developing the mirror and how to build one for yourself!

What’s a Magic Mirror?

Our magic mirror is basically a one-way mirror (like you might have seen in Hollywood depictions of interrogation rooms), made “smart” by a simple LCD display which sits behind the mirror and displays white UI elements with a black background. When the display is on, you can see both your reflection and the white elements, allowing software to present relevant information while you get ready for the day.

Mockup showing the Magic Mirror installed in a bathroom.

We designed the mirror to adapt to each person and to work without getting in the way of their daily routine.

We designed the Magic Mirror to be low-cost and simple, so anyone could build it in a couple of hours. We’ve also open-sourced the web app and shared our bill of materials and assembly instructions on our GitHub repository.

To power the mirror, we chose a Raspberry Pi because of its popularity, price point, support, and hardware specs. Our web app, which provides the interface and basic functionality, is a simple Hosted Web App that runs on Windows 10 IoT Core.

Building the interface

We designed the mirror user interface (UI) to be as functional as possible, as both a mirror and an info hub. There are some practical implications to this.  The UI should be simple and easy to visually digest, so we kept adornment light and typography clear. The screen needs to be readable through the mirrored surface, so we used a high contrast ratio of pure white on pure black. Lastly and most importantly, the user needs to see their reflection, so we kept the central area of the mirror clear when the user is logged in.

Annotated screen capture showing the Magic Mirror interface, which is a simple white-on-black screen with relevant information like the current time, weather, and upcoming appointments around the periphery.

The mirror is built to be useful to a person getting ready in the morning. This person is likely on a time crunch, wants to be well-prepared for the day, and is interested in updates, but possibly doesn’t want to be barraged with info before they’re fully awake. With that in mind, we placed more-pressing information (weather, time, and a space reserved for alerts) at the top of the mirror near eye-level, and pushed less-urgent information down at the bottom, where it can be ignored or consciously consumed. Every user will have a slightly different idea about what’s most important, so this is a great project for exploring personalization through tech.

Building the web app

There are multiple pieces at play here. First is what you see displayed behind the mirror: a web app created in HTML, CSS, and JavaScript and served from a Node instance hosted on Azure.
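
The server itself can be very simple; as a minimal sketch, assuming Express is used to serve the static assets (Express and the folder name are assumptions here, not the app’s actual server code), it might look like this:

// Minimal sketch: serve the mirror’s HTML/CSS/JS from a Node instance.
var express = require('express');
var app = express();

app.use(express.static('public'));  // index.html, stylesheets, scripts

// Azure App Service supplies the port through the PORT environment variable.
app.listen(process.env.PORT || 3000, function () {
    console.log('Magic Mirror web app listening.');
});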

Architecture diagram for the Magic Mirror web app, which is served from Azure and powered by the Microsoft Cognitive Services APIs. The app runs on a Raspberry Pi 3 on Windows 10 IoT Core.

Using the Hosted Web Apps bridge, we turned our web app into a Universal Windows App, which not only gives us access to native Windows APIs but also runs across Windows devices, such as the Raspberry Pi 3 in our case. All the HTML, CSS, and JavaScript comes directly from the server, hence the term hosted.

Making it smart

The most important part of the app and the most delightful experience for the user is the facial recognition capability, which personalizes the mirror’s display based on the individual in front of it. In the past, this was complex technology out of the reach of most web apps, but, with APIs provided by Microsoft’s Cognitive Services, we’re able to build it into our mirror with minimal effort.

Magic Mirror leverages Microsoft’s Cognitive Services Face API to match the user’s face to their profile. The user creates a profile by adding some personal info and taking a selfie, which is sent to Cognitive Services to obtain a unique identifier (a face_id) that is then stored in the Magic Mirror’s database.

Once they’ve created a profile, the user can stand in front of the magic mirror, which takes a picture and asks Cognitive Services for the user’s face_id. This id is then used to look up the user’s profile so the mirror can present them with relevant info.

Below you can see how our Node instance sends an image as an octet-stream to Microsoft Cognitive Services through their REST API. The Cognitive Services cloud then sends back a face_id, which we bind to our user object.

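A rough sketch of that request, using Node’s built-in https module (the region, key source, and helper names here are placeholders rather than the app’s actual code), looks like this:

// Sketch: POST a captured frame to the Face API detect endpoint as an
// octet-stream. The response contains a faceId for each detected face.
var https = require('https');

function detectFace(imageBuffer, callback) {
    var req = https.request({
        host: 'westus.api.cognitive.microsoft.com',  // placeholder region
        path: '/face/v1.0/detect',
        method: 'POST',
        headers: {
            'Content-Type': 'application/octet-stream',
            'Ocp-Apim-Subscription-Key': process.env.FACE_API_KEY  // placeholder key source
        }
    }, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            var faces = JSON.parse(body);  // e.g. [{ "faceId": "..." }]
            callback(faces.length ? faces[0].faceId : null);
        });
    });
    req.write(imageBuffer);
    req.end();
}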

We were very conscious about not wasting resources (CPU cycles, bandwidth, etc.). For example, we didn’t want to send every frame to the Cognitive Services API, since most frames don’t have a person in them. To solve this, we used the facedetected event to send images to the Cognitive Services servers only when a face was detected. This event is available to the app because Hosted Web Apps can access WinRT APIs through JavaScript.

In the code below, you can see how we add the listener for the facedetected event once the stream setup is complete.

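A rough sketch of that wiring, assuming an already-initialized Windows.Media.Capture.MediaCapture object named mediaCapture and a hypothetical sendFrameToFaceApi helper, might look like this:

// Sketch: attach a face detection effect to the preview stream, then call
// Cognitive Services only when the facedetected event reports at least one face.
var definition = new Windows.Media.Core.FaceDetectionEffectDefinition();
definition.detectionMode = Windows.Media.Core.FaceDetectionMode.highPerformance;

mediaCapture.addVideoEffectAsync(definition, Windows.Media.Capture.MediaStreamType.videoPreview)
    .then(function (faceDetectionEffect) {
        faceDetectionEffect.addEventListener("facedetected", function (args) {
            if (args.resultFrame.detectedFaces.length > 0) {
                sendFrameToFaceApi();  // hypothetical helper: captures a photo and posts it
            }
        });
    });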

Try it out

This is just a small sample of what’s possible with the Hosted Web Apps platform and Cognitive Services APIs, but it’s a great introduction to how Hosted Web Apps on Windows 10 allow you to target the full range of Windows 10 devices, including the Internet of Things, to create compelling experiences with familiar web technologies. We’ve open sourced the application source code and bill of materials on GitHub – try it out for yourself and let us know what you think!

– Andy Pavia, Program Manager, Microsoft Edge

Network 3D Printing with Windows 10 IoT Core!


Since Windows 8.1, Microsoft has been providing native support for 3D printers with most popular printers already enabled via USB plug and play (see this full list of supported printers).

Today, we have added a new Windows 10 IoT Core sample app, “Network 3D Printer,” which adds support for an even wider range of 3D printers and allows you to access them over your network. Multiple Windows computers on your network can even share the same 3D printer.

Raspberry Pi enthusiasts can use this solution starting today to network-enable their 3D printers. We also invite device manufacturers to evaluate the experience this enables, along with the benefits of being able to easily Wi-Fi-enable their devices and connect them to Windows.

In this initial release, we’ve added network (both Wi-Fi and wired) and Windows 3D print platform support for more than a dozen well-known 3D printers and brand-new evolutions of them:

  • Lulzbot Taz 6
  • Makergear M2
  • Printrbot Play, Plus and Simple
  • Prusa i3 and i3 Mk2
  • Ultimaker Original and Original+
  • Ultimaker 2 and 2+
  • Ultimaker 2 Extended and Extended+

And we’ve made it as easy as possible to set up your Windows 10 IoT Core powered device for network 3D printing.


Once the Network 3D Printer UWP app is running on your Windows 10 IoT Core device, it will broadcast its presence on the network, and anyone connected can easily add it through Windows 10 Settings, just as with any other network device.


Once the 3D printer has been added to Windows, you can print 3D objects using any 3D printing app, such as Microsoft 3D Builder.

Microsoft Raspberry Pi case, available in the 3D Builder catalog of models.

This solution also creates a pathway for 3D printer manufacturers to utilize Windows 10 IoT Core and the Network 3D Printer app directly within their devices to add Windows-compatible network and other future features.

Adding Network 3D Printer support for additional printer models is as simple as creating a profile for the device.

We are excited to work with our ecosystem partners and users to grow the list of supported printers! If you would like a specific 3D printer to be part of the Windows ecosystem, please drop us a line at ask3dprint@microsoft.com.

Whether you are printing your latest creation or building your own 3D Printer, Windows 10 provides the best 3D printing experience. Grab a Windows 10 IoT Core powered device, like a Raspberry Pi, and a 3D printer and give it a try! We can’t wait to hear what you build with it!