Tag Archives: Unity

Silver Peak SD-WAN Unity EdgeConnect growing up

Silver Peak has introduced features to its SD-WAN product, Unity EdgeConnect, that address some of the needs of enterprises waiting for more mature software for distributing network traffic across a WAN.

Unity EdgeConnect enhancements unveiled this week included five 9s reliability and a cloud-based version of Silver Peak’s traffic orchestration application. Also, the vendor dropped its extra charge for optimizing traffic flows to online business software connected to the Silver Peak SD-WAN.

The SD-WAN market is small but on the rise. Revenue from the technology reached $78 million in the second quarter, an increase of 33% year over year, according to the latest numbers from IHS Markit, based in London.

To take the market beyond early adopters, SD-WAN vendors are improving reliability and simplifying functionality, said Brad Casemore, an analyst at IDC. “This is an evolutionary process in the SD-WAN market, and as vendors engage with a broader set of customers, they’ll all have to work to adapt their offerings accordingly.”

IDC’s SD-WAN forecast — which includes both infrastructure and managed services revenues and not just sales from the technology itself — projects the segment will grow at a compound annual growth rate of 69% through 2021.

Silver Peak SD-WAN backup

In adapting its product, Silver Peak introduced a high-availability option that amounts to deploying two EdgeConnect appliances and connecting them so they operate as a single SD-WAN. The expense of adding an SD-WAN appliance is offset by more efficient traffic routing and protection against hardware, software or transport failures, according to the company.

Silver Peak uses the virtual router redundancy protocol to move local area network (LAN) traffic to a designated master SD-WAN while the second serves as a backup. At the same time, the appliances move traffic in tandem through a 1 or 10 GbE link that connects them.

For example, an enterprise could connect the master SD-WAN to an MPLS link that handles the most latency-sensitive traffic, such as voice or video calls. Less sensitive traffic, such as from a guest Wi-Fi, could be redirected to the second SD-WAN, which would send it over a cheaper broadband connection. The spare SD-WAN could also be tied to a 4G LTE connection as a backup when other links fail.

Cloud-based administration for Silver Peak SD-WAN

Other new features include an easier way for companies to use Silver Peak’s EdgeConnect Orchestrator. Rather than deploy the SD-WAN configuration and management software in-house, enterprises now have the option of subscribing to a cloud-based version.

Finally, Silver Peak dropped the $5,000 it charged annually to companies that used EdgeConnect to send traffic from branch offices to a software-as-a-service application, such as Box, Microsoft Office 365, Salesforce or Workday. In that scenario, EdgeConnect would calculate the closest service provider data center to ensure maximum performance.

Silver Peak has more than 500 SD-WAN customers, according to the vendor, but is not among the top nine providers listed in the latest quarterly IHS report. VeloCloud led the pack with $18 million in revenue from appliances, plus control and management software, followed by Viptela, $7.7 million; Talari, $3.8 million; Citrix and TELoIP, tied at $2.7 million; and Cisco, $2.6 million. Rounding out the list were FatPipe, CloudGenix and Riverbed. In August, Cisco acquired Viptela for $610 million.

SD-WAN provider Versa Networks, meantime, said it will integrate its Cloud IP Platform into CA Technologies’ Network Operations and Analytics network management software. The combination will let companies use a single console to monitor performance across their entire SD-WAN and network infrastructures, the companies said.

Calling all game devs: The Dream.Build.Play 2017 Challenge is Here!

Dream.Build.Play is back! The long-running indie game development contest was on hiatus for a few years, so it’s high time for it to make a resounding return.

The Dream.Build.Play 2017 Challenge is the new contest: It just launched on June 27, and it challenges you to build a game and submit it by December 31 in one of four categories. We’re not super-picky — you can use whatever technology you want, so long as the game fits into one of the categories and you publish it as a Universal Windows Platform (UWP) game. It’s up to you to build a quality game that people will line up to play.

The four categories are:

Cloud-powered game – Grand Prize: $100,000 USD

Azure Cloud Services hands you a huge amount of back-end power and flexibility, and we think it’s cool (yes, we’re biased). So, here’s your shot at trying Azure out and maybe even winning big. Build a game that uses Azure Cloud Services on the back end, like Service Fabric, Cosmos DB, containers, VMs, storage and analytics. Judges will give higher scores to games that use multiple services in creative ways — and will award bonus points for Mixer integration.

PC game – Grand Prize: $50,000 USD

Building on Windows 10, for Windows 10? This is the category for you. Create your best UWP game that lives and breathes on Windows 10 and is available to the more than 450 million users through the Windows Store. It’s simple: Create a game with whatever technology you want and publish it in the Windows Store. We’ll look favorably on games that add Windows 10 features such as Cortana or Inking because we really want to challenge you.

Mixed Reality game – Grand Prize: $50,000

Oh, so you want to enhance this world you live in with something a little…augmented? Virtual? Join us in the Mixed Reality challenge and build a volumetric experience that takes advantage of 3D content in a virtual space. You’ll need to create your game for Windows Mixed Reality, but you can use technology like Unity to get you kickstarted. Oh, and don’t forget the audio to really immerse us in your world.

Console game – Grand Prize: $25,000

Console gamers unite! Want to try your hand at building a game for Xbox? This category is your jam. Your UWP game must be built for the Xbox One console family and published through the Xbox Live Creators Program, incorporating at least Xbox Live presence. Consideration will be given to games that incorporate more Xbox Live services, such as leaderboards and statistics.

There are some important dates to be aware of:

  • June 27: Competition opens for registration
  • August 2: Team formation and game submission period opens
  • December 31: Game submission period closes
  • January 2018: Finalists announced
  • March 2018: Winners awarded

We have big things planned for you. Maybe some additional contests and challenges, maybe some extra-cool prizes for the finalists, maybe some extra-cool interviews and educational materials. Once you register, we’ll keep you updated via email, but also keep an eye on our Windows Developer social media accounts.

As I mentioned earlier, you can pretty much use whatever technology you want. Create something from the ground up in JavaScript or XAML or C++ and DirectX. Leverage one of our great middleware partners like Unity, GameMaker, Cocos2D or Monogame. Or do a bit of both – do your own thing and incorporate Mixer APIs into it, Vungle or any one (or more) of our other partners. The biggest thing we want from you is a fun game that’s so enjoyable for us to play that we forget we’re judging it!

Speaking of that, you might be wondering how we judge the games. We have four “big bucket” criteria for you to aim for:

  • Fun Factor – 40%: Bottom line – your game needs to be fun. That doesn’t mean it has to be cutesy or simple. Fun comes in many forms, but we can’t forget what we’re aiming for here – a great game. Take us for a ride!
  • Innovation – 30%: And while you’re taking us on that ride, surprise us! We’re not looking for a clone of an existing game or a tired theme that has been done a bazillion times before. Mash-up two genres. Take a theme and turn it on its head. Don’t restrict your innovation to the game, but also the technology you’re using and how you’re using it. Think outside the box when you incorporate Windows features, or how you can creatively use a service like Mixer.
  • Production Quality – 20%: Games have to be fun and we want them to be innovative, but if they don’t run, then they’re just not ready to be called a game. This scoring criterion is all about making sure your framerate is right, you have audio where you should, you’ve catered for network instability and more. Give us every opportunity to get to your game and enjoy it the way you intended.
  • Business Viability/Feasibility – 10%: And of course, what’s your plan to engage your gaming customers? Do you have a good revenue-generating plan (e.g., in-app purchases, premium charges, marketing, rollouts, etc.)? That’s stuff you might not normally think about, but we’re gonna make you. Because we care.

If you want to get started with UWP game development, you can try our Game Development Guide.

Want more? Check out the introductory .GAME episode here:

So, what are you waiting for? Get in there and register!

Windows Mixed Reality Dev Kits available for pre-order

Anyone following the excitement around virtual reality and augmented reality over the past year is aware of the anticipation surrounding Microsoft’s new Windows Mixed Reality headsets. You understand that a rapidly expanding mixed reality market is just waiting for developers like you to get involved. During Alex Kipman’s Build keynote, we announced that Windows Mixed Reality dev kits from Acer and HP are now available for pre-order through the Microsoft Store for developers in the US (Acer, HP) and Canada (Acer, HP) —please sign up here so we can notify you once dev kits are available in additional countries.

The Acer Windows Mixed Reality Headset Developer Edition is priced at $299 USD and the HP Windows Mixed Reality Headset Developer Edition is priced at $329 USD. The headsets use state-of-the-art, inside-out tracking so you don’t need to set up external cameras or IR emitters to have a truly immersive experience as you move with six degrees of freedom (6DoF) in mixed reality. You’ll be ready to code new mixed reality experiences out of the box with a headset and a Windows 10 Creators Update PC that meets our recommended hardware specifications for developers. We invite developers to join the Windows Insider program to receive the latest mixed reality experiences from Microsoft each week.

Acer and HP built new mixed reality headsets with different industrial designs to capture the spirit of more personal computing and creativity in Windows. Developers can choose the bright and lightweight headset from Acer or the modern and industrial look of the HP headset with a common set of display and audio features across both headsets:

Acer Windows Mixed Reality Headset Developer Edition (pre-order in the US and Canada):

  • Two high-resolution liquid crystal displays at 1440 x 1440
  • Front hinged display
  • 95 degree horizontal field of view
  • Display refresh rate up to 90 Hz (native)
  • Built-in audio out and microphone support through 3.5mm jack
  • Single cable with HDMI 2.0 (display) and USB 3.0 (data) for connectivity
  • Inside-out tracking
  • 4.0 meter cable

HP Windows Mixed Reality Headset Developer Edition (pre-order in the US and Canada):

  • Two high-resolution liquid crystal displays at 1440 x 1440
  • Front hinged display
  • 95 degree horizontal field of view
  • Display refresh rate up to 90 Hz (native)
  • Built-in audio out and microphone support through 3.5mm jack
  • Single cable with HDMI 2.0 (display) and USB 3.0 (data) for connectivity
  • Inside-out tracking
  • 4.0m/0.6m removable cable
  • Double-padded headband and easy adjustment knob for all day comfort

As a developer, you can start preparing your machine to build immersive experiences today. Visit the Windows Dev Center to view documentation, download tools and join the emerging community of Windows Mixed Reality developers. Download Unity 3D, the most widely-used developer platform for creating immersive applications. Also download the free Visual Studio 2017 Community edition to package and deploy your immersive apps to the Windows Store. Additionally, you should check to make sure your workstation meets the recommended specifications for developers:

System Recommendations for App Developers

  • Processor (desktop): Intel Desktop Core i7 (6+ Core) or AMD Ryzen 7 1700 (8 Core, 16 threads)
  • GPU (desktop): NVIDIA GTX 980/1060, AMD Radeon RX 480 (8GB) equivalent or greater DX12 and WDDM 2.2 capable GPU
  • Drivers: Windows Display Driver Model (WDDM) 2.2
  • Thermal Design Power: 15W or greater
  • Headset connectors: 1x available graphics display port for headset (HDMI 1.4 or DisplayPort 1.2 for 60Hz headsets, HDMI 2.0 or DisplayPort 1.2 for 90Hz headsets)
  • Resolution: SVGA (800×600) or greater
  • Bit depth: 32 bits of color per pixel
  • Memory: 16 GB of RAM or greater
  • Storage: >10 GB additional free space
  • USB: 1x available USB port for headset (USB 3.0 Type-A); USB must supply a minimum of 900mA
  • Bluetooth: Bluetooth 4.0 (for accessory connectivity)

The Windows Mixed Reality headsets are priced to lower the barriers to create immersive experiences. Mixed reality is now open to you as a developer—and if your Windows PC already meets the minimum specs, you don’t really need anything more to start building games and enterprise apps for the rapidly expanding mixed reality market. We can’t wait to see what you build!

Get all the updates for Windows Developers from Build 2017 here.

Building a Telepresence App with HoloLens and Kinect

When does the history of mixed reality start? There are lots of suggestions, but 1977 always shows up as a significant year. That’s the year millions of children – many of whom would one day become the captains of Silicon Valley – first experienced something they wouldn’t be able to name for another decade or so.

That something was the plea of an intergalactic princess that set off a Star Wars film franchise still going strong today: “Help me, Obi-Wan Kenobi, you’re my only hope.” It’s a fascinating validation of Marshall McLuhan’s dictum that the medium is the message. While the content of Princess Leia’s message is what we have an emotional attachment to, it is the medium of the holographic projection – today we would call it “augmented reality” or “mixed reality” – that we remember most vividly.

While this post is not going to provide an end-to-end blueprint for your own Princess Leia hologram, it will provide an overview of the technical terrain, point out some of the technical hurdles and point you in the right direction. You’ll still have to do a lot of work, but if you are interested in building a telepresence app for the HoloLens, this post will help you get there.

An external camera and network connection

The HoloLens is equipped with inside-out cameras. In order to create a telepresence app, however, you are going to need a camera that can face you and take videos of you – in other words, an outside-in camera. This post is going to use the Kinect v2 as an outside-in camera because it is widely available, very powerful and works well with Unity. You may choose to use a different camera that provides the features you need, or even use a smartphone device.

The HoloLens does not allow third-party hardware to plug into its micro-USB port, so you will also need some sort of networking layer to facilitate inter-device communication. For this post, we’ll be using the HoloToolkit’s sharing service – again, because it is just really convenient to do so and even has a dropdown menu inside of the Unity IDE for starting the service. You could, however, build your own custom socket solution as Mike Taulty did or use the Sharing with UNET code in the HoloToolkit Examples, which uses a Unity-provided networking layer.

In the long run, the two choices that will most affect your telepresence solution are what sort of outside-in cameras you plan to support and what sort of networking layer you are going to use. These two choices will determine the scalability and flexibility of your solution.

Using the HoloLens-Kinect project

Many telepresence HoloLens apps today depend in some way on Michelle Ma’s open-source HoloLens-Kinect project. The genius of the app is that it glues together two libraries, the Unity Pro plugin package for Kinect with the HoloToolkit sharing service, and uses them in unintended ways to arrive at a solution.

Even though the Kinect plugin for Unity doesn’t work in UWP (and the Kinect cannot be plugged into a HoloLens device in any case), it can still run when deployed to Windows or when running in the IDE (in which case it is using the .NET 3.5 framework rather than the .NET Core framework). The trick, then, is to run the Kinect integration in Windows and then send messages to the HoloLens over a wireless network to get Kinect and the device working together.

On the network side, the HoloToolkit’s sharing service is primarily used to sync world anchors between different devices. It also requires that a service be instantiated on a PC to act as a communication bus between different devices. The sharing service doesn’t have to be used as intended, however. Since the service is already running on a PC, it can also be used to communicate between just the PC and a single HoloLens device. Moreover, it can be used to send more than just world anchors – it can really be adapted to send any sort of primitive values – for instance, Kinect joint positions.

To use Ma’s code, you need two separate Unity projects: one for running on a desktop PC and the other for running on the HoloLens. You will add the Kinect plugin package to the desktop app. You will add the sharing prefab from the HoloToolkit to both projects. In the app intended for the HoloLens, add the IP address of your machine to the Server Address field in the Sharing Stage component.

The two apps are largely identical. On the PC side, the app takes the body stream from the Kinect and sends the joint data to a script named BodyView.cs. BodyView creates spheres for each joint when it recognizes a new body and then repositions these joints whenever it gets updated Kinect data.

private GameObject CreateBodyObject(ulong id)
{
    GameObject body = new GameObject("Body:" + id);
    for (int i = 0; i < 25; i++)
    {
        GameObject jointObj = GameObject.CreatePrimitive(PrimitiveType.Sphere);

        jointObj.transform.localScale = new Vector3(0.3f, 0.3f, 0.3f);
        jointObj.name = i.ToString();
        jointObj.transform.parent = body.transform;
    }
    return body;
}

private void RefreshBodyObject(Vector3[] jointPositions, GameObject bodyObj)
{
    for (int i = 0; i < 25; i++)
    {
        Vector3 jointPos = jointPositions[i];

        Transform jointObj = bodyObj.transform.FindChild(i.ToString());
        jointObj.localPosition = jointPos;
    }
}
As this is happening, another script called BodySender.cs intercepts this data and sends it to the sharing service. On the HoloLens device, a script named BodyReceiver.cs gets this intercepted joint data and passes it to its own instance of the BodyView class that animates the dot man made up of sphere primitives.

The code used to adapt the sharing service for transmitting Kinect data is contained in Ma’s CustomMessages2 class, which is really just a straight copy of the CustomMessages class from the HoloToolkit sharing example with a small modification that allows joint data to be sent and received:

public void SendBodyData(ulong trackingID, Vector3[] bodyData)
{
    // If we are connected to a session, broadcast our info
    if (this.serverConnection != null && this.serverConnection.IsConnected())
    {
        // Create an outgoing network message to contain all the info we want to send
        NetworkOutMessage msg = CreateMessage((byte)TestMessageID.BodyData);

        foreach (Vector3 jointPos in bodyData)
        {
            AppendVector3(msg, jointPos);
        }

        // Send the message as a broadcast
        // (priority, reliability and channel follow the HoloToolkit sharing sample)
        this.serverConnection.Broadcast(
            msg,
            MessagePriority.Immediate,
            MessageReliability.UnreliableSequenced,
            MessageChannel.Avatar);
    }
}
Moreover, once you understand how CustomMessages2 works, you can pretty much use it to send any kind of data you want.

Be one with The Force

Another thing the Kinect is very good at is gesture recognition. HoloLens currently supports a limited number of gestures and is constrained by what the inside-out cameras can see – mostly just your hands and fingers. You can use the Kinect-HoloLens integration above, however, to extend the HoloLens’ repertoire of gestures to include the user’s whole body.

For example, you can recognize when a user raises her hand above her head simply by comparing the relative positions of these two joints. Because this pose recognition only requires the joint data already transmitted by the sharing service and doesn’t need any additional Kinect data, it can be implemented completely on the receiver app running in the HoloLens.

private void DetectGesture(GameObject bodyObj)
{
    string HEAD = "3";
    string RIGHT_HAND = "11";

    // detect gesture involving the right hand and the head
    var head = bodyObj.transform.FindChild(HEAD);
    var rightHand = bodyObj.transform.FindChild(RIGHT_HAND);

    // if right hand is half a meter above head, do something
    if (rightHand.position.y > head.position.y + .5)
    {
        // e.g., reveal a hidden item (see below)
    }
}

In this sample, a hidden item is shown whenever the pose is detected. It is then hidden again whenever the user lowers her right arm.
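As a minimal sketch of that behavior (the hiddenItem field is a hypothetical GameObject assigned in the Inspector, not part of Ma’s sample), the result of the joint comparison can simply drive the object’s active state:

public GameObject hiddenItem; // hypothetical: the object to reveal, assigned in the Inspector

private void DetectGesture(GameObject bodyObj)
{
    var head = bodyObj.transform.FindChild("3");       // head joint
    var rightHand = bodyObj.transform.FindChild("11"); // right hand joint

    // Show the item while the right hand is at least half a meter above the head;
    // it disappears again as soon as the arm drops.
    bool poseDetected = rightHand.position.y > head.position.y + 0.5f;
    hiddenItem.SetActive(poseDetected);
}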

The Kinect v2 has a rich literature on building custom gestures and even provides a tool for recording and testing gestures called the Visual Gesture Builder that you can use to create unique HoloLens experiences. Keep in mind that while many gesture solutions can be run directly in the HoloLens, in some cases, you may need to run your gesture detection routines on your desktop and then notify your HoloLens app of special gestures through a further modified CustomMessages2 script.
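If you do end up detecting gestures on the desktop, the notification can reuse the same pattern as SendBodyData. Here is a hedged sketch (GestureDetected would be a new entry you add to the TestMessageID enum, and the byte code identifying the gesture is up to you):

public void SendGestureDetected(byte gestureId)
{
    if (this.serverConnection != null && this.serverConnection.IsConnected())
    {
        // A new message type for gesture notifications (hypothetical enum entry)
        NetworkOutMessage msg = CreateMessage((byte)TestMessageID.GestureDetected);
        msg.Write(gestureId);

        this.serverConnection.Broadcast(
            msg,
            MessagePriority.Immediate,
            MessageReliability.ReliableOrdered,
            MessageChannel.Default);
    }
}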

As fun as dot man is to play with, he isn’t really that attractive. If you are using the Kinect for gesture recognition, you can simply hide him by commenting out much of the code in BodyView. Another way to go, though, is to use your Kinect data to animate a 3D character in the HoloLens. This is commonly known as avateering.

Unfortunately, you cannot use joint positions for avateering. The relative sizes of a human being’s limbs are often not going to be the same as those on your 3D model, especially if you are trying to animate models of fantastic creatures rather than just humans, so the relative joint positions will not work out. Instead, you need to use the rotation data of each joint. Rotation data, in the Kinect, is represented by an odd mathematical entity known as a quaternion.


Quaternions are to 3D programming what midichlorians are to the Star Wars universe: They are essential, they are poorly understood, and when someone tries to explain what they are, it just makes everyone else unhappy.

The Unity IDE doesn’t expose quaternions directly. Instead, it uses rotations around the X, Y and Z axes (pitch, yaw and roll) — also known as Euler angles — when you manipulate objects in the Scene view.

There are a few problems with this, however. Using the IDE, if I try to rotate the arm of my character using the yellow drag line, it will actually rotate both the green axis and the red axis along with it. Somewhat more alarming, as I try to rotate along just one axis, the Inspector window shows that my rotation around the Z axis is also affecting the rotation around the X and Y axes. The rotation angles are interlocked in such a way that even the order in which you make changes to the X, Y and Z rotation angles will affect the final orientation of the object you are rotating. Another quirk of Euler angles is that they can end up in a state known as gimbal lock.
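If you want to convince yourself that order matters, here is a small standalone sketch (not part of the tutorial project) that composes the same two 90-degree rotations in opposite orders and measures the angle between the two results:

using UnityEngine;

public class EulerOrderDemo : MonoBehaviour
{
    void Start()
    {
        // In Unity, q1 * q2 applies q2 first and q1 second.
        Quaternion xThenY = Quaternion.Euler(0, 90, 0) * Quaternion.Euler(90, 0, 0); // rotate around X, then Y
        Quaternion yThenX = Quaternion.Euler(90, 0, 0) * Quaternion.Euler(0, 90, 0); // rotate around Y, then X

        // The angle is far from zero: the same two rotations, applied in a
        // different order, produce different orientations.
        Debug.Log(Quaternion.Angle(xThenY, yThenX));
    }
}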

These are some of the reasons that avateering is done using quaternions rather than Euler angles. To better visualize how the Kinect uses quaternions, you can replace dot man’s sphere primitives with arrow models (there are lots you can find in the asset store). Then, grab the orientation for each joint, convert it to a quaternion type (quaternions have four fields rather than the three in Euler angles) and apply it to the rotation property of each arrow.

private static Quaternion GetQuaternionFromJointOrientation(Kinect.JointOrientation jointOrientation)
{
    return new Quaternion(jointOrientation.Orientation.X, jointOrientation.Orientation.Y, jointOrientation.Orientation.Z, jointOrientation.Orientation.W);
}

private void RefreshBodyObject(Vector3[] jointPositions, Quaternion[] quaternions, GameObject bodyObj)
{
    for (int i = 0; i < 25; i++)
    {
        Vector3 jointPos = jointPositions[i];

        Transform jointObj = bodyObj.transform.FindChild(i.ToString());
        jointObj.localPosition = jointPos;
        jointObj.rotation = quaternions[i];
    }
}

These small changes result in the arrow man below who will actually rotate and bend his arms as you do.

For avateering, you basically do the same thing, except that instead of mapping identical arrows to each rotation, you need to map specific body parts to these joint rotations. This post is using the male model from Vitruvius avateering tools, but you are welcome to use any properly rigged character.

Once the character limbs are mapped to joints, they can be updated in pretty much the same way arrow man was. You need to iterate through the joints, find the mapped GameObject, and apply the correct rotation.

private Dictionary<int, string> RigMap = new Dictionary<int, string>()
{
    {0, "SpineBase"},
    {1, "SpineBase/SpineMid"},
    {2, "SpineBase/SpineMid/Bone001/Bone002"},
    // etc ...
    {22, "SpineBase/SpineMid/Bone001/ShoulderRight/ElbowRight/WristRight/ThumbRight"},
    {23, "SpineBase/SpineMid/Bone001/ShoulderLeft/ElbowLeft/WristLeft/HandLeft/HandTipLeft"},
    {24, "SpineBase/SpineMid/Bone001/ShoulderLeft/ElbowLeft/WristLeft/ThumbLeft"}
};

private void RefreshModel(Quaternion[] rotations)
{
    for (int i = 0; i < 25; i++)
    {
        if (RigMap.ContainsKey(i))
        {
            Transform rigItem = _model.transform.FindChild(RigMap[i]);
            rigItem.rotation = rotations[i];
        }
    }
}

This is a fairly simplified example, and depending on your character rigging, you may need to apply additional transforms on each joint to get them to the expected positions. Also, if you need really professional results, you might want to look into using inverse kinematics for your avateering solution.

If you want to play with working code, you can clone Wavelength’s Project-Infrared repository on github; it provides a complete avateering sample using the HoloToolkit sharing service. If it looks familiar to you, this is because it happens to be based on Michelle Ma’s HoloLens-Kinect code.

Looking at point cloud data

To get even closer to the Princess Leia hologram message, we can use the Kinect sensor to send point cloud data. Point clouds are a way to represent depth information collected by the Kinect. Following the pattern established in the previous examples, you will need a way to turn Kinect depth data into a point cloud on the desktop app. After that, you will use shared services to send this data to the HoloLens. Finally, on the HoloLens, the data needs to be reformed as a 3D point cloud hologram.

The point cloud example above comes from the Brekel Pro Point Cloud v2 tool, which allows you to read, record and modify point clouds with your Kinect.

The tool also includes a Unity package that replays point clouds, like the one above, in a Unity for Windows app. The final steps of transferring point cloud data over the HoloToolkit sharing service to the HoloLens are left as an exercise for the reader.
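If you would rather build that first step yourself instead of relying on Brekel’s package, here is a rough sketch of converting a Kinect depth frame into a Vector3 point cloud on the desktop side (it assumes the Kinect v2 Unity plugin’s Windows.Kinect API; the downsampling stride is arbitrary and the actual network send is omitted):

using System.Collections.Generic;
using UnityEngine;
using Windows.Kinect; // Kinect v2 Unity plugin

public class PointCloudSource : MonoBehaviour
{
    private KinectSensor _sensor;
    private DepthFrameReader _reader;
    private ushort[] _depthData;
    private CameraSpacePoint[] _cameraPoints;

    void Start()
    {
        _sensor = KinectSensor.GetDefault();
        var desc = _sensor.DepthFrameSource.FrameDescription;
        _depthData = new ushort[desc.LengthInPixels];
        _cameraPoints = new CameraSpacePoint[desc.LengthInPixels];
        _reader = _sensor.DepthFrameSource.OpenReader();
        _sensor.Open();
    }

    void Update()
    {
        using (var frame = _reader.AcquireLatestFrame())
        {
            if (frame == null) return;
            frame.CopyFrameDataToArray(_depthData);
        }

        // Map raw depth values into 3D points in the Kinect's camera space.
        _sensor.CoordinateMapper.MapDepthFrameToCameraSpace(_depthData, _cameraPoints);

        // Downsample heavily before pushing anything over the network.
        var points = new List<Vector3>();
        for (int i = 0; i < _cameraPoints.Length; i += 16)
        {
            var p = _cameraPoints[i];
            if (!float.IsNegativeInfinity(p.X))
                points.Add(new Vector3(p.X, p.Y, p.Z));
        }

        // 'points' is now a sparse point cloud, ready to be serialized and sent
        // through whatever networking layer you chose (sharing service, sockets, etc.).
    }
}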

If you are interested in a custom server solution, however, you can give the open source LiveScan 3D – HoloLens project a try.

HoloLens shared experiences and beyond

There are actually a lot of ways to orchestrate communication for the HoloLens; so far, we’ve mainly discussed just one. A custom socket solution may be better if you want to institute direct HoloLens-to-HoloLens communication without having to go through a PC-based broker like the sharing service.

Yet another option is to use a framework like WebRTC for your communication layer. This has the advantage of being an open specification, so there are implementations for a wide variety of platforms such as Android and iOS. It is also a communication platform that is used, in particular, for video chat applications, potentially giving you a way to create video conferencing apps not only between multiple HoloLenses, but also between a HoloLens and mobile devices.

In other words, all the tools for doing HoloLens telepresence are out there, including examples of various ways to implement it. It’s now just a matter of waiting for someone to create a great solution.

Building the Terminator Vision HUD in HoloLens

James Cameron’s 1984 film The Terminator introduced many science-fiction idioms we now take for granted. One of the most persistent is the thermal head-up-display (HUD) shot that allows the audience to see the world through the eyes of Arnold Schwarzenegger’s T-800 character. In design circles, it is one of the classic user interfaces that fans frequently try to recreate both as a learning tool and as a challenge.

In today’s post, you’ll learn how to recreate this iconic interface for the HoloLens. To sweeten the task, you’ll also hook up this interface to Microsoft Cognitive Services to perform an analysis of objects in the room, face detection and even some Optical Character Recognition (OCR).

While on the surface this exercise is intended to just be fun, there is a deeper level. Today, most computing is done in 2D. We sit fixed at our desks and stare at rectangular screens. All of our input devices, our furniture and even our office spaces are designed to help us work around 2D computing. All of this will change over the next decade.

Modern computing will eventually be overtaken by both 3D interfaces and 1-dimensional interfaces. 3D interfaces are the next generation of mixed reality devices that we are all so excited about. 1D interfaces, driven by advances in AI research, are overtaking our standard forms of computing more quietly, but just as certainly.

By speaking or looking in a certain direction, we provide inputs to AI systems in the cloud that can quickly analyze our world and provide useful information. When 1D and 3D are combined—as you are going to do in this walkthrough—a profoundly new type of experience is created that may one day lead to virtual personal assistants that will help us to navigate our world and our lives.

The first step happens to be figuring out how to recreate the T-800 thermal HUD display.

Recreating the UI

Start by creating a new 3D project in Unity and call it “Terminator Vision.” Create a new scene called “main.” Add the HoloToolkit unity package to your app. You can download the package from the HoloToolkit project’s GitHub repository. This guide uses HoloToolkit-Unity-v1.5.5.0.unitypackage. In the Unity IDE, select the Assets tab. Then click on Import Package -> Custom Package and find the download location of the HoloToolkit to import it into the scene. In the menu for your Unity IDE, click on HoloToolkit -> Configure to set up your project to target HoloLens.

Once your project and your scene are properly configured, the first thing to add is a Canvas object to the scene to use as a surface to write on. In the hierarchy window, right-click on your “main” scene and select GameObject -> UI -> Canvas from the context menu to add it. Name your Canvas “HUD.”

The HUD also needs some text, so the next step is to add a few text regions to the HUD. In the hierarchy view, right-click on your HUD and add four Text objects by selecting UI -> Text. Call them BottomCenterText, MiddleRightText, MiddleLeftText and MiddleCenterText. Add some text to help you match the UI to the UI from the Terminator movie. For the MiddleRightText add:

LEVEL 2347923 MAX

For the MiddleLeftText object, add:



234654 453 38

654334 450 16

245261 856 26

453665 766 46

382856 863 09

356878 544 04

664217 985 89

For the BottomCenterText, just write “MATCH.” In the scene panel, adjust these Text objects around your HUD until they match with screenshots from the Terminator movie. MiddleCenterText can be left blank for now. You’re going to use it later for surfacing debug messages.

Getting the fonts and colors right is also important – and there are lots of online discussions around identifying exactly what these are. Most of the text in the HUD is probably Helvetica. By default, Unity in Windows assigns Arial, which is close enough. Set the font color to an off-white (236, 236, 236, 255), font-style to bold, and the font size to 20.

The font used for the “MATCH” caption at the bottom of the HUD is apparently known as Heinlein. It was also used for the movie titles. Since this font isn’t easy to find, you can use another font created to emulate the Heinlein font called Modern Vision, which you can find by searching for it on the internet. To use this font in your project, create a new folder called Fonts under your Assets folder. Download the custom font you want to use and drag the TTF file into your Fonts folder. Once this is done, you can simply drag your custom font into the Font field of BottomCenterText or click on the target symbol next to the value field for the font to bring up a selection window. Also, increase the font size for “MATCH” to 32 since the text is a bit bigger than other text in the HUD.

In the screenshots, the word “MATCH” has a white square placed to its right. To emulate this square, create a new InputField (UI -> Input Field) under the HUD object and name it “Square.” Remove the default text, resize it and position it until it matches the screenshots.

Locking the HUD into place

By default, the Canvas will be locked to your world space. You want it to be locked to the screen, however, as it is in the Terminator movies.

To configure a camera-locked view, select the Canvas and examine its properties in the Inspector window. Go to the Render Mode field of your HUD Canvas and select Screen Space – Camera in the drop down menu. Next, drag the Main Camera from your hierarchy view into the Render Camera field of the Canvas. This tells the canvas which camera perspective it is locked to.

The Plane Distance for your HUD is initially set to one meter. This is how far away the HUD will be from your face in the Terminator Vision mixed reality app. Because HoloLens is stereoscopic, adjusting the view for each eye, this is actually a bit close for comfort. The current focal distance for HoloLens is two meters, so we should set the plane distance at least that far away.

For convenience, set Plane Distance to 100. All of the content associated with your HUD object will automatically scale so it fills up the same amount of your visual field.

It should be noted that locking visual content to the camera, known as head-locking, is generally discouraged in mixed reality design as it can cause visual discomfort. Instead, using body-locked content that tags along with the player is the recommended way to create mixed reality HUDs and menus. For the sake of verisimilitude, however, you’re going to break that rule this time.

La vie en rose

Terminator view is supposed to use heat vision. It places a red hue on everything in the scene. In order to create this effect, you are going to play a bit with shaders.

A shader is a highly optimized algorithm that you apply to an image to change it. If you’ve ever worked with any sort of photo-imaging software, then you are already familiar with shader effects like blurring. To create the heat vision colorization effect, you would configure a shader that adds a transparent red distortion to your scene.

If this were a virtual reality experience, in which the world is occluded, you would apply your shader to the camera using the RenderWithShader method. This method takes a shader and applies it to any game object you look at. In a holographic experience, however, this wouldn’t work since you also want to apply the distortion to real-life objects.

In the Unity toolbar, select Assets -> Create -> Material to make a new material object. In the Shader field, click on the drop-down menu and find HoloToolkit -> Lambertian Configurable Transparent. The shaders that come with the HoloToolkit are typically much more performant in HoloLens apps and should be preferred. The Lambertian Configurable Transparent shader will let you select a red to apply; (200, 43, 38) seems to work well, but you should choose the color values that look good to you.

Add a new plane (3D Object -> Plane) to your HUD object and call it “Thermal.” Then drag your new material with the configured Lambertian shader onto the Thermal plane. Set the Rotation of your plane to 270 and set the Scale to 100, 1, 100 so it fills up the view.

Finally, because you don’t want the red colorization to affect your text, set the Z position of each of your Text objects to -10. This will pull the text out in front of your HUD a little so it stands out from the heat vision effect.

Deploy your project to a device or the emulator to see how your Terminator Vision is looking.

Making the text dynamic

To hook up the HUD to Cognitive Services, first orchestrate a way to make the text dynamic. Select your HUD object. Then, in the Inspector window, click on Add Component -> New Script and name your script “Hud.”

Double-click Hud.cs to edit your script in Visual Studio. At the top of your script, create four public fields that will hold references to the Text objects in your project. Save your changes.

public Text InfoPanel;
public Text AnalysisPanel;
public Text ThreatAssessmentPanel;
public Text DiagnosticPanel;

If you look at the Hud component in the Inspector, you should now see four new fields that you can set. Drag the HUD Text objects into these fields.

In the Start method, add some default text so you know the dynamic text is working.

void Start()
{
    AnalysisPanel.text = "ANALYSIS:\n**************\ntest\ntest\ntest";
    ThreatAssessmentPanel.text = "SCAN MODE XXXXX\nINITIALIZE";
    InfoPanel.text = "CONNECTING";
}

When you deploy and run the Terminator Vision app, the default text should be overwritten with the new text you assign in Start. Now set up a System.Threading.Timer to determine how often you will scan the room for analysis. The Timer class measures time in milliseconds. The first parameter you pass to it is a callback method. In the code shown below, you will call the Tick method every 30 seconds. The Tick method, in turn, will call a new method named AnalyzeScene, which will be responsible for taking a photo of whatever the Terminator sees in front of him using the built-in color camera, known as the locatable camera, and sending it to Cognitive Services for further analysis.

    System.Threading.Timer _timer;

    void Start()
    {
        int secondsInterval = 30;
        _timer = new System.Threading.Timer(Tick, null, 0, secondsInterval * 1000);
    }

    private void Tick(object state)
    {
        AnalyzeScene();
    }
Unity accesses the locatable camera in the same way it would normally access any webcam. This involves a series of calls to create the photo capture instance, configure it, take a picture and save it to the device. Along the way, you can also add Terminator-style messages to send to the HUD in order to indicate progress.

    void AnalyzeScene()
    {
        InfoPanel.text = "CALCULATION PENDING";
        PhotoCapture.CreateAsync(false, OnPhotoCaptureCreated);
    }

    PhotoCapture _photoCaptureObject = null;

    void OnPhotoCaptureCreated(PhotoCapture captureObject)
    {
        _photoCaptureObject = captureObject;

        Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();

        CameraParameters c = new CameraParameters();
        c.hologramOpacity = 0.0f;
        c.cameraResolutionWidth = cameraResolution.width;
        c.cameraResolutionHeight = cameraResolution.height;
        c.pixelFormat = CapturePixelFormat.BGRA32;

        captureObject.StartPhotoModeAsync(c, OnPhotoModeStarted);
    }

    private void OnPhotoModeStarted(PhotoCapture.PhotoCaptureResult result)
    {
        if (result.success)
        {
            string filename = string.Format(@"terminator_analysis.jpg");
            string filePath = System.IO.Path.Combine(Application.persistentDataPath, filename);
            _photoCaptureObject.TakePhotoAsync(filePath, PhotoCaptureFileOutputFormat.JPG, OnCapturedPhotoToDisk);
        }
        else
        {
            DiagnosticPanel.text = "DIAGNOSTIC\n**************\n\nUnable to start photo mode.";
            InfoPanel.text = "ABORT";
        }
    }

If the photo is successfully taken and saved, you will grab it, serialize it as an array of bytes and send it to Cognitive Services to retrieve an array of tags that describe the room as well. Finally, you will dispose of the photo capture object.

    void OnCapturedPhotoToDisk(PhotoCapture.PhotoCaptureResult result)
    {
        if (result.success)
        {
            string filename = string.Format(@"terminator_analysis.jpg");
            string filePath = System.IO.Path.Combine(Application.persistentDataPath, filename);

            byte[] image = File.ReadAllBytes(filePath);
            GetTagsAndFaces(image);
        }
        else
        {
            DiagnosticPanel.text = "DIAGNOSTIC\n**************\n\nFailed to save Photo to disk.";
            InfoPanel.text = "ABORT";
        }
        _photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
    }

    void OnStoppedPhotoMode(PhotoCapture.PhotoCaptureResult result)
    {
        _photoCaptureObject = null;
    }

In order to make a REST call, you will need to use the Unity WWW object. You also need to wrap the call in a Unity coroutine in order to make the call non-blocking. You can also get a free Subscription Key to use the Microsoft Cognitive Services APIs just by signing up.

    string _subscriptionKey = "b1e514eYourKeyGoesHere718c5";
    string _computerVisionEndpoint = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Tags,Faces";
    IEnumerator coroutine;

    public void GetTagsAndFaces(byte[] image)
    {
        coroutine = RunComputerVision(image);
        StartCoroutine(coroutine);
    }

    IEnumerator RunComputerVision(byte[] image)
    {
        var headers = new Dictionary<string, string>() {
            { "Ocp-Apim-Subscription-Key", _subscriptionKey },
            { "Content-Type", "application/octet-stream" }
        };

        WWW www = new WWW(_computerVisionEndpoint, image, headers);
        yield return www;

        List<string> tags = new List<string>();
        var jsonResults = www.text;
        var myObject = JsonUtility.FromJson<AnalysisResult>(jsonResults);
        foreach (var tag in myObject.tags)
        {
            tags.Add(tag.name);
        }
        AnalysisPanel.text = "ANALYSIS:\n***************\n\n" + string.Join("\n", tags.ToArray());

        List<string> faces = new List<string>();
        foreach (var face in myObject.faces)
        {
            faces.Add(string.Format("{0} scanned: age {1}.", face.gender, face.age));
        }
        if (faces.Count > 0)
        {
            InfoPanel.text = "MATCH";
        }
        else
        {
            InfoPanel.text = "ACTIVE SPATIAL MAPPING";
        }
        ThreatAssessmentPanel.text = "SCAN MODE 43984\nTHREAT ASSESSMENT\n\n" + string.Join("\n", faces.ToArray());
    }

The Computer Vision tagging feature is a way to detect objects in a photo. It can also be used in an application like this one to do on-the-fly object recognition.

When the JSON data is returned from the call to cognitive services, you can use the JsonUtility to deserialize the data into an object called AnalysisResult, shown below.

    [System.Serializable]
    public class AnalysisResult
    {
        public Tag[] tags;
        public Face[] faces;
    }

    [System.Serializable]
    public class Tag
    {
        public double confidence;
        public string hint;
        public string name;
    }

    [System.Serializable]
    public class Face
    {
        public int age;
        public FaceRectangle faceRectangle;
        public string gender;
    }

    [System.Serializable]
    public class FaceRectangle
    {
        public int height;
        public int left;
        public int top;
        public int width;
    }
One thing to be aware of when you use JsonUtility is that it only works with fields and not with properties. If your object classes have getters and setters, JsonUtility won’t know what to do with them.
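A quick standalone illustration of the difference (the class names here are invented for the example):

using UnityEngine;

[System.Serializable]
public class TagWithFields
{
    public string name;        // public field: JsonUtility populates it
    public double confidence;
}

[System.Serializable]
public class TagWithProperties
{
    public string name { get; set; }        // property: JsonUtility silently ignores it
    public double confidence { get; set; }
}

public class JsonUtilityDemo : MonoBehaviour
{
    void Start()
    {
        const string json = "{\"name\":\"chair\",\"confidence\":0.87}";

        var withFields = JsonUtility.FromJson<TagWithFields>(json);
        Debug.Log(withFields.name);      // prints "chair"

        var withProperties = JsonUtility.FromJson<TagWithProperties>(json);
        Debug.Log(withProperties.name);  // prints nothing useful: the property was never set
    }
}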

When you run the app now, it should update the HUD every 30 seconds with information about your room.

To make the app even more functional, you can add OCR capabilities.

string _ocrEndpoint = "https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr";

public void ReadWords(byte[] image)
{
    coroutine = Read(image);
    StartCoroutine(coroutine);
}

IEnumerator Read(byte[] image)
{
    var headers = new Dictionary<string, string>() {
        { "Ocp-Apim-Subscription-Key", _subscriptionKey },
        { "Content-Type", "application/octet-stream" }
    };

    WWW www = new WWW(_ocrEndpoint, image, headers);
    yield return www;

    List<string> words = new List<string>();
    var jsonResults = www.text;
    var myObject = JsonUtility.FromJson<OcrResults>(jsonResults);
    foreach (var region in myObject.regions)
        foreach (var line in region.lines)
            foreach (var word in line.words)
                words.Add(word.text); // each recognized word carries its text

    string textToRead = string.Join(" ", words.ToArray());
    if (myObject.language != "unk")
    {
        DiagnosticPanel.text = "(language=" + myObject.language + ")\n" + textToRead;
    }
}

This service will pick up any words it finds and redisplay them for the Terminator.

It will also attempt to determine the original language of any words that it finds, which in turn can be used for further analysis.


In this post, you discovered how to recreate a cool visual effect from an iconic sci-fi movie. You also found out how to call Microsoft Cognitive Services from Unity in order to make a richer recreation.

You can extend the capabilities of the Terminator Vision app even further by taking the text you find through OCR and calling Cognitive Services to translate it into another language using the Translator API. You could then use the Bing Speech API to read the text back to you in both the original language and the translated language. This, however, goes beyond the original goal of recreating the Terminator Vision scenario from the 1984 James Cameron film and starts sliding into the world of personal assistants, which is another topic for another time.

View the source code for Terminator Vision on Github here.

Announcing the Xbox Live Creators Program

Today at GDC we announced the launch of the Xbox Live Creators Program, starting with an Insider Preview that gives any developer the opportunity to publish Xbox Live-enabled games on Windows 10 PCs along with Xbox One consoles.

The Creators Program provides game developers access to Xbox Live sign-in, presence and select social features that can all be integrated with their UWP games, and then they can publish their game to Xbox One and Windows 10. This means your title can be seen by every Xbox One owner across the Xbox One family of devices, including Project Scorpio this holiday, as well as hundreds of millions of Windows 10 PCs.

What do you get with the Xbox Live Creators Program?

First, we are opening publishing to the Xbox One console. With the Xbox Live Creators Program, you can ship your UWP game on Xbox One, Windows 10 PC, or simultaneously on both platforms. And because Xbox One offers players a curated store experience, games from the Creators Program will appear in a new, distinct Creators game section in the Store.

Second, we’re making it easy to integrate with Xbox Live using the Xbox Live Creators SDK.  Take advantage of the following capabilities:

  • Xbox Live sign-in and profile, including gamertag.
  • Xbox Live presence, recently played and activity feed.
  • Xbox Live social, including friends, Game Hubs, clubs, party chat, gameDVR and Beam broadcast.
  • Xbox Live leaderboards and featured stats.
  • Title Storage and Connected Storage.

Any developer who wants to take advantage of more Xbox Live capabilities and development and marketing support for their game should apply and enroll in the ID@Xbox program.

What tools can I develop with?

The Creators Program enables you to easily integrate Xbox Live into your existing UWP projects.  Supported game engines include Construct 2, MonoGame, Unity and Xenko, and they all create beautiful games. Others may also work. And, you can develop games for the console without a Dev Kit.

How do I get started?

  1. Join the Developer Preview at https://developer.microsoft.com/games/xbox/xboxlive/creator. This will give you access to the Creators Program configuration pages in Dev Center.
  2. Download and start using the Xbox Live Creators SDK.

While the Xbox Live Creators Program is in limited release to insiders, you can integrate and configure services using the SDK and the Dev Center. However, you will not be able to publish to the Store. We’ll be enabling publishing in the near future, so stay tuned!

To learn more, browse sample code and ask questions, check out the following documentation and communities:

We’d love to hear your feedback!  Use the Xbox Live Creator’s Program UserVoice site to voice your suggestions.

Getting Started with a Mixed Reality Platformer Using Microsoft HoloLens

The platform game genre has undergone constant evolution, from its earliest incarnations in Donkey Kong and Pitfall to recent variations like Flappy Bird. Shigeru Miyamoto’s Super Mario Bros. is recognized as the best platform game of all time, setting a high bar for everyone who came after. The Lara Croft series built on Shigeru’s innovations by taking the standard side-scrolling platformer and expanding it into a 3D world. With mixed reality and HoloLens, we all have the opportunity to expand the world of the platform game yet again.

Standard video game conventions undergo a profound change when you put a platformer in a mixed reality environment. First of all, instead of sitting in a chair and moving your character inside your display screen, you physically follow your character as he moves around the real world. Second, the obstacles your protagonist encounters aren’t just digital ones but also physical objects in the real world, like tables and chairs and stacks of books. Third, because every room you play in effectively becomes a new level, the mixed reality platform game never runs out of levels and every level presents unique challenges. Instead of comparing scores for a certain game stage, you will need to compare how well you did in the living room—or in Jane’s kitchen or in Shigeru’s basement.

In this post, you will learn how to get started building a platform game for HoloLens using all free assets. In doing so, you will learn the basics of using Spatial Mapping to scan a room so your player character can interact with it. You will also use the slightly more advanced features of Spatial Understanding to determine characteristics of the game environment. Finally, all of this will be done in the Unity IDE (currently 5.5.0f3) with the open source HoloToolkit.

Creating your game world with Spatial Mapping

How does HoloLens make it possible for virtual objects and physical objects to interact?  The HoloLens is equipped with a depth camera, similar to the Kinect v2’s depth camera, that progressively scans a room in order to create a spatial map through a technique known as spatial mapping. It uses this data about the real world to create 3D surfaces in the virtual world. Then, using its four environment-aware cameras, it positions and orients the 3D reconstruction of the room in correct relation to the player. This map is often visualized at the start of HoloLens applications as a web of lines blanketing the room the player is in. You can also sometimes trigger this visualization by simply tapping in the air in front of you while wearing the HoloLens.

To play with spatial mapping, create a new 3D project in Unity. You can call the project “3D Platform Game.” Create a new scene for this game called “main.”

Next, add the HoloToolkit unity package to your app. You can download the package from the HoloToolkit project’s GitHub repository. This guide uses HoloToolkit-Unity-v1.5.5.0.unitypackage. In the Unity IDE, select the Assets tab. Then click on Import Package -> Custom Package and find the download location of the HoloToolkit to import it into the scene.

The HoloToolkit provides lots of useful helpers and shortcuts for developing a HoloLens app. Under the HoloToolkit menu, there is a Configure option that lets you correctly rig your game for HoloLens. After being sure to save your scene and project, click on each of these options to configure your scene, your project and your capability settings. Under capabilities, you must make sure to check off SpatialPerception—otherwise spatial mapping will not work. Also, be sure to save your project after each change. If for some reason you would prefer to do this step manually, there is documentation available to walk you through it.

To add spatial mapping functionality to your game, all you need to do is drag the SpatialMapping prefab into your scene from HoloToolkit -> SpatialMapping -> Prefabs. If you build and deploy the game to your HoloLens or HoloLens Emulator now, you will be able to see the web mesh of surface reconstruction occurring.

Congratulations! You’ve created your first level.

Adding a protagonist and an Xbox Controller

The next step is to create your protagonist. If you are lucky enough to have a Mario or a Luigi rigged model, you should definitely use that. In keeping with the earlier promise to use only free assets, however, this guide will use the complimentary Ethan asset.

Go to the Unity menu and select Assets -> Import Package -> Characters. Copy the whole package into your game by clicking Import. Finally, drag the ThirdPersonController prefab from Assets -> Standard Assets -> Characters -> ThirdPersonCharacter -> Prefabs into your scene.

Next, you’ll want a Bluetooth controller to steer your character. Newer Xbox One controllers support Bluetooth. To get one to work with HoloLens, you’ll need to closely follow these directions in order to update the firmware on your controller. Then pair the controller to your HoloLens through the Settings -> Devices menu.

To support the Xbox One controller in your game, you should add another free asset. Open the Asset Store by clicking on Window -> Asset Store and search for Xbox Controller Input for HoloLens. Import this package into your project.

You can hook this up to your character with a bit of custom script. In your scene, select the ThirdPersonController prefab. Find the Third Person User Control script in the Inspector window and delete it. You’re going to write your own custom control that depends on the Xbox Controller package you just imported.

In the Inspector window again, go to the bottom and click on Add Component -> New Script. Name your script ThirdPersonHoloLensControl and copy/paste the following code into it:

using UnityEngine;
using HoloLensXboxController;
using UnityStandardAssets.Characters.ThirdPerson;

public class ThirdPersonHoloLensControl : MonoBehaviour
{
    private ControllerInput controllerInput;
    private ThirdPersonCharacter m_Character;
    private Transform m_Cam;
    private Vector3 m_CamForward;
    private Vector3 m_Move;
    private bool m_Jump;

    public float RotateAroundYSpeed = 2.0f;
    public float RotateAroundXSpeed = 2.0f;
    public float RotateAroundZSpeed = 2.0f;

    public float MoveHorizontalSpeed = 1f;
    public float MoveVerticalSpeed = 1f;

    public float ScaleSpeed = 1f;

    void Start()
    {
        controllerInput = new ControllerInput(0, 0.19f);
        // get the transform of the main camera
        if (Camera.main != null)
            m_Cam = Camera.main.transform;

        m_Character = GetComponent<ThirdPersonCharacter>();
    }

    // Update is called once per frame
    void Update()
    {
        if (!m_Jump)
            m_Jump = controllerInput.GetButton(ControllerButton.A);
    }

    private void FixedUpdate()
    {
        // read inputs
        float h = MoveHorizontalSpeed * controllerInput.GetAxisLeftThumbstickX();
        float v = MoveVerticalSpeed * controllerInput.GetAxisLeftThumbstickY();
        bool crouch = controllerInput.GetButton(ControllerButton.B);

        // calculate move direction to pass to character
        if (m_Cam != null)
        {
            // calculate camera relative direction to move:
            m_CamForward = Vector3.Scale(m_Cam.forward, new Vector3(1, 0, 1)).normalized;
            m_Move = v * m_CamForward + h * m_Cam.right;
        }

        // pass all parameters to the character control script
        m_Character.Move(m_Move, crouch, m_Jump);
        m_Jump = false;
    }
}

This code is a variation on the standard controller code. Now that it is attached, it will let you use a Bluetooth enabled Xbox One controller to move your character. Use the A button to jump. Use the B button to crouch.

You now have a first level and a player character you can move with a controller: pretty much all the necessary components for a platform game. If you deploy the project as is, however, you will find that there is a small problem. Your character falls through the floor.

This happens because, while the character appears as soon as the scene starts, it actually takes a bit of time to scan the room and create meshes for the floor. If the character shows up before those meshes are placed in the scene, he will simply fall through the floor and keep falling indefinitely because there are no meshes to catch him.

How ‘bout some spatial understanding

In order to avoid this, the app needs a bit of spatial smarts. It needs to wait until the spatial meshes are mostly completed before adding the character to the scene. It should also scan the room and find the floor so the character can be added gently rather than dropped into the room. The SpatialUnderstanding prefab will help you accomplish both of these requirements.

Add the Spatial Understanding prefab to your scene. It can be found in Assets -> HoloToolkit -> SpatialUnderstanding -> Prefabs.

Because the SpatialUnderstanding game object also draws a wireframe during scanning, you should disable the visual mesh used by the SpatialMapping game object by deselecting Draw Visual Mesh in its Spatial Mapping Manager script. To do this, select the SpatialMapping game object, find the Spatial Mapping Manager in the Inspector window and uncheck Draw Visual Mesh.

You now need to add some orchestration to the game to prevent the third person character from being added too soon. Select ThirdPersonController in your scene. Then go to the Inspector panel and click on Add Component -> New Script. Call your script OrchestrateGame. While this script could really be placed anywhere, attaching it to the ThirdPersonController will make it easier to manipulate your character’s properties.

Start by adding HideCharacter and ShowCharacter methods to the OrchestrateGame class. This allows you to make the character invisible until you are ready to add him to the game level (the room).

    private void ShowCharacter(Vector3 placement)
    {
        var ethanBody = GameObject.Find("EthanBody");
        ethanBody.GetComponent<SkinnedMeshRenderer>().enabled = true;
        m_Character.transform.position = placement;
        var rigidBody = GetComponent<Rigidbody>();
        rigidBody.angularVelocity = Vector3.zero;
        rigidBody.velocity = Vector3.zero;
    }

    private void HideCharacter()
    {
        var ethanBody = GameObject.Find("EthanBody");
        ethanBody.GetComponent<SkinnedMeshRenderer>().enabled = false;
    }

When the game starts, you will initially hide the character from view. More importantly, you will hook into the SpatialUnderstanding singleton and handle its ScanStateChanged event. Once the scan is done, you will use spatial understanding to correctly place the character.

    private ThirdPersonCharacter m_Character;

    void Start()
    {
        m_Character = GetComponent<ThirdPersonCharacter>();
        SpatialUnderstanding.Instance.ScanStateChanged += Instance_ScanStateChanged;
        HideCharacter(); // keep the character hidden until scanning completes
    }

    private void Instance_ScanStateChanged()
    {
        // assumed completion, following the standard HoloToolkit pattern:
        // also require AllowSpatialUnderstanding, then place the character
        if ((SpatialUnderstanding.Instance.ScanState == SpatialUnderstanding.ScanStates.Done) &&
            SpatialUnderstanding.Instance.AllowSpatialUnderstanding)
        {
            PlaceCharacterInGame();
        }
    }

How do you decide when the scan is completed? You could set up a timer and wait for a predetermined length of time to pass. But this might provide inconsistent results. A better way is to take advantage of the spatial understanding functionality in the HoloToolkit.

Spatial understanding constantly evaluates surfaces picked up by the spatial mapping component. You will set a threshold to decide when you have retrieved enough spatial information. Every time the Update method is called, you will evaluate whether the threshold has been met, as determined by the spatial understanding module. If it has, you call the RequestFinishScan method on SpatialUnderstanding to finish scanning and set its ScanState to Done.

    private bool m_isInitialized;

    public float kMinAreaForComplete = 50.0f;
    public float kMinHorizAreaForComplete = 25.0f;
    public float kMinWallAreaForComplete = 10.0f;

    // Update is called once per frame
    void Update()
    {
        // check if enough of the room is scanned
        if (!m_isInitialized && DoesScanMeetMinBarForCompletion)
        {
            // let the service know we're done scanning
            SpatialUnderstanding.Instance.RequestFinishScan();
            m_isInitialized = true;
        }
    }

    public bool DoesScanMeetMinBarForCompletion
    {
        get
        {
            // Only allow this when we are actually scanning; the AllowSpatialUnderstanding
            // check is an assumed completion of the condition, per the standard HoloToolkit sample
            if ((SpatialUnderstanding.Instance.ScanState != SpatialUnderstanding.ScanStates.Scanning) ||
                (!SpatialUnderstanding.Instance.AllowSpatialUnderstanding))
            {
                return false;
            }

            // Query the current playspace stats
            IntPtr statsPtr = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStatsPtr();
            if (SpatialUnderstandingDll.Imports.QueryPlayspaceStats(statsPtr) == 0)
            {
                return false;
            }
            SpatialUnderstandingDll.Imports.PlayspaceStats stats = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStats();

            // Check our preset requirements
            if ((stats.TotalSurfaceArea > kMinAreaForComplete) ||
                (stats.HorizSurfaceArea > kMinHorizAreaForComplete) ||
                (stats.WallSurfaceArea > kMinWallAreaForComplete))
            {
                return true;
            }
            return false;
        }
    }

Once spatial understanding has determined that enough of the room has been scanned to start the level, you can use spatial understanding one more time to determine where to place your protagonist. First, the PlaceCharacterInGame method, shown below, tries to determine the Y coordinate of the room floor. Next, the main camera object is used to determine the direction the HoloLens is facing in order to find a position two meters in front of the HoloLens. This position is combined with the Y coordinate of the floor in order to place the character gently on the ground in front of the player.

    private void PlaceCharacterInGame()
    {
        // use spatial understanding to find the floor
        SpatialUnderstandingDll.Imports.PlayspaceAlignment alignment =
            SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceAlignment();

        // find the position 2 meters in front of the camera
        var inFrontOfCamera = Camera.main.transform.position + Camera.main.transform.forward * 2.0f;

        // place the character on the floor 2 meters ahead
        ShowCharacter(new Vector3(inFrontOfCamera.x, alignment.FloorYValue, inFrontOfCamera.z));

        // hide the spatial understanding mesh
        var customMesh = SpatialUnderstanding.Instance.GetComponent<SpatialUnderstandingCustomMesh>();
        customMesh.DrawProcessedMesh = false;
    }

You complete the PlaceCharacterInGame method by making the meshes invisible to the player. This reinforces the illusion that your protagonist is running into and jumping over objects in the real world. The last piece needed to finish this game, level design, is unfortunately too complex to cover in this post.

Because this platform game is being developed in mixed reality, you have an interesting choice to make as you design your level. You can do level design the traditional way, using 3D models. Alternatively, you can do it using real-world objects the character must run between and jump over. The best approach may even involve mixing the two, as in the sketch below.
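If you want to experiment with the hybrid approach, a few virtual platforms scattered just above the scanned floor are enough to get started. The sketch below is only illustrative: SimpleLevelBuilder, PlatformPrefab and the hard-coded positions are placeholder names and values, and the floor height is assumed to come from the same playspace alignment query used in PlaceCharacterInGame.

using UnityEngine;

// Minimal sketch of the hybrid approach: scatter a few virtual platforms
// just above the scanned floor so the character can jump between them and
// the real furniture. Prefab and positions are placeholders.
public class SimpleLevelBuilder : MonoBehaviour
{
    public GameObject PlatformPrefab;   // any small 3D model to jump onto
    public float FloorY;                // set this from alignment.FloorYValue

    public void BuildLevel()
    {
        for (int i = 0; i < 3; i++)
        {
            // each platform sits a little further out and a little higher
            var position = new Vector3(i * 1.5f, FloorY + 0.2f * (i + 1), 2.0f + i);
            Instantiate(PlatformPrefab, position, Quaternion.identity);
        }
    }
}

You could call BuildLevel from OrchestrateGame right after PlaceCharacterInGame, once the floor height is known, so the virtual platforms and the character appear together.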


To paraphrase Shakespeare, all the world’s a stage and every room in it is a level. Mixed reality has the power to create new worlds for us—but it also has the power to make us look at the cultural artifacts and conventions we already have, like the traditional platform game, in entirely new ways. Where virtual reality is largely about escapism, the secret of mixed reality may simply be that it makes us appreciate what we already have by giving us fresh eyes with which to look at them.

Monetize your game app with Playtem ads

Here’s something any would-be game developer who wants to find a good monetization strategy needs to know. The psychology of gaming is weird. I have a friend who hates to pay for games. Her favorite kind of games are the freemium ones that make their money off of in-app purchases. Basically, you can play the whole game without ever paying a dime, but you can advance much more quickly if you are willing to spend money for better in-app gear or extra lives.

My friend makes a special point of persevering past really difficult boss levels with low-end gear or putting in extra hours to gain experience points when she could have bought an experience multiplier for just a few dollars. In her mind, she’s beating not only the game itself but also, at a meta-level, the economic rules underpinning how games are distributed.

Like I said, the psychology of gaming is weird. Game and app developers learned long ago that people don’t like to spend money on digital content, even when it’s only a couple of bucks. A price tag can even be a drag on download numbers. So instead of charging for games, developers discovered they could make more money by monetizing games through ads or in-app purchases. Someone who doesn’t want to pay $2 for a game might all the same be willing to pay $20 or more for in-app content once they have invested several hours in gameplay. But for people like my friend, that model doesn’t work.

Which is why Playtem’s monetization strategy is really interesting. The platform, currently available for Unity games deployed to UWP, iOS and Android, combines both ads and in-app purchases.

In-app purchases are successful because they provide a low barrier to entry for the user, and typically generate additional revenue when the player is enjoying the game the most. Native Ads are successful because they underwrite a player’s fun and generate click-throughs for the advertiser. Playtem combines the two models at key satisfaction moments in a game, like the completion of a level, by allowing an advertiser to reward players with free in-app content.

The visual style of the ad-funded in-app content is integrated into the natural style of the game to provide a continuous experience and also a sense that the ads are an organic outgrowth of the game. Because the ads only appear at intermittent gameplay moments when the player is most attentive, advertisers get the maximum benefit from these ads. Game developers, in turn, avoid negative comments about overly intrusive ads.

To make the ad experience even smoother, the player is not required to do anything to receive the free content. There will be an advertiser link in the ad itself, but the player gets the reward whether he clicks on it or not.

Configuration and code for Playtem ads

To set up your Unity game to use Playtem ads, you need to complete the following steps:

  • Add the Unity package to your game.
  • Contact Playtem to get an API key and provide samples of your game’s visual style.
  • Configure your app to support the Playtem platform.
  • Add the appropriate code.

You want to make sure that your app is configured to use the .NET scripting backend and not IL2CPP (.NET is the default).

You also need to configure your app capabilities, in the Publishing Settings tab in Unity, to include InternetClient, InternetClientServer and PrivateNetworkClientServer.
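If you prefer to script these settings rather than click through the Unity UI, something like the following editor script should work. This is only a minimal sketch using Unity’s PlayerSettings API; the class name and menu path are placeholders, and it is not part of the Playtem package.

#if UNITY_EDITOR
using UnityEditor;

// Minimal sketch: apply the settings described above from an editor script
// instead of the Player Settings UI. Class name and menu path are placeholders.
public static class PlaytemBuildSettings
{
    [MenuItem("Tools/Apply Playtem Player Settings")]
    public static void Apply()
    {
        // use the .NET scripting backend (not IL2CPP) for the Windows Store target
        PlayerSettings.SetScriptingBackend(BuildTargetGroup.WSA, ScriptingImplementation.WinRTDotNET);

        // declare the network capabilities the Playtem platform needs
        PlayerSettings.WSA.SetCapability(PlayerSettings.WSACapability.InternetClient, true);
        PlayerSettings.WSA.SetCapability(PlayerSettings.WSACapability.InternetClientServer, true);
        PlayerSettings.WSA.SetCapability(PlayerSettings.WSACapability.PrivateNetworkClientServer, true);
    }
}
#endif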

The Playtem API has only four events that need to be handled. At appropriate moments in your game, you will create an instance of the PlaytemNetwork class, set up your event handlers, and then call the TryToLoadAd method.

// initialize object
PlaytemNetwork playtemNetwork = new PlaytemNetwork(_apiKeyString, _userIdString);

// set up event handlers
playtemNetwork.OnAdLoaded = delegate ()
{
    // if we successfully downloaded an ad, show it
};

playtemNetwork.OnAdLoadingFailed = delegate (string message)
{
    // handle failure to load
};

playtemNetwork.OnRewarded = delegate (string message)
{
    // additional code for in-app purchase reward
};

playtemNetwork.OnAdClosed = delegate ()
{
    // continue game after the ad is closed
};

// start grabbing an ad from the Playtem network
playtemNetwork.TryToLoadAd();
Casual gaming on devices is constantly changing, and developers are always trying to find the best model for making a good income off of their games. Playtem’s model offers a unique way to do this that improves the app experience for the gamer as well as the game developer. Learn more by checking out Playtem’s demo and their documentation for developers.

UWP Experiences – App Samples

The UWP App Experiences are beautiful, cross-device, feature-rich and functional app samples built to demonstrate realistic app scenarios on the UWP platform across PC, Tablet, Xbox and more. Besides being open source on GitHub, each sample is accompanied by at least one blog post and short overview video, and will be published on the Windows Store in the upcoming month to provide easier access for developers.

The News Experience

( source | blog post | video )

Fourth Coffee is a news app that works across desktop, phone, and Xbox One, and offers a premium experience that takes advantage of each device’s strengths including tailored UI for each input modality such as controller on Xbox, touch on tablet and mouse on Desktop.

The Weather Experience

( source | blog post | video )

Atmosphere is a weather app that showcases the use of the popular Unity Engine to build beautiful UWP apps. In addition, the app implements UWP app extensions to enable other developers to extend certain areas of the app, as well as exposes an app service that enables other apps to use that weather information, as illustrated by Fourth Coffee.

The Music Experience

( source | blog post | video )

Backdrop is a cross-platform music app sharing code between UWP and other platforms using Xamarin. It supports background audio on UWP devices and cross-platform device collaboration using SignalR.

The Video Experience

( source | blog post | video )

South Ridge Video is a hosted web application built with React.js and hosted on a web server. The app can easily be converted to a UWP application that takes advantage of native platform capabilities, and can be distributed through the Windows Store as with any other UWP app.

The IoT Experience

( source | blog post | video )

Best For You is a fitness UWP app focused on collecting data from an IoT device using Windows IoT Core, Azure IoT Hub, Azure Event Hub and Azure Stream Analytics for processing.

The Social Experience

( source )

Adventure Works is a cross-device UWP application for sharing adventures and experiences with fictional friends. It is separated into three parts:

About the samples

These samples have been built and designed with multiple UWP devices and scenarios in mind from the start, and are meant to showcase end-to-end solutions. Any developer can take advantage of these samples regardless of the device type or features they are targeting, and we are looking forward to hearing about your experience on the official GitHub repository.

Happy coding!

GameAnalytics SDK for Microsoft UWP Released

We’re excited to announce our partnership with GameAnalytics, a powerful tool that helps developers understand player behavior so they can improve engagement, reduce churn and increase monetization.

The tool gives game developers a central platform that consolidates player data from various channels to help visualize their core gaming KPIs in one convenient view. It also enables team members to collaborate with reporting and benchmark their game to see how it compares with more than 10,000 similar titles.

You can set up GameAnalytics in a few minutes and it’s totally free of charge, without any caps on usage or premium subscription tiers. If you’d rather see the platform in action before making any technical changes, just sign up to view the demo game and data.

GameAnalytics is used by more than 30,000 game developers worldwide and handles over five billion unique events every day across 1.7 billion devices.

“I believe the single most valuable asset for any game developer in today’s market is knowledge,” said GameAnalytics Founder and Chairman, Morten E Wulff. “Since I started GameAnalytics back in 2012, I’ve met with hundreds of game studios from all over the world, and every single one is struggling with increasing user acquisition costs and falling retention rates.”

“When they do strike gold, they don’t always know why. GameAnalytics is here to change that. To be successful, game studios will have to combine creative excellence with a data-driven approach to development and monetization. We are here to bridge this gap and make it available to everyone for free,” he added.

GameAnalytics provides SDKs for every major game engine. The following guide outlines how to install the SDK and set up GameAnalytics to start tracking player behavior in five steps.

1.  Create a free GameAnalytics account

To get started, sign up for a free GameAnalytics account and add your first game. When you’ve created your game, you’ll find the integration keys in the settings menu (the gear icon), under “Game information.” You’ll need to copy your Game Key and Secret Key for the following steps.

2.  Download the standalone SDK for Microsoft UWP

Next, download the GameAnalytics SDK for Microsoft UWP. Once downloaded, you can begin the installation process.

3.  Install the native UWP SDK

To install the GameAnalytics SDK for Microsoft UWP, install it with NuGet by adding the GameAnalytics.UWP.SDK package from the NuGet package manager. For manual installation, use the following instructions:

Manual installation

  • Open GA-SDK-UWP.sln and compile the GA_SDK_UWP project
  • Create a NuGet package: nuget pack GA_SDK_UWP/GA_SDK_UWP.nuspec
  • Copy the resulting GameAnalytics.UWP.SDK.[VERSION].nupkg (where [VERSION] is the version specified in the .nuspec file) into, for example, C:\Nuget.Local (the name and location of the folder is up to you)
  • Add C:\Nuget.Local (or whatever you called the folder) to the NuGet package sources (and disable the official NuGet source)
  • Add the GameAnalytics.UWP.SDK package from the NuGet package manager

4.  Initialize the integration

Call this method to initialize using the Game Key and Secret Key for your game (copied in step 1):

// Initialize
GameAnalytics.Initialize("[game key]", "[secret key]");

Below is a practical example of code that is called at the beginning of the game to initialize GameAnalytics:

using GameAnalyticsSDK.Net;

namespace MyGame
{
    public class MyGameClass
    {
        // ... other code from your project ...

        void OnStart()
        {
            GameAnalytics.ConfigureAvailableResourceCurrencies("gems", "gold");
            GameAnalytics.ConfigureAvailableResourceItemTypes("boost", "lives");
            GameAnalytics.ConfigureAvailableCustomDimensions01("ninja", "samurai");
            GameAnalytics.ConfigureAvailableCustomDimensions02("whale", "dolphin");
            GameAnalytics.ConfigureAvailableCustomDimensions03("horde", "alliance");
            GameAnalytics.Initialize("[game key]", "[secret key]");
        }
    }
}

5.  Build to your game engine

GameAnalytics has provided full documentation for each game engine and platform. You can view and download all files via their GitHub page, or follow the steps below. They currently support building to the following game engines with Microsoft UWP:

You can also connect to the service using their REST API.

Viewing your game data

Once implemented, GameAnalytics provides insight into more than 50 of the top gaming KPIs, straight out of the box. Many of these metrics are viewable on a real-time dashboard to get a quick overview into the health of your game throughout the day.

The real-time dashboard gives you visual insight into your number of concurrent users, incoming events, new users, returning users, transactions, total revenue, first time revenue and error logs.

Creating custom events

You can create your own custom events with unique IDs, which allow you to track actions specific to your game experience and measure these findings within the GameAnalytics interface. Event IDs are fully customizable and should fall within one of the following event types:

  • Business: In-app purchases supporting receipt validation on GA servers.
  • Resource: Managing the flow of virtual currencies, like gems or lives.
  • Progression: Level attempts with Start, Fail and Complete events.
  • Error: Submit exception stack traces or custom error messages.
  • Design: Submit custom event IDs. Useful for tracking metrics specifically needed for your game.

For more information about planning and implementing each of these event types to suit your game, visit the game analytics data and events page.
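To make the event types above concrete, here is a minimal sketch of each one in code. It assumes the event methods exposed by the GameAnalytics .NET SDK (AddBusinessEvent, AddResourceEvent, AddProgressionEvent, AddErrorEvent, AddDesignEvent); check the SDK documentation for the exact overloads in your version. The example currencies and item types match the Configure calls shown earlier, and the IDs are placeholders.

using GameAnalyticsSDK.Net;

public static class MyGameEvents
{
    // Minimal sketch of each event type; method names are assumed from the
    // GameAnalytics .NET SDK and all IDs below are placeholders.
    public static void ReportExamples()
    {
        // Business: a $0.99 purchase of a boost item
        GameAnalytics.AddBusinessEvent("USD", 99, "boost", "mega_boost", "shop");

        // Resource: earn 10 gems, then spend 5 gems on an extra life
        GameAnalytics.AddResourceEvent(EGAResourceFlowType.Source, "gems", 10, "boost", "level_reward");
        GameAnalytics.AddResourceEvent(EGAResourceFlowType.Sink, "gems", 5, "lives", "extra_life");

        // Progression: a level attempt from start to completion
        GameAnalytics.AddProgressionEvent(EGAProgressionStatus.Start, "world01", "level01");
        GameAnalytics.AddProgressionEvent(EGAProgressionStatus.Complete, "world01", "level01");

        // Error: report a caught error message
        GameAnalytics.AddErrorEvent(EGAErrorSeverity.Error, "Null reference in enemy spawner");

        // Design: a fully custom event ID with an optional numeric value
        GameAnalytics.AddDesignEvent("tutorial:step03:completed", 1);
    }
}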

GameAnalytics Dashboards

Developers using GameAnalytics can track their events in a selection of dashboards tailored specifically to games. The dashboards are powerful, yet totally flexible to suit any use case.

Overview Dashboard

With this dashboard you will see a quick snapshot of your core game KPIs.

Acquisition Dashboard

This dashboard provides insight into your player acquisition costs and best marketing sources.


Engagement Dashboard

This dashboard helps to measure how engaged your players are over time.


Monetization Dashboard

This dashboard visualizes all of the monetization metrics relating to your game.


Progression Dashboard

This dashboard helps you understand where players grind or drop off in your game.


Resources Dashboard

This dashboard helps you balance the flow of “sink” and “gain” resources in your game economy.

You can find a more detailed overview for each dashboard on the GameAnalytics documentation portal.