
Windows Mixed Reality dev kits shipping this month

At Microsoft, we are building Windows 10 to be the most complete platform across the broadest range of mixed reality devices and experiences. We believe that mixed reality can empower new waves of creativity and should be affordable and attainable for everyone.

As we announced last year, we’re partnering with leading device makers including Acer, ASUS, Dell, HP, Lenovo, and 3Glasses on a wide range of headsets that pair with your Windows Mixed Reality-ready PC. These are the first mixed reality headsets to deliver built-in inside-out tracking, meaning there is no need to purchase or install external trackers or mount sensors on the wall. Moreover, you don’t need to bother with a complicated setup: just plug and play.

Today, at the Game Developers Conference (GDC) in San Francisco, we shared the next step in our mixed reality journey. As we continue to build on the momentum of the past year, starting this month we will begin to ship developer edition headsets created in partnership with Acer to our developer partners.

Here is your first look at the Acer Windows Mixed Reality Development Edition headset!

Acer Windows Mixed Reality Development Edition headset

To empower even more developers to create in mixed reality, we gave a “golden ticket” to game developers who attended our Windows Mixed Reality session at GDC, and those developers will receive the Acer developer edition headset in the coming months.  Game developers interested in building content for Windows Mixed Reality can sign up for ID@Xbox. All other developers can learn about the Windows Mixed Reality program here.

We’re also excited to share that Windows Mixed Reality experiences will light up on other devices over time, beyond desktop and Microsoft HoloLens. Our plan is to bring mixed reality content to the Xbox One family of devices, including Project Scorpio, in 2018.

When we begin the phased rollout of the developer kits this month, the kits will include the Acer headset, along with documentation and access to Windows 10 Insider preview builds and the software development kit (SDK) to enable developers to build mixed reality applications.

The specifications for the Acer Windows Mixed Reality Development Edition headset include:

  • Two high-resolution liquid crystal displays at 1440 x 1440
  • Display refresh rate up to 90 Hz (native)
  • Built-in audio out and microphone support through 3.5mm jack
  • Single cable with HDMI 2.0 (display) and USB 3.0 (data) for connectivity

We can’t wait to see what our developer partners build with our expanding platform, which currently delivers more than 20,000 Universal Windows applications, including mixed reality apps and games as well as the ability to stream Xbox games to your Windows 10 PC. Not only will you enjoy spectacular, immersive experiences, but also the things that you do most with your Windows PC – in mixed reality. Here’s a video we created to give you a glimpse of what is possible with Windows Mixed Reality:

I look forward to sharing more about our mixed reality journey at our //build conference in May. If you ever have any questions, feel free to reach out to me on Twitter @akipman. I’m excited to build this future together with you!

Thanks,

Alex

Getting Started with a Mixed Reality Platformer Using Microsoft HoloLens

The platform game genre has undergone constant evolution, from its earliest incarnations in Donkey Kong and Pitfall to recent variations like Flappy Bird. Shigeru Miyamoto’s Super Mario Bros. is recognized as the best platform game of all time, setting a high bar for everyone who came after. The Lara Croft series built on Shigeru’s innovations by taking the standard side-scrolling platformer and expanding it into a 3D world. With mixed reality and HoloLens, we all have the opportunity to expand the world of the platform game yet again.

Standard video game conventions undergo a profound change when you put a platformer in a mixed reality environment. First of all, instead of sitting in a chair and moving your character inside your display screen, you physically follow your character as he moves around the real world. Second, the obstacles your protagonist encounters aren’t just digital ones but also physical objects in the real world, like tables and chairs and stacks of books. Third, because every room you play in effectively becomes a new level, the mixed reality platform game never runs out of levels and every level presents unique challenges. Instead of comparing scores for a certain game stage, you will need to compare how well you did in the living room—or in Jane’s kitchen or in Shigeru’s basement.

In this post, you will learn how to get started building a platform game for HoloLens using all free assets. In doing so, you will learn the basics of using Spatial Mapping to scan a room so your player character can interact with it. You will also use the slightly more advanced features of Spatial Understanding to determine characteristics of the game environment. Finally, all of this will be done in the Unity IDE (currently 5.5.0f3) with the open source HoloToolkit.

Creating your game world with Spatial Mapping

How does HoloLens make it possible for virtual objects and physical objects to interact?  The HoloLens is equipped with a depth camera, similar to the Kinect v2’s depth camera, that progressively scans a room in order to create a spatial map through a technique known as spatial mapping. It uses this data about the real world to create 3D surfaces in the virtual world. Then, using its four environment-aware cameras, it positions and orients the 3D reconstruction of the room in correct relation to the player. This map is often visualized at the start of HoloLens applications as a web of lines blanketing the room the player is in. You can also sometimes trigger this visualization by simply tapping in the air in front of you while wearing the HoloLens.

To play with spatial mapping, create a new 3D project in Unity. You can call the project “3D Platform Game.” Create a new scene for this game called “main.”

Next, add the HoloToolkit Unity package to your app. You can download the package from the HoloToolkit project’s GitHub repository. This guide uses HoloToolkit-Unity-v1.5.5.0.unitypackage. In the Unity IDE, select the Assets tab, then click on Import Package -> Custom Package, find the download location of the HoloToolkit and import it into the scene.

The HoloToolkit provides lots of useful helpers and shortcuts for developing a HoloLens app. Under the HoloToolkit menu, there is a Configure option that lets you correctly rig your game for HoloLens. After being sure to save your scene and project, click on each of these options to configure your scene, your project and your capability settings. Under capabilities, you must make sure to check off SpatialPerception—otherwise spatial mapping will not work. Also, be sure to save your project after each change. If for some reason you would prefer to do this step manually, there is documentation available to walk you through it.

To add spatial mapping functionality to your game, all you need to do is drag the SpatialMapping prefab into your scene from HoloToolkit -> SpatialMapping -> Prefabs. If you build and deploy the game to your HoloLens or HoloLens Emulator now, you will be able to see the web mesh of surface reconstruction occurring.

Congratulations! You’ve created your first level.

Adding a protagonist and an Xbox Controller

The next step is to create your protagonist. If you are lucky enough to have a Mario or a Luigi rigged model, you should definitely use that. In keeping with the earlier promise to use only free assets, however, this guide will use the complimentary Ethan asset.

Go to the Unity menu and select Assets -> Import Package -> Characters. Copy the whole package into your game by clicking Import. Finally, drag the ThirdPersonController prefab from Assets -> Standard Assets -> Characters -> ThirdPersonCharacter -> Prefabs into your scene.

Next, you’ll want a Bluetooth controller to steer your character. Newer Xbox One controllers support Bluetooth. To get one to work with HoloLens, you’ll need to closely follow these directions in order to update the firmware on your controller. Then pair the controller to your HoloLens through the Settings -> Devices menu.

To support the Xbox One controller in your game, you should add another free asset. Open the Asset Store by clicking on Window -> Asset Store and search for Xbox Controller Input for HoloLens. Import this package into your project.

You can hook this up to your character with a bit of custom script. In your scene, select the ThirdPersonController prefab. Find the Third Person User Control script in the Inspector window and delete it. You’re going to write your own custom control that depends on the Xbox Controller package you just imported.

In the Inspector window again, go to the bottom and click on Add Component -> New Script. Name your script ThirdPersonHoloLensControl and copy/paste the following code into it:


using UnityEngine;
using HoloLensXboxController;
using UnityStandardAssets.Characters.ThirdPerson;

public class ThirdPersonHoloLensControl : MonoBehaviour
{

    private ControllerInput controllerInput;   // polls the paired Xbox One controller
    private ThirdPersonCharacter m_Character;  // the character being driven
    private Transform m_Cam;                   // main camera transform, used for camera-relative movement
    private Vector3 m_CamForward;              // camera forward projected onto the ground plane
    private Vector3 m_Move;                    // movement vector passed to the character
    private bool m_Jump;                       // set when A is pressed, consumed in FixedUpdate

    public float RotateAroundYSpeed = 2.0f;
    public float RotateAroundXSpeed = 2.0f;
    public float RotateAroundZSpeed = 2.0f;

    public float MoveHorizontalSpeed = 1f;
    public float MoveVerticalSpeed = 1f;

    public float ScaleSpeed = 1f;


    void Start()
    {
        controllerInput = new ControllerInput(0, 0.19f);
        // get the transform of the main camera
        if (Camera.main != null)
        {
            m_Cam = Camera.main.transform;
        }

        m_Character = GetComponent<ThirdPersonCharacter>();
    }

    // Update is called once per frame
    void Update()
    {
        controllerInput.Update();
        if (!m_Jump)
        {
            m_Jump = controllerInput.GetButton(ControllerButton.A);
        }
    }


    private void FixedUpdate()
    {
        // read inputs
        float h = MoveHorizontalSpeed * controllerInput.GetAxisLeftThumbstickX();
        float v = MoveVerticalSpeed * controllerInput.GetAxisLeftThumbstickY();
        bool crouch = controllerInput.GetButton(ControllerButton.B);

        // calculate move direction to pass to character
        if (m_Cam != null)
        {
            // calculate camera relative direction to move:
            m_CamForward = Vector3.Scale(m_Cam.forward, new Vector3(1, 0, 1)).normalized;
            m_Move = v * m_CamForward + h * m_Cam.right;
        }


        // pass all parameters to the character control script
        m_Character.Move(m_Move, crouch, m_Jump);
        m_Jump = false;
    }
}

This code is a variation on the standard controller code. Now that it is attached, it will let you use a Bluetooth-enabled Xbox One controller to move your character. Use the A button to jump and the B button to crouch.

You now have a first level and a player character you can move with a controller: pretty much all the necessary components for a platform game. If you deploy the project as is, however, you will find that there is a small problem. Your character falls through the floor.

This happens because, while the character appears as soon as the scene starts, it actually takes a bit of time to scan the room and create meshes for the floor. If the character shows up before those meshes are placed in the scene, he will simply fall through the floor and keep falling indefinitely because there are no meshes to catch him.

How ‘bout some spatial understanding

In order to avoid this, the app needs a bit of spatial smarts. It needs to wait until the spatial meshes are mostly completed before adding the character to the scene. It should also scan the room and find the floor so the character can be added gently rather than dropped into the room. The SpatialUnderstanding prefab will help you accomplish both of these requirements.

Add the Spatial Understanding prefab to your scene. It can be found in Assets -> HoloToolkit -> SpatialUnderstanding -> Prefabs.

Because the SpatialUnderstanding game object also draws a wireframe during scanning, you should disable the visual mesh used by the SpatialMapping game object by deselecting Draw Visual Mesh in its Spatial Mapping Manager script. To do this, select the SpatialMapping game object, find the Spatial Mapping Manager in the Inspector window and uncheck Draw Visual Mesh.
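
If you would rather flip this switch from code, the manager exposes the same setting as a property. Here is a minimal sketch, assuming HoloToolkit v1.5.x, where SpatialMappingManager is a singleton exposing a DrawVisualMeshes property (in later toolkit versions the class lives in the HoloToolkit.Unity.SpatialMapping namespace):


using HoloToolkit.Unity;
using UnityEngine;

// A sketch: disable the spatial mapping wireframe at startup instead of via the Inspector.
public class DisableMappingMesh : MonoBehaviour
{
    void Start()
    {
        if (SpatialMappingManager.Instance != null)
        {
            SpatialMappingManager.Instance.DrawVisualMeshes = false;
        }
    }
}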

You now need to add some orchestration to the game to prevent the third person character from being added too soon. Select ThirdPersonController in your scene. Then go to the Inspector panel and click on Add Component -> New Script. Call your script OrchestrateGame. While this script could really be placed anywhere, attaching it to the ThirdPersonController will make it easier to manipulate your character’s properties.

Start by adding HideCharacter and ShowCharacter methods to the OrchestrateGame class. This allows you to make the character invisible until you are ready to add him to the game level (the room).


    private void ShowCharacter(Vector3 placement)
    {
        var ethanBody = GameObject.Find("EthanBody");
        ethanBody.GetComponent<SkinnedMeshRenderer>().enabled = true;
        m_Character.transform.position = placement;
        var rigidBody = GetComponent<Rigidbody>();
        rigidBody.angularVelocity = Vector3.zero;
        rigidBody.velocity = Vector3.zero;        
    }

    private void HideCharacter()
    {
        var ethanBody = GameObject.Find("EthanBody");
        ethanBody.GetComponent<SkinnedMeshRenderer>().enabled = false;
    }

When the game starts, you will initially hide the character from view. More importantly, you will hook into the SpatialUnderstanding singleton and handle its ScanStateChanged event. Once the scan is done, you will use spatial understanding to correctly place the character.


    private ThirdPersonCharacter m_Character;

    void Start()
    {
        m_Character = GetComponent<ThirdPersonCharacter>();
        SpatialUnderstanding.Instance.ScanStateChanged += Instance_ScanStateChanged;
        HideCharacter();
    }

    private void Instance_ScanStateChanged()
    {
        if ((SpatialUnderstanding.Instance.ScanState == SpatialUnderstanding.ScanStates.Done) &&
            SpatialUnderstanding.Instance.AllowSpatialUnderstanding)
        {
            PlaceCharacterInGame();
        }
    }

How do you decide when the scan is completed? You could set up a timer and wait for a predetermined length of time to pass. But this might provide inconsistent results. A better way is to take advantage of the spatial understanding functionality in the HoloToolkit.

Spatial understanding is constantly evaluating surfaces picked up by the spatial mapping component. You will set a threshold to decide when you have retrieved enough spatial information. Every time the Update method is called, you will evaluate whether the threshold has been met, as determined by the spatial understanding module. If it is, you call the RequestFinishScan method on SpatialUnderstanding to get it to finish scanning and set its ScanState to Done.


    private bool m_isInitialized;
    public float kMinAreaForComplete = 50.0f;
    public float kMinHorizAreaForComplete = 25.0f;
    public float kMinWallAreaForComplete = 10.0f;
    // Update is called once per frame
    void Update()
    {
        // check if enough of the room is scanned
        if (!m_isInitialized && DoesScanMeetMinBarForCompletion)
        {
            // let service know we're done scanning
            SpatialUnderstanding.Instance.RequestFinishScan();
            m_isInitialized = true;
        }
    }

    public bool DoesScanMeetMinBarForCompletion
    {
        get
        {
            // Only allow this when we are actually scanning
            if ((SpatialUnderstanding.Instance.ScanState != SpatialUnderstanding.ScanStates.Scanning) ||
                (!SpatialUnderstanding.Instance.AllowSpatialUnderstanding))
            {
                return false;
            }

            // Query the current playspace stats
            IntPtr statsPtr = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStatsPtr();
            if (SpatialUnderstandingDll.Imports.QueryPlayspaceStats(statsPtr) == 0)
            {
                return false;
            }
            SpatialUnderstandingDll.Imports.PlayspaceStats stats = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStats();

            // Check our preset requirements
            if ((stats.TotalSurfaceArea > kMinAreaForComplete) ||
                (stats.HorizSurfaceArea > kMinHorizAreaForComplete) ||
                (stats.WallSurfaceArea > kMinWallAreaForComplete))
            {
                return true;
            }
            return false;
        }
    }

Once spatial understanding has determined that enough of the room has been scanned to start the level, you can use spatial understanding one more time to determine where to place your protagonist. First, the PlaceCharacterInGame method, shown below, tries to determine the Y coordinate of the room floor. Next, the main camera object is used to determine the direction the HoloLens is facing in order to find a coordinate position two meters in front of the HoloLens. This position is combined with the Y coordinate of the floor in order to place the character gently on the ground in front of the player.


    private void PlaceCharacterInGame()
    {
        // use spatial understanding to find the floor
        SpatialUnderstandingDll.Imports.QueryPlayspaceAlignment(SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceAlignmentPtr());
        SpatialUnderstandingDll.Imports.PlayspaceAlignment alignment = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceAlignment();

        // find the position 2 meters in front of the camera
        var inFrontOfCamera = Camera.main.transform.position + Camera.main.transform.forward * 2.0f;

        // place the character on the floor, 2 meters ahead of the player
        ShowCharacter(new Vector3(inFrontOfCamera.x, alignment.FloorYValue, inFrontOfCamera.z));

        // hide the spatial understanding mesh
        var customMesh = SpatialUnderstanding.Instance.GetComponent<SpatialUnderstandingCustomMesh>();
        customMesh.DrawProcessedMesh = false;
    }

You complete the PlaceCharacterInGame method by making the meshes invisible to the player. This reinforces the illusion that your protagonist is running into and jumping over objects in the real world. The last thing needed to finish this game, level design, is unfortunately too complex to cover in this post.

Because this platform game has been developed in mixed reality, however, you have an interesting choice to make as you design your levels. You can do level design the traditional way, using 3D models. Alternatively, you can do it using real-world objects that the character must run between and jump over. Finally, the best approach may even involve mixing the two.
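
To make the virtual approach concrete, here is a minimal sketch that scatters a few holographic platforms on the scanned floor in front of the player. PlaceVirtualPlatforms is a hypothetical helper for the OrchestrateGame script, and the sizes and distances are arbitrary starting values; calling it with alignment.FloorYValue from PlaceCharacterInGame would drop a short obstacle course in front of the player.


    // Hypothetical helper: scatter simple cube platforms on the floor ahead of the player.
    private void PlaceVirtualPlatforms(float floorY, int count = 3)
    {
        var forward = Camera.main.transform.forward;
        forward.y = 0;
        forward.Normalize();

        for (int i = 0; i < count; i++)
        {
            // CreatePrimitive gives us a cube with a BoxCollider the character can land on
            var platform = GameObject.CreatePrimitive(PrimitiveType.Cube);
            platform.transform.localScale = new Vector3(0.5f, 0.1f, 0.5f);

            // place each platform a bit farther out, resting on the scanned floor
            var pos = Camera.main.transform.position + forward * (1.5f + i);
            platform.transform.position = new Vector3(pos.x, floorY + 0.05f, pos.z);
        }
    }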

Conclusion

To paraphrase Shakespeare, all the world’s a stage and every room in it is a level. Mixed reality has the power to create new worlds for us—but it also has the power to make us look at the cultural artifacts and conventions we already have, like the traditional platform game, in entirely new ways. Where virtual reality is largely about escapism, the secret of mixed reality may simply be that it makes us appreciate what we already have by giving us fresh eyes with which to see it.

Join us on Feb 8th for Windows Developer Day – Creators Update livestream

On February 8, we’ll be livestreaming a Windows Developer Day, which will outline what’s new for developers in the Windows 10 Creators Update. Whether you’re building for the web or UWP, the latest consumer app or line of business tool, there’s something in it for you. RSVP on the Windows Developer Day site to be the first to know as we share more details in the coming weeks.

Join Kevin Gallo and the Windows engineering team, as they talk through how the latest advances in Windows 10 APIs and tooling enable you to build great things:

  • What’s new with Windows developer tooling: UWP tooling, Bash, Developer mode, and more
  • Learn about the latest XAML advancements, and how UWP helps you build Windows apps that are more personal and productive
  • Hear the developer story behind the recent announcements of Cortana skills and the new Windows mixed reality headsets
  • We’ll also close out the event with a live Q&A panel, where anyone can ask their questions

For this Windows Developer Day, we’re partnering with Channel 9 to share it with the world. We’re also in the process of working with our Windows Developer MVP community to set up local viewing parties around the world, where Windows devs can get together, share tips and network with one another.

Be sure to stay in the loop. Bookmark and RSVP on the Windows Developer Day site to be the first to know as we share more details in the coming weeks.

Device innovation opportunities in mixed reality, gaming, and cellular PCs

In October, I shared how the Windows 10 Creators Update will empower a new wave of creativity, bringing 3D and mixed reality to everyone, enabling every gamer to be a broadcaster, and much more. At our core, we are all creators. Whether an artist, an architect, a teacher or student, a business professional on the go or a hardware engineer building innovative devices for the future, each of us creates using technology in our own way – and we are building Windows for each of you.

Today, we’re at the Windows Hardware Engineering Community event (WinHEC) in Shenzhen, China, where our OEM partners have created more than 300 Windows devices, shipping in 75 countries and generating more than 8 billion RMB in revenue for Shenzhen partners. We continue this journey with Intel, Qualcomm and hardware engineering creators from around the world. Together, we will build the next generation of modern PCs supporting mixed reality, gaming, advanced security, and artificial intelligence; make mixed reality mainstream; and introduce always-connected, more power-efficient cellular PCs running Windows 10.

A new wave of modern PCs

Windows has always been about deep partnerships that marry the best innovation across hardware, software, and services to provide our customers with ground-breaking experiences and great device choices. One of our most important partners making this possible is Intel, and today, I’m thrilled to announce our latest collaboration, codenamed “Project Evo.”

Windows 10 Intel Project Evo

With Project Evo, Microsoft and Intel will deliver all-new ways for devices to light up with the latest in advanced security, artificial intelligence and Cortana, mixed reality, and gaming. Through this collaboration, devices of the future will leverage Microsoft and Intel innovations including:

  • Far-field speech communications so you can ask Cortana a question or play a song from across the room.
  • The latest security capabilities to protect devices from malware and hacking threats, advances in biometric authentication with Windows Hello, sophisticated insights from Microsoft’s Intelligent Security Graph, additional world-class security intelligence, and analytics from Intel.
  • Mixed reality experiences for everyone through affordable PCs and head mounted displays (HMDs) that blend the physical and virtual realities in ways that no other platform can.
  • Gaming innovations like eSports, game broadcasting and support for 4K, High Dynamic Range (HDR), Wide Color Gamut (WCG), spatial audio, and Xbox controllers with native Bluetooth.

Together, our work will extend these experiences to hundreds of millions of PC and HMD customers and raise the bar for what’s possible with Windows PCs.

Making mixed reality mainstream

Windows is the only platform unifying the mixed reality ecosystem, providing inside-out tracking for HMDs, a single platform and standardized inputs for developers, and a consistent interface with a single store for customers.

Windows 10 mixed reality devices

Today, we announced several new ways we’re making mixed reality mainstream in 2017:

  • We submitted Microsoft HoloLens for government approval in China, and we look forward to making it available to developers and commercial customers in China in the first half of 2017.
  • We shared the specifications that we co-developed with Intel for PCs that will power the first headsets capable of mixed reality. HMDs from Acer, ASUS, Dell, HP, and Lenovo will be available next year.
  • Joining those partners, 3Glasses, the leading China-based hardware developer for HMDs, will bring the Windows 10 experience to their S1 device in the first half of 2017, reaching more than 5 million monthly active customers in China.
  • Customers will gain access to amazing mixed reality content. This includes:
    • More than 20,000 universal Windows apps in the catalog
    • 3D objects from the web, dragged and dropped into their physical world using Microsoft Edge
    • Immersive WebVR content via Microsoft Edge
    • 360 degree videos available for the first time in the Movies & TV app
  • Finally, HMD developer kits will become available to developers at the Game Developers Conference in San Francisco.

Visit this link to join us on the journey to make mixed reality mainstream.

Always connected, more power-efficient PCs coming to Windows 10

Finally, we talked about innovation that empowers creation in a connected, mobile world. Everyone is more mobile today than ever before in large part due to pervasive, faster, and more affordable cellular networks.

In future Windows 10 updates, we will enable connectivity that is always within reach. We will help customers easily buy data directly from the Windows Store and put them in control of how they use Wi-Fi and cellular networks, consume data, and manage costs. We will enable our partners to build always-connected devices without hindering form factor design. Specifically, partners can take advantage of eSIM technology to build devices without an exposed SIM slot, making it easier for people to activate a data plan right on their device.

Windows 10 and Qualcomm announce partnership

Finally, to deliver on our customers’ growing needs to create on the go, we announced today that Windows 10 is coming to ARM through our partnership with Qualcomm. For the first time ever, our customers will be able to experience the Windows they know with all the apps, peripherals, and enterprise capabilities they require, on a truly mobile, power efficient, always-connected cellular PC.

Hardware partners will be able to build a range of new Qualcomm Snapdragon-powered Windows 10 PCs that run x86 Win32 and universal Windows apps, including Adobe Photoshop, Microsoft Office and popular Windows games.

With Windows 10 on cellular PCs, we will help everyone make the most of the air around them. We look forward to seeing these new devices, with integrated cellular connectivity and the great experiences people love like touch, pen and Windows Hello, in market as early as next year.

The software and hardware innovations we have seen today position us all to continue to push the boundaries of what’s possible. Together, we can fulfill our mission to build technology that serves all of us, by ensuring there are devices for the creator in each of us.

Terry

Kevin Gallo gives the developer perspective on today’s Windows 10 Event

Did you see the Microsoft Windows 10 Event this morning?  Satya, Terry, and Panos talked about some of the exciting new features coming in the Windows 10 Creators Update and announced some amazing new additions to our Surface family of devices. If you missed the event, be sure to check it out here.

As a developer, my first question when I see new features or new hardware is “What can I do with that?” We want to take advantage of the latest and coolest platform capabilities to make our apps more useful and engaging.

There were several announcements today that offer exciting opportunities for Windows developers.  Three of these that I want to tell you about are:

  • 3D in Windows 10, along with the first VR headsets capable of mixed reality through the Windows 10 Creators Update.
  • Ability to put the people you care about most at the center of your experience—right where they belong—with Windows MyPeople
  • Surface Dial, a new input peripheral designed for the creative process that integrates with Windows and is complementary to other input devices like pen. It gives developers the ability to create unique multi-modal experiences that can be customized based on context. The APIs work in both Universal Windows Platform (UWP) and Win32 apps; see the sketch after this list.
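
As a rough illustration of those APIs, here is a minimal UWP sketch using the Windows.UI.Input RadialController class: it registers a custom Dial menu item and reacts to rotation events. The menu item’s label and icon are arbitrary choices for the example, not a prescribed pattern.


using Windows.UI.Input;

public sealed partial class MainPage : Windows.UI.Xaml.Controls.Page
{
    public MainPage()
    {
        InitializeComponent();

        // Create a controller for this view and add a custom item to the Dial's menu
        RadialController controller = RadialController.CreateForCurrentView();
        RadialControllerMenuItem item =
            RadialControllerMenuItem.CreateFromKnownIcon("Zoom", RadialControllerMenuKnownIcon.Zoom);
        controller.Menu.Items.Add(item);

        // RotationDeltaInDegrees is positive for clockwise turns of the Dial
        controller.RotationChanged += (sender, args) =>
            System.Diagnostics.Debug.WriteLine($"Dial rotated {args.RotationDeltaInDegrees} degrees");
    }
}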

Rather than write a long blog post, I decided to go down to our Channel 9 studios and record a video that gives my thoughts and provides what I hope will be a useful developer perspective on today’s announcements. Here’s my conversation with Seth Juarez from Channel 9:

My team and I are working hard to finish the platform work that will fully support the Windows 10 Creators Update, but you can start experimenting with many of the things we talked about today. Windows Insiders can download the latest flight of the SDK and get started right away.

If you want to dig deeper on the Surface Dial, check out the following links:

Stay tuned to this space for more information in the coming weeks as we get closer to the release of the Windows 10 Creators Update. In the meantime, we always love to hear from you and welcome your feedback at the Windows Developer Feedback site.

How to develop augmented reality apps with Vuforia for Windows 10

Augmented reality is a way to connect virtual objects with the real world, making it possible to naturally interact with them using mobile devices like phones and tablets, or new mixed reality devices like HoloLens.

Vuforia is one of the most popular augmented reality platforms for developers, and Microsoft partnered with Vuforia to bring their SDK to the Universal Windows Platform (UWP).

Today, we will show you how to create a new Unity project and develop a real AR experience from scratch for devices running Windows 10.

image1

You can download the source for this application here, but I encourage you to follow the steps and build this yourself.

As we’ve noted, augmented reality is the creation of a connection between the real world around you and a virtual world. One of the ways to make this connection is to use real objects like cards or magazines, and then connect them with virtual objects rendered on a digital interface.

What are we going to develop?
This article consists of two parts. In Part 1, we will get you up and running with Vuforia, an augmented reality SDK. This includes creating an account, configuring it and getting the SDK. In Part 2, we will develop an app that detects the front cover of a boating magazine and then renders the boat on the front cover in 3D. You can then look around the boat and see it from all different angles.

Part 1: Getting started with Vuforia 6

The first thing we need is an account at https://developer.vuforia.com/.

This is needed so we can get the free license key, as well as a place to upload our markers. A marker can be any image, and is used by Vuforia to connect a real world object with our virtual world. In this article, we will use one marker – an image of the front cover of a magazine.

You can download this front cover here:

image2

1) Creating a license
After logging in, click Develop, then Add License Key:

image3

This will take you to a form where you can set the details of this license. These can be changed or removed later.

Fill it out like this, using your own application name:

image4

2) Creating our markers
Now that we have a license, we can go ahead and create our markers. All of the markers can be added to a single database. Still in the Develop tab, click Target Manager and Add Database:

image5

Fill out the form that pops up to create a database for our markers. This database will be downloaded and added to your app locally – on the device itself – so select Device as the database type:

image6

Once created, click the MagazineCovers entry in the database list to open it:

image7

Now we are ready to add the targets. In the MagazineCovers database view, click Add Target:

image8

A new form will show, where you will need to select the image you want to use and give it a width and a name. Select the magazine front cover I provided earlier, set the width to 8.5 (widths are expressed in scene units) and name it cover1. Click Add to upload it and generate a marker:

image9

Once uploaded, you will see it in the database view:

image10

Done! Next, we will create a new Unity project and add the Vuforia SDK to it.

3) Creating a new Unity Project

If you don’t have Unity yet, you can go ahead and download it here: http://unity3d.com/. A free personal license is available.

Start Unity, and from the project creation wizard, ensure 3D is selected and name the project “MagAR”:

image11

Then click Create project.

4) Downloading the Vuforia SDK

When the project is created, we need to import the Vuforia SDK for Unity package. It can be downloaded from here (use the latest version): https://developer.vuforia.com/downloads/sdk

image12

Once downloaded, you can simply double-click the package file to import it to your solution:

image13

Once extracted, a popup like this will show. Click Import to add the Vuforia SDK to your project. Your solution should look something like this:

image14

5) Adding our Marker Database to our project

Now that we have the Vuforia SDK installed, the last thing we need to do is to add the marker database we created earlier to our project.

Go back to the Vuforia Developer portal, and click the Download Database (All) button from your MagazineCovers database:

image15

Select the Editor as the development platform and click Download:

image16

Once compiled and downloaded, you can just open the Unity package file to import it to your project:

image17

You can see from the import dialogue that we got the cover marker, as well as the database itself. Click Import and you are all set to start developing!

Your solution should look something like this:

image18

Part 2: Developing the app!

Now that we have the Vuforia SDK installed as well as the markers we need, the fun can begin.

Vuforia comes with a set of drag and drop assets. You can take a look at them in the Vuforia/Prefabs folder as seen below:

image19

Vuforia uses a special camera called ARCamera, highlighted above, to enable tracking of markers. Every Vuforia project will need this. This special camera has a lot of settings and configuration possibilities (which we’ll take a look at shortly), and will be able to detect real world objects using, in this case, the front cover of a magazine. Vuforia will then place a virtual anchor on the cover so we can get its virtual position and orientation for use in our virtual world.

Another thing we will need is the target itself. This is the prefab named ImageTarget, and it is also configurable. Let’s go ahead with the development.

1) Adding the ARCamera to our scene and configuring it

a) Add camera
From the Vuforia/Prefabs folder, drag and drop the ARCamera prefab into your scene to add it. You can delete the GameObject called Main Camera from the scene since we want to use the ARCamera as our view into the scene instead:

image20

Next, click the ARCamera prefab to see its properties in the Inspector. This component is the heart of your application and requires some simple setup. The first thing it needs is your app’s License Key.

b) Getting license key
Go to the Vuforia Developer Portal, select your license and copy the entire Vuforia License key from that gray box in the middle of the screen:

image21

c) Setting license key
Next, in the ARCamera inspector, paste the license key to the App License Key box:

image22

d) Setting how many images to track
Another setting we want to verify is the Max Simultaneous Tracked Images setting – we want to have one cover magazine on the table at a given time, so make sure this is set to 1. This can be changed based on your needs:

image23

e) Setting world orientation
Next we want to make sure that we orient the world around our camera, so set the World Center Mode to CAMERA to achieve this:

image24

f) Loading our database
We also want to load and activate the MagazineCovers database, so tick Load MagazineCovers Database and then tick Activate:

image25
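
If you would rather load the database from code (for example, to swap databases at runtime), Vuforia’s Unity API exposes the same dataset workflow. A minimal sketch, assuming the Vuforia 6 API surface (TrackerManager, ObjectTracker, DataSet) and the database name used above; note that in a real app this should run only after Vuforia has finished initializing:


using UnityEngine;
using Vuforia;

// A sketch: load and activate the MagazineCovers dataset from code.
public class LoadMarkerDatabase : MonoBehaviour
{
    void Start()
    {
        // Grab the tracker that handles image targets, then load and activate our dataset
        ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
        DataSet dataSet = tracker.CreateDataSet();
        if (dataSet.Load("MagazineCovers"))
        {
            tracker.ActivateDataSet(dataSet);
        }
    }
}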

g) Testing the ARCamera
At this point, we should be able to test our ARCamera – it won’t render any virtual content yet but, if set up properly, we should be able to see the output from the web camera.

To test, click the play button on top of the scene view. You should be able to see what the camera sees and the Vuforia watermark:

image26

2) Adding our first basic marker

Markers are added to your scene using the ImageTarget prefab. These can then be configured to your liking, including selecting which marker will be used for detection. In Unity, each item added to your scene is a GameObject – think of this as your base class. Each GameObject in your scene can have multiple children and siblings.

The way an ImageTarget works is that it can have child GameObjects and, once the magazine cover is detected, these child GameObjects will become visible. If the cover isn’t detected, the children will be hidden.
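
This show/hide behavior comes from the trackable event handler script that ships on the ImageTarget prefab. If you ever need custom behavior, here is a minimal sketch of the same idea, assuming the Vuforia 6 Unity API (TrackableBehaviour and ITrackableEventHandler):


using UnityEngine;
using Vuforia;

// A sketch of a custom handler: show children while the marker is tracked, hide them otherwise.
public class CoverTrackableHandler : MonoBehaviour, ITrackableEventHandler
{
    void Start()
    {
        var trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
        {
            trackable.RegisterTrackableEventHandler(this);
        }
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        // Toggle every renderer and collider beneath the ImageTarget
        foreach (var r in GetComponentsInChildren<Renderer>(true))
            r.enabled = found;
        foreach (var c in GetComponentsInChildren<Collider>(true))
            c.enabled = found;
    }
}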

a) Adding an ImageTarget
Adding an ImageTarget is as simple as adding an ARCamera, just drag and drop the prefab to the scene hierarchy view:

image27

b) Configuring the ImageTarget
We now need to configure which marker the ImageTarget will use. Select the ImageTarget and view its properties. Find the Database and Image Target properties.

First, set the Database to MagazineCovers, then set the Image Target to cover1:

image28

You can see that it automatically populated some of the fields.

c) Spawning a boat on top of the marker!
Now – let’s spawn a boat on top of the marker! I purchased a nice boat from the Unity Asset Store. There are other boats available that may be free: https://www.assetstore.unity3d.com/en/#!/content/23181

Navigate to the folder for your asset, then drag it (or its prefab) onto the ImageTarget so it becomes a child of the ImageTarget.

image29

Then, position/scale the boat so it fits on top of the ImageTarget (the magazine cover).
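
You can do this entirely in the Inspector, or nudge the transform from a small script. In the sketch below, the values are purely illustrative starting points (every model imports at a different scale), and “Boat” is a hypothetical name for the child model:


using UnityEngine;

// A sketch: snap a child model into place on its ImageTarget parent.
public class FitBoatToCover : MonoBehaviour
{
    void Start()
    {
        // Assumes this script sits on the ImageTarget and the model child is named "Boat"
        Transform boat = transform.Find("Boat");
        if (boat != null)
        {
            boat.localPosition = Vector3.zero;                  // centered on the cover
            boat.localRotation = Quaternion.identity;
            boat.localScale = new Vector3(0.05f, 0.05f, 0.05f); // illustrative; tune per model
        }
    }
}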

Looking at the scene view, you can now see the magazine cover with the boat on top of it:

image30

d) Testing if it spawns
Let’s go ahead and run the app again. Place the magazine on the playfield (in front of the camera) and the yacht will become visible on top of it, tracking the marker as you move it.

e) Adding details
You can add even more things to the scene, like water, and can change the lighting so your scene becomes more realistic. Feel free to play around with it.

3) Exporting as a UWP

Getting your app running on a Windows 10 device makes the experience even better, since a tablet can easily be moved around the marker.

To export the solution from Unity, go to File -> Build Settings:

image31

From this dialogue, set the Platform to Windows Store and the SDK to Universal 10 and click Build. A new dialogue will ask you to select a folder to export to; you can create a new one or select an existing one – it’s up to you. Once the export is done, a new UWP Solution is created in the selected folder.

Go ahead and open this new solution in Visual Studio 2015.

4) Testing the app

Once Visual Studio 2015 has loaded the solution, set the Build Configuration to Master and the Platform to x86, and build and run it on your local machine:

image32

Verify that the application is running and working as it should.

5) Adding a simple UI using XAML

Let’s also add a simple user interface to the app using XAML. To do this, open the MainPage.xaml file from the project tree and view the code. It should simply consist of a SwapChainPanel with a Grid in it, like so:


<SwapChainPanel x:Name="DXSwapChainPanel">
    <Grid x:Name="ExtendedSplashGrid" Background="#FFFFFF">
        <Image x:Name="ExtendedSplashImage" Source="Assets/SplashScreen.png" VerticalAlignment="Center" HorizontalAlignment="Center"/>
    </Grid>
</SwapChainPanel>

You might also want to decorate the screen with a logo and some lines to make the UI look neat and clean. To do this, we need a file from the downloadable source (/Assets folder) called SunglobePatrick26x2001.png. Add this to your solution’s Assets folder.

Next, change your XAML code to be similar to this:


<SwapChainPanel x:Name="DXSwapChainPanel">
    <Grid x:Name="ExtendedSplashGrid" Background="#FFFFFF">
        <Image x:Name="ExtendedSplashImage" Source="Assets/SplashScreen.png"
               VerticalAlignment="Center" HorizontalAlignment="Center"/>
    </Grid>
    <Rectangle Fill="#FFF3C000" HorizontalAlignment="Stretch" Height="3"
               Stroke="#FFF3C000" VerticalAlignment="Top" Margin="360, 64, 24, 0"/>
    <Image VerticalAlignment="Top" HorizontalAlignment="Left" Margin="24, -80, 0, 0"
           Width="300" Source="Assets/SunglobePatrick26x2001.png"/>
    <Rectangle Fill="#FFF3C000" HorizontalAlignment="Stretch" Height="3"
               Stroke="#FFF3C000" VerticalAlignment="Bottom" Margin="24, 0, 24, 64"/>
    <CommandBar VerticalAlignment="Bottom" IsOpen="True" Background="#00000000"
                Foreground="#FFF3C000" Margin="0,0,18,0">
        <AppBarButton Icon="Edit" Foreground="#FFF3C000" />
    </CommandBar>
</SwapChainPanel>

What we’re doing here is using the XAML tags to add two rectangles, used as lines, for a minimalistic UI, as well as adding the logo for the boat.

Run the app again to see the UI on top of your rendering canvas:

image33

That’s it! You now know how to develop AR applications for Windows 10 devices!

Wrapping up

To sum up, we created an AR experience for Windows 10 with the following simple steps:
1) Created an account at the Vuforia Developer Portal
2) Acquired a license
3) Created a Unity project using the Vuforia SDK
4) Exported the Unity project as a UWP app for Windows 10
5) Added a simple UI using XAML

Download Visual Studio to get started.

The Windows team would love to hear your feedback. Please keep the feedback coming using our Windows Developer UserVoice site. If you have a bug to report, please use the Windows Feedback tool built directly into Windows 10.