Tag Archives: 3D

Windows Mixed Reality Dev Kits available for pre-order

Anyone following the excitement around virtual reality and augmented reality over the past year is aware of the anticipation surrounding Microsoft’s new Windows Mixed Reality headsets. You understand that a rapidly expanding mixed reality market is just waiting for developers like you to get involved. During Alex Kipman’s Build keynote, we announced that Windows Mixed Reality dev kits from Acer and HP are now available for pre-order through the Microsoft Store for developers in the US (Acer, HP) and Canada (Acer, HP). Please sign up here so we can notify you once dev kits are available in additional countries.

The Acer Windows Mixed Reality Headset Developer Edition is priced at $299 USD and the HP Windows Mixed Reality Headset Developer Edition is priced at $329 USD. The headsets use state-of-the-art, inside-out tracking so you don’t need to set up external cameras or IR emitters to have a truly immersive experience as you move with six degrees of freedom (6DoF) in mixed reality. You’ll be ready to code new mixed reality experiences out of the box with a headset and a Windows 10 Creators Update PC that meets our recommended hardware specifications for developers. We invite developers to join the Windows Insider program to receive the latest mixed reality experiences from Microsoft each week.

Acer and HP built new mixed reality headsets with different industrial designs to capture the spirit of more personal computing and creativity in Windows. Developers can choose the bright and lightweight headset from Acer or the modern and industrial look of the HP headset with a common set of display and audio features across both headsets:

Acer Windows Mixed Reality Headset Developer Edition (pre-order in the US or Canada):

  • Two high-resolution liquid crystal displays at 1440 x 1440
  • Front-hinged display
  • 95-degree horizontal field of view
  • Display refresh rate up to 90 Hz (native)
  • Built-in audio out and microphone support through 3.5mm jack
  • Single cable with HDMI 2.0 (display) and USB 3.0 (data) for connectivity
  • Inside-out tracking
  • 4.0 meter cable

HP Windows Mixed Reality Headset Developer Edition (pre-order in the US or Canada):

  • Two high-resolution liquid crystal displays at 1440 x 1440
  • Front-hinged display
  • 95-degree horizontal field of view
  • Display refresh rate up to 90 Hz (native)
  • Built-in audio out and microphone support through 3.5mm jack
  • Single cable with HDMI 2.0 (display) and USB 3.0 (data) for connectivity
  • Inside-out tracking
  • 4.0m/0.6m removable cable
  • Double-padded headband and easy adjustment knob for all-day comfort

As a developer, you can start preparing your machine to build immersive experiences today. Visit the Windows Dev Center to view documentation, download tools and join the emerging community of Windows Mixed Reality developers. Download Unity 3D, the most widely-used developer platform for creating immersive applications. Also download the free Visual Studio 2017 Community edition to package and deploy your immersive apps to the Windows Store. Additionally, you should check to make sure your workstation meets the recommended specifications for developers:

 System Recommendations for App Developers

Processor

  • Desktop: Intel Desktop Core i7 (6+ Core) OR AMD Ryzen 7 1700 (8 Core, 16 threads)

GPU

  • Desktop: NVIDIA GTX 980/1060, AMD Radeon RX 480 (8GB) equivalent or greater DX12 and WDDM 2.2 capable GPU
  • Drivers: Windows Display Driver Model (WDDM) 2.2
  • Thermal Design Power: 15W or greater

Display

  • Headset connectors: 1x available graphics display port for headset (HDMI 1.4 or DisplayPort 1.2 for 60Hz headsets, HDMI 2.0 or DisplayPort 1.2 for 90Hz headsets)
  • Resolution: SVGA (800×600) or greater
  • Bit depth: 32 bits of color per pixel

Memory: 16 GB of RAM or greater

Storage: >10 GB additional free space

Connectivity

  • 1x available USB port for headset (USB 3.0 Type-A). USB must supply a minimum of 900mA.
  • Bluetooth 4.0 (for accessory connectivity)

The Windows Mixed Reality headsets are priced to lower the barriers to create immersive experiences. Mixed reality is now open to you as a developer—and if your Windows PC already meets the minimum specs, you don’t really need anything more to start building games and enterprise apps for the rapidly expanding mixed reality market. We can’t wait to see what you build!

Get all the updates for Windows Developers from Build 2017 here.

Building a Telepresence App with HoloLens and Kinect

When does the history of mixed reality start? There are lots of suggestions, but 1977 always shows up as a significant year. That’s the year millions of children – many of whom would one day become the captains of Silicon Valley – first experienced something they wouldn’t be able to name for another decade or so.

It was the plea of an intergalactic princess that set off a Star Wars film franchise still going strong today: “Help me, Obi-Wan Kenobi, you’re my only hope.” It’s a fascinating validation of Marshall McLuhan’s dictum that the medium is the message. While the content of Princess Leia’s message is what we have an emotional attachment to, it is the medium of the holographic projection – today we would call it “augmented reality” or “mixed reality” – that we remember most vividly.

While this post is not going to provide an end-to-end blueprint for your own Princess Leia hologram, it will provide an overview of the technical terrain, point out some of the technical hurdles and point you in the right direction. You’ll still have to do a lot of work, but if you are interested in building a telepresence app for the HoloLens, this post will help you get there.

An external camera and network connection

The HoloLens is equipped with inside-out cameras. In order to create a telepresence app, however, you are going to need a camera that can face you and take videos of you – in other words, an outside-in camera. This post is going to use the Kinect v2 as an outside-in camera because it is widely available, very powerful and works well with Unity. You may choose to use a different camera that provides the features you need, or even use a smartphone device.

The HoloLens does not allow third-party hardware to plug into its micro-USB port, so you will also need some sort of networking layer to facilitate inter-device communication. For this post, we’ll be using the HoloToolkit’s sharing service – again, because it is just really convenient to do so and even has a dropdown menu inside of the Unity IDE for starting the service. You could, however, build your own custom socket solution as Mike Taulty did or use the Sharing with UNET code in the HoloToolkit Examples, which uses a Unity-provided networking layer.

In the long run, the two choices that will most affect your telepresence solution are what sort of outside-in cameras you plan to support and what sort of networking layer you are going to use. These two choices will determine the scalability and flexibility of your solution.

Using the HoloLens-Kinect project

Many telepresence HoloLens apps today depend in some way on Michelle Ma’s open-source HoloLens-Kinect project. The genius of the app is that it glues together two libraries, the Unity Pro plugin package for Kinect and the HoloToolkit sharing service, and uses them in unintended ways to arrive at a solution.

Even though the Kinect plugin for Unity doesn’t work in UWP (and the Kinect cannot be plugged into a HoloLens device in any case), it can still run when deployed to Windows or when running in the IDE (in which case it is using the .NET 3.5 framework rather than the .NET Core framework). The trick, then, is to run the Kinect integration in Windows and then send messages to the HoloLens over a wireless network to get Kinect and the device working together.

On the network side, the HoloToolkit’s sharing service is primarily used to sync world anchors between different devices. It also requires that a service be instantiated on a PC to act as a communication bus between different devices. The sharing service doesn’t have to be used as intended, however. Since the service is already running on a PC, it can also be used to communicate between just the PC and a single HoloLens device. Moreover, it can be used to send more than just world anchors – it can really be adapted to send any sort of primitive values – for instance, Kinect joint positions.

To use Ma’s code, you need two separate Unity projects: one for running on a desktop PC and the other for running on the HoloLens. You will add the Kinect plugin package to the desktop app. You will add the sharing prefab from the HoloToolkit to both projects. In the app intended for the HoloLens, add the IP address of your machine to the Server Address field in the Sharing Stage component.

The two apps are largely identical. On the PC side, the app takes the body stream from the Kinect and sends the joint data to a script named BodyView.cs. BodyView creates spheres for each joint when it recognizes a new body and then repositions these joints whenever it gets updated joint data from the Kinect.


private GameObject CreateBodyObject(ulong id)
{
    // Create a parent object for the tracked body and one sphere for each of the 25 Kinect joints.
    GameObject body = new GameObject("Body:" + id);
    for (int i = 0; i < 25; i++)
    {
        GameObject jointObj = GameObject.CreatePrimitive(PrimitiveType.Sphere);

        jointObj.transform.localScale = new Vector3(0.3f, 0.3f, 0.3f);
        // Name each sphere after its joint index so RefreshBodyObject can find it again later.
        jointObj.name = i.ToString();
        jointObj.transform.parent = body.transform;
    }
    return body;
}


private void RefreshBodyObject(Vector3[] jointPositions, GameObject bodyObj)
{
    // Move each joint sphere to the latest position reported by the Kinect.
    for (int i = 0; i < 25; i++)
    {
        Vector3 jointPos = jointPositions[i];

        Transform jointObj = bodyObj.transform.FindChild(i.ToString());
        jointObj.localPosition = jointPos;
    }
}

As this is happening, another script called BodySender.cs intercepts this data and sends it to the sharing service. On the HoloLens device, a script named BodyReceiver.cs gets this intercepted joint data and passes it to its own instance of the BodyView class that animates the dot man made up of sphere primitives.

The code used to adapt the sharing service for transmitting Kinect data is contained in Ma’s CustomMessages2 class, which is really just a straight copy of the CustomMessages class from the HoloToolkit sharing example with a small modification that allows joint data to be sent and received:



public void SendBodyData(ulong trackingID, Vector3[] bodyData)
{
    // If we are connected to a session, broadcast our info
    if (this.serverConnection != null && this.serverConnection.IsConnected())
    {
        // Create an outgoing network message to contain all the info we want to send
        NetworkOutMessage msg = CreateMessage((byte)TestMessageID.BodyData);

        msg.Write(trackingID);

        foreach (Vector3 jointPos in bodyData)
        {
            AppendVector3(msg, jointPos);
        }

        // Send the message as a broadcast
        this.serverConnection.Broadcast(
            msg,
            MessagePriority.Immediate,
            MessageReliability.UnreliableSequenced,
            MessageChannel.Avatar);
    }
}

Moreover, once you understand how CustomMessages2 works, you can pretty much use it to send any kind of data you want.
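
For example, the receive side only has to read values back out in the same order they were written. The sketch below is illustrative rather than a copy of Ma’s code: it assumes the HoloToolkit’s NetworkInMessage reader (ReadInt64, ReadFloat), and the handler name and the final BodyView call are placeholders you would wire into your own receiver class.

void OnBodyDataReceived(NetworkInMessage msg)
{
    // The payload mirrors SendBodyData above: a tracking ID followed by
    // 25 joint positions written as three floats each.
    ulong trackingId = (ulong)msg.ReadInt64();

    Vector3[] jointPositions = new Vector3[25];
    for (int i = 0; i < jointPositions.Length; i++)
    {
        jointPositions[i] = new Vector3(msg.ReadFloat(), msg.ReadFloat(), msg.ReadFloat());
    }

    // Hand the unpacked joints to BodyView to animate the sphere "dot man"
    // (placeholder call; substitute your own receiver plumbing).
    // _bodyView.UpdateBody(trackingId, jointPositions);
}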

Be one with The Force

Another thing the Kinect is very good at is gesture recognition. HoloLens currently supports a limited number of gestures and is constrained by what the inside-out cameras can see – mostly just your hands and fingers. You can use the Kinect-HoloLens integration above, however, to extend the HoloLens’ repertoire of gestures to include the user’s whole body.

For example, you can recognize when a user raises her hand above her head simply by comparing the relative positions of these two joints. Because this pose recognition only requires the joint data already transmitted by the sharing service and doesn’t need any additional Kinect data, it can be implemented completely on the receiver app running in the HoloLens.


private void DetectGesture(GameObject bodyObj)
{
    string HEAD = "3";
    string RIGHT_HAND = "11";

    // detect gesture involving the right hand and the head
    var head = bodyObj.transform.FindChild(HEAD);
    var rightHand = bodyObj.transform.FindChild(RIGHT_HAND);
        
    // if right hand is half a meter above head, do something
    if (rightHand.position.y > head.position.y + .5)
        _gestureCompleteObject.SetActive(true);
    else
        _gestureCompleteObject.SetActive(false);
}

In this sample, a hidden item is shown whenever the pose is detected. It is then hidden again whenever the user lowers her right arm.

The Kinect v2 has a rich literature on building custom gestures and even provides a tool for recording and testing gestures called the Visual Gesture Builder that you can use to create unique HoloLens experiences. Keep in mind that while many gesture solutions can be run directly in the HoloLens, in some cases, you may need to run your gesture detection routines on your desktop and then notify your HoloLens app of special gestures through a further modified CustomMessages2 script.

As fun as dot man is to play with, he isn’t really that attractive. If you are using the Kinect for gesture recognition, you can simply hide him by commenting out most of the code in BodyView. Another way to go, though, is to use your Kinect data to animate a 3D character in the HoloLens. This is commonly known as avateering.

Unfortunately, you cannot use joint positions for avateering. The relative sizes of a human being’s limbs are often not going to be the same as those on your 3D model, especially if you are trying to animate models of fantastic creatures rather than just humans, so the relative joint positions will not work out. Instead, you need to use the rotation data of each joint. Rotation data, in the Kinect, is represented by an odd mathematical entity known as a quaternion.

Quaternions

Quaternions are to 3D programming what midichlorians are to the Star Wars universe: They are essential, they are poorly understood, and when someone tries to explain what they are, it just makes everyone else unhappy.

The Unity IDE doesn’t expose quaternions directly. Instead, it uses rotations around the X, Y and Z axes (pitch, yaw and roll) when you manipulate objects in the Scene view. These are also known as Euler angles.

There are a few problems with this, however. Using the IDE, if I try to rotate the arm of my character using the yellow drag line, it will actually rotate both the green axis and the red axis along with it. Somewhat more alarming, as I try to rotate along just one axis, the Inspector window shows that my rotation around the Z axis is also affecting the rotation around the X and Y axes. The rotation angles are actually interlocked in such a way that even the order in which you make changes to the X, Y and Z rotation angles will affect the final orientation of the object you are rotating. Another troublesome feature of Euler angles is that they can sometimes end up in a state known as gimbal lock.

These are some of the reasons that avateering is done using quaternions rather than Euler angles. To better visualize how the Kinect uses quaternions, you can replace dot man’s sphere primitives with arrow models (there are lots you can find in the asset store). Then, grab the orientation for each joint, convert it to a quaternion type (quaternions have four fields rather than the three in Euler angles) and apply it to the rotation property of each arrow.


private static Quaternion GetQuaternionFromJointOrientation(Kinect.JointOrientation jointOrientation)
{
    return new Quaternion(jointOrientation.Orientation.X, jointOrientation.Orientation.Y, jointOrientation.Orientation.Z, jointOrientation.Orientation.W);
}
private void RefreshBodyObject(Vector3[] jointPositions, Quaternion[] quaternions, GameObject bodyObj)
{
    for (int i = 0; i < 25; i++)
    {
        Vector3 jointPos = jointPositions[i];

        Transform jointObj = bodyObj.transform.FindChild(i.ToString());
        jointObj.localPosition = jointPos;
        jointObj.rotation = quaternions[i];
    }
}

These small changes result in the arrow man below who will actually rotate and bend his arms as you do.

For avateering, you basically do the same thing, except that instead of mapping identical arrows to each rotation, you need to map specific body parts to these joint rotations. This post is using the male model from Vitruvius avateering tools, but you are welcome to use any properly rigged character.

Once the character limbs are mapped to joints, they can be updated in pretty much the same way arrow man was. You need to iterate through the joints, find the mapped GameObject, and apply the correct rotation.


private Dictionary<int, string> RigMap = new Dictionary<int, string>()
{
    {0, "SpineBase"},
    {1, "SpineBase/SpineMid"},
    {2, "SpineBase/SpineMid/Bone001/Bone002"},
    // etc ...
    {22, "SpineBase/SpineMid/Bone001/ShoulderRight/ElbowRight/WristRight/ThumbRight"},
    {23, "SpineBase/SpineMid/Bone001/ShoulderLeft/ElbowLeft/WristLeft/HandLeft/HandTipLeft"},
    {24, "SpineBase/SpineMid/Bone001/ShoulderLeft/ElbowLeft/WristLeft/ThumbLeft"}
};

private void RefreshModel(Quaternion[] rotations)
{
    for (int i = 0; i < 25; i++)
    {
        if (RigMap.ContainsKey(i))
        {
            Transform rigItem = _model.transform.FindChild(RigMap[i]);
            rigItem.rotation = rotations[i];
        }
    }
}

This is a fairly simplified example, and depending on your character rigging, you may need to apply additional transforms on each joint to get them to the expected positions. Also, if you need really professional results, you might want to look into using inverse kinematics for your avateering solution.

If you want to play with working code, you can clone Wavelength’s Project-Infrared repository on GitHub; it provides a complete avateering sample using the HoloToolkit sharing service. If it looks familiar to you, that is because it happens to be based on Michelle Ma’s HoloLens-Kinect code.

Looking at point cloud data

To get even closer to the Princess Leia hologram message, we can use the Kinect sensor to send point cloud data. Point clouds are a way to represent depth information collected by the Kinect. Following the pattern established in the previous examples, you will need a way to turn Kinect depth data into a point cloud on the desktop app. After that, you will use shared services to send this data to the HoloLens. Finally, on the HoloLens, the data needs to be reformed as a 3D point cloud hologram.
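
If you would rather roll your own conversion than use an off-the-shelf tool, a rough desktop-side sketch might look like the following. It uses the same Windows.Kinect types exposed by the Kinect plugin for Unity; the class name is a placeholder, and the decimation and network-packing steps you would need before sending anything to the HoloLens are omitted.

using UnityEngine;
using Windows.Kinect;

public class PointCloudSource : MonoBehaviour
{
    private KinectSensor _sensor;
    private DepthFrameReader _reader;
    private ushort[] _depthData;
    private CameraSpacePoint[] _cameraPoints;

    void Start()
    {
        _sensor = KinectSensor.GetDefault();
        var desc = _sensor.DepthFrameSource.FrameDescription;
        _depthData = new ushort[desc.LengthInPixels];
        _cameraPoints = new CameraSpacePoint[desc.LengthInPixels];
        _reader = _sensor.DepthFrameSource.OpenReader();
        _sensor.Open();
    }

    void Update()
    {
        using (var frame = _reader.AcquireLatestFrame())
        {
            if (frame == null) return;
            frame.CopyFrameDataToArray(_depthData);
        }

        // Convert raw depth values into 3D points in the Kinect's camera space.
        _sensor.CoordinateMapper.MapDepthFrameToCameraSpace(_depthData, _cameraPoints);

        // _cameraPoints is now a point cloud; from here you would decimate it,
        // pack it into messages and send it over your networking layer.
    }
}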

The point cloud example above comes from the Brekel Pro Point Cloud v2 tool, which allows you to read, record and modify point clouds with your Kinect.

The tool also includes a Unity package that replays point clouds, like the one above, in a Unity for Windows app. The final step of transferring point cloud data over the HoloToolkit sharing service to the HoloLens is an exercise that will be left to the reader.

If you are interested in a custom server solution, however, you can give the open source LiveScan 3D – HoloLens project a try.

HoloLens shared experiences and beyond

There are actually many ways to orchestrate communication for the HoloLens; so far, we’ve mainly discussed just one. A custom socket solution may be better if you want direct HoloLens-to-HoloLens communication without having to go through a PC-based broker like the sharing service.

Yet another option is to use a framework like WebRTC for your communication layer. This has the advantage of being an open specification, so there are implementations for a wide variety of platforms such as Android and iOS. It is also a communication platform that is used, in particular, for video chat applications, potentially giving you a way to create video conferencing apps not only between multiple HoloLenses, but also between a HoloLens and mobile devices.

In other words, all the tools for doing HoloLens telepresence are out there, including examples of various ways to implement it. It’s now just a matter of waiting for someone to create a great solution.

New MapControl features in Windows 10 Creators Update

We have updated the Maps platform for the Windows 10 Creators Update to give our maps a cleaner, more beautiful and realistic look that is consistent between web and UWP apps. We are also making Road view look more authentic by adding layers of terrain, where previously the Road view appeared flat. In addition to an updated 3D engine, we have added features that our users requested, including improvements to styling, offline capabilities and routing.

Just a quick note regarding the improvements to the engine: even though we go through many compatibility tests and make our best effort to minimize impact to third-party apps, it is always possible that something might have slipped through. This would be a good time to review your apps and confirm that the updated Maps platform is working as expected for your scenarios.

With that out of the way, please see the highlights below around some of the top asked-for features.

Map Styling APIs

We are happy to announce a set of Map Styling APIs for the Windows 10 Map Control. The styling APIs allow you to customize the look and feel of the map canvas on the fly. As a developer, you can control map rendering by dynamically disabling or changing the styling characteristics of a layer, or by emphasizing certain aspects of the map canvas.

Map customization features are supported for regions where Windows 10 Map Control performs vector rendering, which includes all markets except for China, Japan and South Korea. Since vector mode supports offline storage for all layers, the maps customization feature is available for both online and offline modes.

Customizing the map

You can customize the look and feel of the map by using the new MapStyleSheet and setting the StyleSheet property of the MapControl. Think of a map stylesheet as a set of custom rules defined in JSON markup which can be combined to override our styling defaults. It allows you to customize colors, fonts and visibility status of various map elements, such as roads, areas (e.g. building structures, parks, water) and political features (e.g. city titles).
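
For example, assuming myMap is the MapControl on your page, a minimal sketch combines a built-in style sheet with a small JSON override. The JSON rule shown (tinting water) is illustrative only; see the MapStyleSheet documentation for the full set of entries.

// Start from a built-in style sheet and layer a small JSON override on top of it.
string json = @"
{
  ""version"": ""1.*"",
  ""elements"": {
    ""water"": { ""fillColor"": ""#FF2D5F8A"" }
  }
}";

myMap.StyleSheet = MapStyleSheet.Combine(new[]
{
    MapStyleSheet.RoadDark(),          // one of the built-in sheets
    MapStyleSheet.ParseFromJson(json)  // your overrides win where they overlap
});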

Here are some great examples of re-styling layers or specific primitives within a layer in the Windows 10 Map Control:

Spooky Map

Some of you might remember the Spooky Map that we released over a year ago in Bing to celebrate one of our favorite holidays. Back then we had just revamped our styling system and our team had a lot of fun coming up with this Halloween theme.

The Spooky road map style is rendered by Windows 10 Map Control through changing the land color, the color for the neighborhood labels and the fill color for areas such as airports, cemeteries and education structures.

Winter Map

The Winter road map style is rendered by Windows 10 Map Control through changing the land color, the color for neighborhood labels and the fill color for the areas such as cemeteries, education structures and military bases.

Gray Map

The Gray road map style is rendered by Windows 10 Map Control through changing the land and water color, the color for all labels and the fill color for all areas and map elements such as roads, railways, runways, transportation network lines and structures.

3D Map Engine

The map engine that ships with the Windows 10 Creators Update (RS2) is a 3D map viewer. It displays objects on top of the terrain and uses a globe or Web Mercator projection model for vector rendering and map interactions. Vectors are full 3D objects in a 3D scene. To place 3D objects correctly, the engine uses elevation data on the vertices of the vector geometry. If you don’t supply altitude values for points and polylines, they will simply be draped over the terrain surface.

Here are some of the major changes to keep in mind.

3D Scenes

Both Road and Aerial maps now support 3D views and elevation data. As you might remember, a 3D perspective of the map can be specified by using MapScene. While the map scene represents the 3D view that appears in the map, the MapCamera represents the position of the camera that would display such a view.

Labels created by the 3D map engine are placed lying down or standing up in the 3D scene to improve readability and visual quality. They also use a distance-fade occlusion rule with other 3D geometry, indicative of their actual position in the scene. Because the map view can show both oblique and nadir views, as well as 3D topology, it is important to set your view carefully so that obstacles (such as mountains) do not get in your way. To help with this, the control supports the concept of scenes as a primary tool for establishing the best views. Via the TrySetSceneAsync methods, you can establish different perspectives and the map will automatically choose the best camera for that perspective based on environmental factors, including the user’s current view within the map.
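
For example, here is a minimal sketch, assuming myMap is your MapControl and that this code runs inside an async event handler; the coordinates are illustrative.

// Frame a 400-meter scene around a point of interest, looking from the east
// at a 60-degree pitch, and let the control pick the best camera for it.
var center = new Geopoint(new BasicGeoposition
{
    Latitude = 36.1126,    // Las Vegas Strip (illustrative coordinates)
    Longitude = -115.1767
});

MapScene scene = MapScene.CreateFromLocationAndRadius(
    center,
    400,    // radius in meters
    270,    // heading in degrees: camera faces west, i.e. viewing from the east
    60);    // pitch in degrees

await myMap.TrySetSceneAsync(scene);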

Las Vegas Strip, oblique view from the east

For more details, see Display Maps with 2D, 3D and Streetside Views.

Displaying points of interest (POI) on the map

Typically, when marking points of interest (POI) on a map, the first thing you consider is using pushpins, images, shapes and/or XAML UI elements. When adding points of interest to a 3D map, however, you should also consider the altitude and the AltitudeReferenceSystem to be used.

You’ll need an altitude reference system to indicate what the altitude value is relative to. If you specify Terrain, the altitude value will be relative to the terrain and will not include surface objects like trees or buildings. An Ellipsoid altitude value will be relative to the WGS84 ellipsoid, while a Surface altitude value will be relative to the surface and will include objects such as trees and buildings that sit on top of the terrain. Geoid altitude values are currently not supported by the Maps API.
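
As a sketch (again assuming myMap is your MapControl; the coordinates are illustrative), a pushpin pinned to the surface might be created like this:

// A pushpin with altitude 0 relative to Surface sits on top of whatever is
// there, buildings included.
var position = new BasicGeoposition
{
    Latitude = 43.7731,    // Cattedrale di Santa Maria del Fiore (illustrative)
    Longitude = 11.2560,
    Altitude = 0
};

var poi = new MapIcon
{
    Location = new Geopoint(position, AltitudeReferenceSystem.Surface),
    NormalizedAnchorPoint = new Point(0.5, 1.0),  // anchor the tip of the pin
    Title = "Duomo"
};

myMap.MapElements.Add(poi);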

Cattedrale di Santa Maria del Fiore, pushpin using zero surface altitude

Using different map projections

The map engine now supports both a standard Web Mercator projection and a 3D globe projection. You specify the projection you want through the MapControl’s new MapProjection property.
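
For example (assuming myMap is your MapControl):

// Switch from the default Web Mercator projection to the 3D globe and back.
myMap.MapProjection = MapProjection.Globe;
// myMap.MapProjection = MapProjection.WebMercator;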

MapBillboard

Along with the 3D enhancements to the existing MapElements, we added a new MapElement called MapBillboard. This new API can be used to display images or signage on the 3D map. Similar to the MapIcon API, MapBillboard displays an image at a specific location on the map. However, it behaves differently in that it acts as if it were part of the 3D scene: the image scales with the rest of the 3D scene as the camera zooms and pans.
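
Here is a minimal sketch; the image URI and coordinates are placeholders, and myMap is assumed to be your MapControl.

// Create a billboard scaled relative to the control's current camera, so it
// grows and shrinks with the rest of the 3D scene as the user zooms.
var billboard = new MapBillboard(myMap.ActualCamera)
{
    Location = new Geopoint(new BasicGeoposition { Latitude = 36.1126, Longitude = -115.1767 }),
    NormalizedAnchorPoint = new Point(0.5, 1.0),
    Image = RandomAccessStreamReference.CreateFromUri(new Uri("ms-appx:///Assets/sign.png"))
};

myMap.MapElements.Add(billboard);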

Offline Maps

In the past, developers had to direct users to the Settings app to download offline maps. To streamline these scenarios, we added the OfflineMapPackage API, which allows you to find offline map packages for a given area (Geopoint, GeoboundingBox, etc.). You can check and listen for download status on these packages, as well as trigger a download, without the user having to leave your app.
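
A rough sketch of the flow based on the OfflineMapPackage API surface follows; the coordinates are illustrative, and the MapControl sample linked below shows the complete pattern.

// Find offline map packages covering a location and start downloading any
// that are not on the device yet.
var seattle = new Geopoint(new BasicGeoposition { Latitude = 47.6062, Longitude = -122.3321 });

var query = await OfflineMapPackage.FindPackagesAsync(seattle);
if (query.Status == OfflineMapPackageQueryStatus.Success)
{
    foreach (OfflineMapPackage package in query.Packages)
    {
        if (package.Status == OfflineMapPackageStatus.NotDownloaded)
        {
            // Listen for status changes so your UI can reflect download progress.
            package.StatusChanged += (sender, args) => { /* update UI with sender.Status */ };
            await package.RequestStartDownloadAsync();
        }
    }
}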

https://github.com/Microsoft/Windows-universal-samples/tree/dev/Samples/MapControl

Other changes

  • 3D textured landmarks: 3D textured buildings are missing with this update, but we are working on getting these back.

API Updates and Additions

For a list of the APIs added since the Windows 10 Anniversary Update, please see the following resources:

For more details on all new APIs go to MSDN.

Complete Anatomy: Award-Winning App Comes to Windows Store

3D4Medical has just completed the port of its award-winning flagship product, Complete Anatomy, to the Windows Store using the Windows Bridge for iOS. The Windows Bridge is an open-source environment for Objective-C that provides support for third-party APIs. The Windows Bridge was an important tool for 3D4Medical’s development team, bringing the high-resolution 3D models and smooth touch interface that its users were already familiar with to the world of Windows PC and Surface users.

3D4Medical created a Universal Windows Platform (UWP) app in response to the huge demand from its core audience of medical students and clinical professionals, many of whom use Windows devices. The app supports multiple Windows form-factors and device configurations. The interface can be manipulated with either a mouse or touch gestures. The experience particularly shines on Surfaces and other pen capable devices, where you can take advantage of Windows Ink support for smooth drawing and annotation.

Smooth interactions

As you can see in the example below, the UWP version of Complete Anatomy provides a rich user experience with smooth transitions and an elegant menu system that provides access to updatable quizzes and anatomy tutorials that are shared across iOS and Windows devices. With your finger, a Windows Pen or a mouse, you can quickly rotate skeletons to view points of interest from multiple perspectives. The high-definition models also scale smoothly as you zoom and pan over points of articulation and various anatomical systems.

Muscular, arterial, lymphatic, nervous, respiratory and digestive systems can be toggled on and off, annotated, labeled, drawn on and even saved for later reference. The app currently leverages Windows Ink for convenient pen interactions.

Wrapping Up

In porting Complete Anatomy to the Windows Store, 3D4Medical demonstrates that the Windows Bridge can help developers bring feature-rich, award-winning design to PCs and Surfaces in a short time span. This app shines on all Surface devices, whether it’s the Surface Pro line, the Surface Book or even an 84″ Surface Hub. Complete Anatomy brings over not only all the high-fidelity models and natural interactions already developed, but also extends the product with Windows capabilities like Windows Ink.

To learn more about cross-platform development, please refer to the documentation and articles linked below:

Building the Terminator Vision HUD in HoloLens

James Cameron’s 1984 film The Terminator introduced many science-fiction idioms we now take for granted. One of the most persistent is the thermal head-up display (HUD) shot that allows the audience to see the world through the eyes of Arnold Schwarzenegger’s T-800 character. In design circles, it is one of the classic user interfaces that fans frequently try to recreate, both as a learning tool and as a challenge.

In today’s post, you’ll learn how to recreate this iconic interface for the HoloLens. To sweeten the task, you’ll also hook up this interface to Microsoft Cognitive Services to perform an analysis of objects in the room, face detection and even some Optical Character Recognition (OCR).

While on the surface this exercise is intended to just be fun, there is a deeper level. Today, most computing is done in 2D. We sit fixed at our desks and stare at rectangular screens. All of our input devices, our furniture and even our office spaces are designed to help us work around 2D computing. All of this will change over the next decade.

Modern computing will eventually be overtaken by both 3D interfaces and 1-dimensional interfaces. 3D interfaces are the next generation of mixed reality devices that we are all so excited about. 1D interfaces, driven by advances in AI research, are overtaking our standard forms of computing more quietly, but just as certainly.

By speaking or looking in a certain direction, we provide inputs to AI systems in the cloud that can quickly analyze our world and provide useful information. When 1D and 3D are combined—as you are going to do in this walkthrough—a profoundly new type of experience is created that may one day lead to virtual personal assistants that will help us to navigate our world and our lives.

The first step happens to be figuring out how to recreate the T-800 thermal HUD display.

Recreating the UI

Start by creating a new 3D project in Unity and call it “Terminator Vision.” Create a new scene called “main.” Add the HoloToolkit unity package to your app. You can download the package from the HoloToolkit project’s GitHub repository. This guide uses HoloToolkit-Unity-v1.5.5.0.unitypackage. In the Unity IDE, select the Assets tab. Then click on Import Package -> Custom Package and find the download location of the HoloTookit to import it into the scene. In the menu for your Unity IDE, click on HoloToolkit -> Configure to set up your project to target HoloLens.

Once your project and your scene are properly configured, the first thing to add is a Canvas object to the scene to use as a surface to write on. In the hierarchy window, right-click on your “main” scene and select GameObject -> UI -> Canvas from the context menu to add it. Name your Canvas “HUD.”

The HUD also needs some text, so the next step is to add a few text regions to the HUD. In the hierarchy view, right-click on your HUD and add four Text objects by selecting UI -> Text. Call them BottomCenterText, MiddleRightText, MiddleLeftText and MiddleCenterText. Add some text to help you match the UI to the UI from the Terminator movie. For the MiddleRightText add:

SCAN MODE 43984

SIZE ASSESSMENT

ASSESSMENT COMPLETE

 

FIT PROBABILITY 0.99

 

RESET TO ACQUISITION

MODE SPEECH LEVEL 78

PRIORITY OVERRIDE

DEFENSE SYSTEMS SET

ACTIVE STATUS

LEVEL 2347923 MAX

For the MiddleLeftText object, add:

ANALYSIS:

***************

234654 453 38

654334 450 16

245261 856 26

453665 766 46

382856 863 09

356878 544 04

664217 985 89

For the BottomCenterText, just write “MATCH.” In the scene panel, adjust these Text objects around your HUD until they match with screenshots from the Terminator movie. MiddleCenterText can be left blank for now. You’re going to use it later for surfacing debug messages.

Getting the fonts and colors right is also important – and there are lots of online discussions around identifying exactly what these are. Most of the text in the HUD is probably Helvetica. By default, Unity in Windows assigns Arial, which is close enough. Set the font color to an off-white (236, 236, 236, 255), the font style to bold, and the font size to 20.

The font used for the “MATCH” caption at the bottom of the HUD is apparently known as Heinlein. It was also used for the movie titles. Since this font isn’t easy to find, you can use another font created to emulate it, called Modern Vision, which you can find by searching for it on the internet. To use this font in your project, create a new folder called Fonts under your Assets folder. Download the custom font you want to use and drag the TTF file into your Fonts folder. Once this is done, you can simply drag your custom font into the Font field of BottomCenterText or click on the target symbol next to the value field for the font to bring up a selection window. Also, increase the font size for “MATCH” to 32, since this text is a bit bigger than the other text in the HUD.

In the screenshots, the word “MATCH” has a white square placed to its right. To emulate this square, create a new InputField (UI -> Input Field) under the HUD object and name it “Square.” Remove the default text, resize it and position it until it matches the screenshots.

Locking the HUD into place

By default, the Canvas will be locked to your world space. You want it to be locked to the screen, however, as it is in the Terminator movies.

To configure a camera-locked view, select the Canvas and examine its properties in the Inspector window. Go to the Render Mode field of your HUD Canvas and select Screen Space – Camera in the drop down menu. Next, drag the Main Camera from your hierarchy view into the Render Camera field of the Canvas. This tells the canvas which camera perspective it is locked to.

The Plane Distance for your HUD is initially set to one meter. This is how far away the HUD will be from your face in the Terminator Vision mixed reality app. Because HoloLens is stereoscopic, adjusting the view for each eye, this is actually a bit close for comfort. The current focal distance for HoloLens is two meters, so we should set the plane distance at least that far away.

For convenience, set Plane Distance to 100. All of the content associated with your HUD object will automatically scale so it fills up the same amount of your visual field.

It should be noted that locking visual content to the camera, known as head-locking, is generally discouraged in mixed reality design because it can cause visual discomfort. Instead, using body-locked content that tags along with the player is the recommended way to create mixed reality HUDs and menus. For the sake of verisimilitude, however, you’re going to break that rule this time.

La vie en rose

Terminator view is supposed to use heat vision. It places a red hue on everything in the scene. In order to create this effect, you are going to play a bit with shaders.

A shader is a highly optimized algorithm that you apply to an image to change it. If you’ve ever worked with any sort of photo-imaging software, then you are already familiar with shader effects like blurring. To create the heat vision colorization effect, you would configure a shader that adds a transparent red distortion to your scene.

If this were a virtual reality experience, in which the world is occluded, you would apply your shader to the camera using the RenderWithShader method. This method takes a shader and applies it to any game object you look at. In a holographic experience, however, this wouldn’t work since you also want to apply the distortion to real-life objects.

In the Unity toolbar, select Assets -> Create -> Material to make a new material object. In the Shader field, click on the drop-down menu and find HoloToolkit -> Lambertian Configurable Transparent. The shaders that come with the HoloToolkit are typically much more performant in HoloLens apps and should be preferred. The Lambertian Configurable Transparent shader will let you select a red to apply; (200, 43, 38) seems to work well, but you should choose the color values that look good to you.

Add a new plane (3D Object -> Plane) to your HUD object and call it “Thermal.” Then drag your new material with the configured Lambertian shader onto the Thermal plane. Set the Rotation of your plane to 270 and set the Scale to 100, 1, 100 so it fills up the view.

Finally, because you don’t want the red colorization to affect your text, set the Z position of each of your Text objects to -10. This will pull the text out in front of your HUD a little so it stands out from the heat vision effect.

Deploy your project to a device or the emulator to see how your Terminator Vision is looking.

Making the text dynamic

To hook up the HUD to Cognitive Services, first orchestrate a way to make the text dynamic. Select your HUD object. Then, in the Inspector window, click on Add Component -> New Script and name your script “Hud.”

Double-click Hud.cs to edit your script in Visual Studio. At the top of your script, create four public fields that will hold references to the Text objects in your project. Save your changes.


    public Text InfoPanel;
    public Text AnalysisPanel;
    public Text ThreatAssessmentPanel;
    public Text DiagnosticPanel;

If you look at the Hud component in the Inspector, you should now see four new fields that you can set. Drag the HUD Text objects into these fields, like so.

In the Start method, add some default text so you know the dynamic text is working.


    void Start()
    {
        AnalysisPanel.text = "ANALYSIS:\n**************\ntest\ntest\ntest";
        ThreatAssessmentPanel.text = "SCAN MODE XXXXX\nINITIALIZE";
        InfoPanel.text = "CONNECTING";
        //...
    }

When you deploy and run the Terminator Vision app, the default text should be overwritten with the new text you assign in Start. Now set up a System.Threading.Timer to determine how often you will scan the room for analysis. The Timer class measures time in milliseconds. The first parameter you pass to it is a callback method. In the code shown below, you will call the Tick method every 30 seconds. The Tick method, in turn, will call a new method named AnalyzeScene, which will be responsible for taking a photo of whatever the Terminator sees in front of him using the built-in color camera, known as the locatable camera, and sending it to Cognitive Services for further analysis.


    System.Threading.Timer _timer;
    void Start()
    {
        //...

        int secondsInterval = 30;
        _timer = new System.Threading.Timer(Tick, null, 0, secondsInterval * 1000);

    }

    private void Tick(object state)
    {
        AnalyzeScene();
    }

Unity accesses the locatable camera in the same way it would normally access any webcam. This involves a series of calls to create the photo capture instance, configure it, take a picture and save it to the device. Along the way, you can also add Terminator-style messages to send to the HUD in order to indicate progress.


    void AnalyzeScene()
    {
        InfoPanel.text = "CALCULATION PENDING";
        PhotoCapture.CreateAsync(false, OnPhotoCaptureCreated);
    }

    PhotoCapture _photoCaptureObject = null;
    void OnPhotoCaptureCreated(PhotoCapture captureObject)
    {
        _photoCaptureObject = captureObject;

        Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();

        CameraParameters c = new CameraParameters();
        c.hologramOpacity = 0.0f;
        c.cameraResolutionWidth = cameraResolution.width;
        c.cameraResolutionHeight = cameraResolution.height;
        c.pixelFormat = CapturePixelFormat.BGRA32;

        captureObject.StartPhotoModeAsync(c, OnPhotoModeStarted);
    }

    private void OnPhotoModeStarted(PhotoCapture.PhotoCaptureResult result)
    {
        if (result.success)
        {
            string filename = string.Format(@"terminator_analysis.jpg");
            string filePath = System.IO.Path.Combine(Application.persistentDataPath, filename);
            _photoCaptureObject.TakePhotoAsync(filePath, PhotoCaptureFileOutputFormat.JPG, OnCapturedPhotoToDisk);
        }
        else
        {
            DiagnosticPanel.text = "DIAGNOSTIC\n**************\n\nUnable to start photo mode.";
            InfoPanel.text = "ABORT";
        }
    } 

If the photo is successfully taken and saved, you will grab it, serialize it as an array of bytes and send it to Cognitive Services to retrieve an array of tags that describe the room as well. Finally, you will dispose of the photo capture object.


    void OnCapturedPhotoToDisk(PhotoCapture.PhotoCaptureResult result)
    {
        if (result.success)
        {
            string filename = string.Format(@"terminator_analysis.jpg");
            string filePath = System.IO.Path.Combine(Application.persistentDataPath, filename);

            byte[] image = File.ReadAllBytes(filePath);
            GetTagsAndFaces(image);
            ReadWords(image);
        }
        else
        {
            DiagnosticPanel.text = "DIAGNOSTIC\n**************\n\nFailed to save Photo to disk.";
            InfoPanel.text = "ABORT";
        }
        _photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
    }

    void OnStoppedPhotoMode(PhotoCapture.PhotoCaptureResult result)
    {
        _photoCaptureObject.Dispose();
        _photoCaptureObject = null;
    }

In order to make a REST call, you will need to use the Unity WWW object. You also need to wrap the call in a Unity coroutine in order to make the call non-blocking. You can also get a free Subscription Key to use the Microsoft Cognitive Services APIs just by signing up.


    string _subscriptionKey = "b1e514eYourKeyGoesHere718c5";
    string _computerVisionEndpoint = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Tags,Faces";
    IEnumerator coroutine;

    public void GetTagsAndFaces(byte[] image)
    {
        coroutine = RunComputerVision(image);
        StartCoroutine(coroutine);
    }

    IEnumerator RunComputerVision(byte[] image)
    {
        var headers = new Dictionary<string, string>() {
            { "Ocp-Apim-Subscription-Key", _subscriptionKey },
            { "Content-Type", "application/octet-stream" }
        };

        WWW www = new WWW(_computerVisionEndpoint, image, headers);
        yield return www;

        List<string> tags = new List<string>();
        var jsonResults = www.text;
        var myObject = JsonUtility.FromJson<AnalysisResult>(jsonResults);
        foreach (var tag in myObject.tags)
        {
            tags.Add(tag.name);
        }
        AnalysisPanel.text = "ANALYSIS:\n***************\n\n" + string.Join("\n", tags.ToArray());

        List<string> faces = new List<string>();
        foreach (var face in myObject.faces)
        {
            faces.Add(string.Format("{0} scanned: age {1}.", face.gender, face.age));
        }
        if (faces.Count > 0)
        {
            InfoPanel.text = "MATCH";
        }
        else
        {
            InfoPanel.text = "ACTIVE SPATIAL MAPPING";
        }
        ThreatAssessmentPanel.text = "SCAN MODE 43984\nTHREAT ASSESSMENT\n\n" + string.Join("\n", faces.ToArray());
    }

The Computer Vision tagging feature is a way to detect objects in a photo. It can also be used in an application like this one to do on-the-fly object recognition.

When the JSON data is returned from the call to cognitive services, you can use the JsonUtility to deserialize the data into an object called AnalysisResult, shown below.


    public class AnalysisResult
    {
        public Tag[] tags;
        public Face[] faces;

    }

    [Serializable]
    public class Tag
    {
        public double confidence;
        public string hint;
        public string name;
    }

    [Serializable]
    public class Face
    {
        public int age;
        public FaceRectangle faceRectangle;
        public string gender;
    }

    [Serializable]
    public class FaceRectangle
    {
        public int height;
        public int left;
        public int top;
        public int width;
    }

One thing to be aware of when you use JsonUtility is that it only works with fields and not with properties. If your object classes have getters and setters, JsonUtility won’t know what to do with them.

When you run the app now, it should update the HUD every 30 seconds with information about your room.

To make the app even more functional, you can add OCR capabilities.


string _ocrEndpoint = "https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr";
public void ReadWords(byte[] image)
{
    coroutine = Read(image);
    StartCoroutine(coroutine);
}


IEnumerator Read(byte[] image)
{
    var headers = new Dictionary<string, string>() {
        { "Ocp-Apim-Subscription-Key", _subscriptionKey },
        { "Content-Type", "application/octet-stream" }
    };

    WWW www = new WWW(_ocrEndpoint, image, headers);
    yield return www;

    List<string> words = new List<string>();
    var jsonResults = www.text;
    var myObject = JsonUtility.FromJson<OcrResults>(jsonResults);
    foreach (var region in myObject.regions)
        foreach (var line in region.lines)
            foreach (var word in line.words)
            {
                words.Add(word.text);
            }

    string textToRead = string.Join(" ", words.ToArray());
    if (myObject.language != "unk")
    {
        DiagnosticPanel.text = "(language=" + myObject.language + ")\n" + textToRead;
    }
}
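
The OcrResults classes aren’t shown in the original code; a minimal, field-only set of data classes matching the shape of the OCR JSON (following the same JsonUtility pattern as AnalysisResult above) might look like this:

public class OcrResults
{
    public string language;
    public double textAngle;
    public string orientation;
    public Region[] regions;
}

[Serializable]
public class Region
{
    public string boundingBox;
    public Line[] lines;
}

[Serializable]
public class Line
{
    public string boundingBox;
    public Word[] words;
}

[Serializable]
public class Word
{
    public string boundingBox;
    public string text;
}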

This service will pick up any words it finds and redisplay them for the Terminator.

It will also attempt to determine the original language of any words that it finds, which in turn can be used for further analysis.

Conclusion

In this post, you discovered how to recreate a cool visual effect from an iconic sci-fi movie. You also found out how to call Microsoft Cognitive Services from Unity in order to make a richer recreation.

You can extend the capabilities of the Terminator Vision app even further by taking the text you find through OCR and calling Cognitive Services to translate it into another language using the Translator API. You could then use the Bing Speech API to read the text back to you in both the original language and the translated language. This, however, goes beyond the original goal of recreating the Terminator Vision scenario from the 1984 James Cameron film and starts sliding into the world of personal assistants, which is another topic for another time.

View the source code for Terminator Vision on Github here.

Bringing 3D to everyone through open standards

Earlier this week, at the Microsoft Windows 10 Event, we shared our vision (read more about it from Terry Myerson and Megan Saunders) around 3D for everyone in New York. As part of achieving that vision we are delighted to share that Microsoft is joining the 3D Formats working group at Khronos to collaborate on its GL Transmission Format (glTF).

At Microsoft, we are committed to an open and interoperable 3D content development ecosystem.  As 3D content becomes more pervasive, there is a need for a common, open and interoperable language to describe, edit, and share 3D assets between different applications. glTF fills this need as an expressive and capable open standard.

We look forward to collaborating with the community and our industry partners to help glTF deliver on its objectives and achieve broad support across many devices and applications. To further the openness goal, we will continue our open source contributions, including further development of glTF support in open source frameworks such as BabylonJS.

As the working group starts thinking about the next version, we are especially interested in joining discussions about some of the subjects that have seen the biggest community momentum in the public forums. The Physically Based Rendering (PBR) materials proposal is one of those topics. PBR materials are a flexible way for 3D content creators to specify the rendering characteristics of their surfaces. Industry-standard implementations can ensure that any PBR content will look consistent irrespective of the scene lighting and environment. Additionally, because the PBR material definition is a high-level abstraction that is not tied to any specific platform, 3D assets with PBR materials can be rendered consistently across platforms.

This kind of cross-platform, cross-application power is what will ultimately make glTF truly ubiquitous and Microsoft is proud to be part of this journey.
Forest W. Gouin – Windows Experiences Group
Jean Paoli – Windows Developer Platform

Kevin Gallo gives the developer perspective on today’s Windows 10 Event

Did you see the Microsoft Windows 10 Event this morning?  Satya, Terry, and Panos talked about some of the exciting new features coming in the Windows 10 Creators Update and announced some amazing new additions to our Surface family of devices. If you missed the event, be sure to check it out here.

As a developer, my first question when I see new features or new hardware is “What can I do with that?” We want to take advantage of the latest and coolest platform capabilities to make our apps more useful and engaging.

There were several announcements today that offer exciting opportunities for Windows developers.  Three of these that I want to tell you about are:

  • 3D in Windows 10, along with the first VR headsets capable of mixed reality, coming with the Windows 10 Creators Update.
  • The ability to put the people you care about most at the center of your experience—right where they belong—with Windows MyPeople.
  • Surface Dial, a new input peripheral designed for the creative process that integrates with Windows and is complementary to other input devices like pen. It gives developers the ability to create unique multi-modal experiences that can be customized based on context. The APIs work in both Universal Windows Platform (UWP) and Win32 apps.

Rather than write a long blog post, I decided to go down to our Channel 9 studios and record a video that gives my thoughts and provides what I hope will be a useful developer perspective on today’s announcements. Here’s my conversation with Seth Juarez from Channel 9:

My team and I are working hard to finish the platform work that will fully support the Windows 10 Creators Update, but you can start experimenting with many of the things we talked about today. Windows Insiders can download the latest flight of the SDK and get started right away.

If you want to dig deeper on the Surface Dial, check out the following links:

Stay tuned to this space for more information in the coming weeks as we get closer to the release of the Windows 10 Creators Update. In the meantime, we always love to hear from you and welcome your feedback at the Windows Developer Feedback site.

Sparking learning at YouthSpark summer camps, 75M devices running Windows 10, and a regular cellphone turns into a 3D scanner — Weekend Reading: Aug. 28 edition

YouthSpark, education, summer camps

At age 8, Allyse Nguyen is among the youngest students in the Smart Game Design class in Bellevue, Washington. (Photography by Scott Eklund/Red Box Pictures)

With summer waning, most students are just getting ready to head back to school. But there are some who decided to continue learning over the break, and specifically, to dive into the world of coding. Read on for this story and more from the week at Microsoft, where the phrase “summer slowdown” is an oxymoron.

Around the U.S. and in Canada, children ages 8 and up spent part of their summer attending YouthSpark Summer Camps, held at 76 Microsoft stores. The camps, which will also be offered this fall, teach children how to code, create games, use their creativity and imagination, and learn to think critically. “I like that sometimes coding can be simple, but it can also do so much more,” says Andrew Stephens, 11, an incoming sixth grader.

YouthSpark, education, summer camps

Andrew Stephens, left, with dad Andy Stephens, was among the students who learned about coding at YouthSpark Summer Camps. (Photo courtesy of Andy Stephens)

Meanwhile, 80 teens took part in a day-long STEM exploration event at Microsoft’s Redmond campus, where there was no shortage of big ideas and passion for STEM (science, technology, engineering and math). Microsoft partnered with Seattle nonprofit iUrban Teen for the day of technology immersion, which included a diverse group of speakers from Microsoft, the White House, Yale University and “Grey’s Anatomy.” “It was really cool, seeing how people have all these great ideas for fun and useful things,” said 14-year-old Geno L. White II. “We have the same dreams that they do.”

education, STEM, iUrban Teen STEM

Geno L. White II (left) and Ceon Duncan-Graves check out a ball that was created with a 3D printer at The Microsoft Garage during the Microsoft iUrban Teen STEM Exploration Day. (Photography by Scott Eklund/Red Box Pictures)

75 million devices are now running Windows 10, a stat shared by Yusuf Mehdi, corporate vice president of Marketing for Windows and Devices, on Twitter, along with other tidbits of Windows 10 trivia, such as: Windows 10 is available in 192 countries, virtually every country on the planet; more than 122 years of gameplay have streamed from Xbox One to Windows 10 devices; and in response to “Tell me a joke,” Cortana has told over half-a-million of ‘em since launch.

Yusuf Mehdi, Windows 10

Yusuf Mehdi, corporate vice president of Marketing for Windows and Devices.

A new Microsoft Research project delivers high-quality 3D images in real time, using a regular mobile phone. And it takes about the same effort as snapping a picture or shooting a video. Researchers say the system, called MobileFusion, is better than other methods for 3D scanning with a mobile device because it doesn’t need any extra hardware, or even an Internet connection, to work. That means scientists in remote locations, or hikers deep in the woods, can capture their surroundings using a cellphone, without a Wi-Fi connection. Sweet.

Two inexpensive, Internet-enabled feature phones, the Nokia 222 and Nokia 222 Dual SIM, were announced this week. The phones are designed to connect more people to the Internet, and let them capture and share their photos with others using apps such as GroupMe by Skype, Facebook, Messenger and Twitter. The Nokia 222 and Nokia 222 Dual SIM will be available globally in select markets, starting in September, priced at $37 before local taxes and subsidies.

Nokia 222, feature phones

The Nokia 222 and Nokia 222 Dual SIM.

Cortana took on more workload this week: She’s now available as an app in beta to all Android phone users. The personal digital assistant will also be coming to iOS devices, as was shared in May. The app for Android can do most of the things Cortana does on your PC or on a Windows phone (even tell jokes).

Run for a touchdown, run circles – or both. Get the Xbox One EA Sports Madden NFL 16 Bundle, which includes a 1TB hard drive, a full-game download of Madden NFL 16 and one year of EA Access. It’s now available for $399 from Microsoft and other retailers. And this week’s “App of the Week” is “Running Circles,” a free game that’s new to the Windows Store, and tests players’ timing and reflexes on a constantly changing, spinning and dizzying path.

“Running Circles”

This week we met Wanderson Skrock, a young man who grew up in a rough neighborhood of Brazil and was in jail twice before age 17. However, Skrock turned his life around with technology and now he’s teaching digital literacy classes in Brazil’s correctional institutions and working with Microsoft YouthSpark.

That’s it for this edition of Weekend Reading! We’ll see you next week!

Posted by Suzanne Choney
Microsoft News Center Staff

Navigate the World with Windows 10 Maps for Phone

Hello! I’m Aaron Butcher and I’m the Group Program Manager for the Maps Team in the Windows PC, Phone, and Tablet Group. In the January Windows 10 Technical Preview, Windows Insiders got a chance to try out our new Maps app for the PC. We’re excited that the new Maps app is also available now on phones with the latest build of the Windows 10 Technical Preview (Build 10051).  If you’re a Windows Insider, please give it a try and let us know what you think.

Based on the new Windows Universal app platform, Maps delivers a single, consistent mapping experience across all your Windows devices. Whether you're on your PC in the office or using your Windows phone on the go, Maps offers you the features and tools you need to explore and navigate the world. This includes the best maps, aerial imagery, rich local search data, and voice guided navigation experiences from both Bing Maps and HERE maps, integrated for the first time into a single app for Windows.
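
(A note for developers: other apps can hand a location off to Maps through the bingmaps: URI scheme that the Maps app registers on Windows 10. The short Python sketch below is purely illustrative, not part of the Maps announcement; it assumes a Windows 10 machine with Maps installed and that the "where" parameter carries the search term.)

```python
import os
import urllib.parse


def open_in_maps(search_term: str) -> None:
    """Open the Windows Maps app with a search query (hypothetical sketch).

    Builds a bingmaps: URI and hands it to the Windows shell, which launches
    the app registered for that protocol. Windows-only: os.startfile is not
    available on other platforms.
    """
    uri = "bingmaps:?where=" + urllib.parse.quote(search_term)
    os.startfile(uri)


if __name__ == "__main__":
    open_in_maps("Space Needle, Seattle")
```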

Maps on Windows 10 Technical Preview for phones:

Screenshots: the Maps app on a Windows 10 phone, showing the map view and the hamburger menu.

Maps on Windows 10 Technical Preview for PCs:

Screenshot: the Maps app on a Windows 10 PC.

Here’s a quick overview of what you’ll find in our first Maps preview for the phone.

Maps are great using touch

Our maps are easy to use with a mouse and keyboard on a PC, and even better to control using natural touch gestures on phones or other touch-screen devices. Pinch to zoom in or out, use two fingers to rotate and tap on a label to launch detailed information about the business or landmark. You can also scroll using two fingers to tilt the map for a different perspective. Tap on the Show my Location button to zoom to your current location, or on the Map views button to turn on Bing aerial imagery or live traffic data.

Screenshots: a city view controlled with touch, and the map with aerial imagery and live traffic turned on.

Find the places you’re looking for

Maps for Windows 10 Technical Preview brings you rich local search data from Bing. Tap the Search box and you’ll immediately see a list of our most popular search categories, and with one tap, you can find coffee shops, hotels, shopping, or restaurants quickly and easily.

Screenshot: popular search categories.

Enter the name of a place you are looking for, such as the “Space Needle”, and you’ll see a rich set of details that’s consistent with our search experiences across Windows 10 Technical Preview and Bing web search. Along with the landmark’s location and contact information, you’ll see images and recent customer reviews. You can even make a reservation for a restaurant right from the app.

Screenshots: the Space Needle detail card, and the card scrolled to the overview.
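
Behind a lookup like this sits Bing's location data, which developers can also reach directly through the Bing Maps REST services. The snippet below is a rough, hypothetical sketch (it is not how the Maps app itself is implemented) that geocodes a landmark by name using the REST Locations endpoint; the key is a placeholder you would replace with your own from the Bing Maps Dev Center.

```python
import requests  # third-party: pip install requests

# Placeholder -- obtain your own key from the Bing Maps Dev Center.
BING_MAPS_KEY = "YOUR_BING_MAPS_KEY"


def find_place(query: str) -> dict:
    """Look up a place by name with the Bing Maps REST Locations API (sketch)."""
    resp = requests.get(
        "https://dev.virtualearth.net/REST/v1/Locations",
        params={"query": query, "maxResults": 1, "key": BING_MAPS_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    # The first resource in the first resource set is the best match.
    resource = resp.json()["resourceSets"][0]["resources"][0]
    latitude, longitude = resource["point"]["coordinates"]
    return {"name": resource["name"], "latitude": latitude, "longitude": longitude}


if __name__ == "__main__":
    print(find_place("Space Needle"))
```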

When you’re on the go — let’s say you’re staying at a hotel in an unfamiliar city — you can easily find nearby places to eat, drink, and shop, or even suggested attractions and activities, right from the detail card for your hotel.

Screenshots: nearby search categories on the Edgewater Hotel detail card, and places to eat or drink near the hotel.

Helping you find the places you're looking for, especially when you're on the go or in an unfamiliar place, is one of our most important goals for Maps on the phone. We'll continue to improve these features, and we'd love to hear from you about how we're doing.

Directions and guided navigation

Once you've found the place you're looking for, Maps will be the best app to help get you there! Whether you're driving, taking public transportation, or walking, the app gives you detailed turn-by-turn directions to help you navigate with confidence.

We automatically find the best route for you based on current traffic conditions, and also give you the option to tailor the route to your preferences. For example, I like avoiding toll roads.

Screenshots: driving directions and transit directions to the Museum of Flight.
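
The toll-road preference mentioned above has a direct counterpart in the Bing Maps REST Routes service, which also lets you ask for a route optimized for current traffic. Again, this is a hedged sketch rather than the app's actual code path, and it assumes your own Bing Maps key:

```python
import requests  # third-party: pip install requests

# Placeholder -- obtain your own key from the Bing Maps Dev Center.
BING_MAPS_KEY = "YOUR_BING_MAPS_KEY"


def driving_route(origin: str, destination: str, avoid_tolls: bool = True) -> dict:
    """Request a traffic-aware driving route from the Bing Maps REST Routes API (sketch)."""
    params = {
        "wp.0": origin,                 # first waypoint (start)
        "wp.1": destination,            # last waypoint (end)
        "optimize": "timeWithTraffic",  # pick the fastest route given current traffic
        "key": BING_MAPS_KEY,
    }
    if avoid_tolls:
        params["avoid"] = "tolls"       # route preference: steer clear of toll roads
    resp = requests.get(
        "https://dev.virtualearth.net/REST/v1/Routes/Driving",
        params=params,
        timeout=10,
    )
    resp.raise_for_status()
    route = resp.json()["resourceSets"][0]["resources"][0]
    return {
        # travelDistance is in the API's distance unit (kilometers by default);
        # durations are reported in seconds.
        "distance": route["travelDistance"],
        "duration_minutes": route.get("travelDurationTraffic", route["travelDuration"]) / 60,
    }


if __name__ == "__main__":
    print(driving_route("Seattle, WA", "Museum of Flight, Seattle, WA"))
```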

If you’re using a GPS-enabled device, you’ll also get voice guided driving directions. This rich, hands-free navigation experience is based on the guided navigation features from HERE maps, and includes the popular speed limit warnings and day and night modes to help you get to your destination as safely as possible.

Access Maps anywhere, anytime, any device

In Windows 10, you can always count on having your maps available by storing your maps offline, on both your PC and phone. From the Maps Settings page, choose Download a Map and pick the regions of the world you’d like to store offline. Once downloaded, the maps, local search results, and even voice guided navigation features will work without an internet connection and without using your data plan.

We’re also making it really easy to get to your content from one device to another. Sign in to any Windows device with your Microsoft Account, and your home and work locations, route preferences, and recent searches will automatically roam with you. Simply tag a place as a favorite and it will always be there, in your Favorites list on all your Windows 10 devices. Search for a favorite restaurant on your PC, and the search result will be there on your Windows phone when you’re ready to go.

Improvements to the Maps preview app on the PC

In addition to our first Maps preview for the phone, we’ve made some big improvements to the Maps app you saw in the January Windows 10 Technical Preview for the PC.

3D Cities

If you were a fan of the Bing Maps Preview app for Windows 8.1, you'll be excited to know that we've brought its beautiful, photo-realistic 3D imagery to Maps. From your PC, you can virtually travel the world with breathtaking, high-fidelity views of more than 190 cities and famous landmarks. Just choose Explore in 3D from the left pane and explore your favorite cities from the comfort of your living room.

Screenshot: a 3D map view of Florence.

Streetside panoramas

Virtually stroll down a street with Streetside imagery, exploring places with stunning 360° panoramas and views. Check out a hotel before taking your summer vacation, virtually tour local landmarks and attractions, and even scope out parking near a restaurant ahead of your dinner reservation. Shopping for a new house? Get a feel for the neighborhood before you schedule a viewing with a realtor.

Screenshot: a Streetside view of Pike Place Market.

We have a lot of work left to do in the coming months, and hope you like where we are headed. Consider joining the Windows Insider Program today for early access to Windows 10 Technical Preview, and let us know what you think through the Windows Feedback app – we’d love to hear from you!

With your feedback, you can help us deliver the best maps, local search, and guided navigation experiences for millions of Windows 10 users. Thank you.

Have questions or comments about the Maps app in Windows 10? Head over to the Windows Insider Program forums.

Acer announces 5th generation Intel Core processor support and more for several of its PCs

Acer has announced it is expanding support for 5th generation Intel Core processors to several of its PCs, including the Aspire R 13, Aspire S7, and Aspire Switch 12. 5th generation Intel Core processors bring improved performance, graphics and battery life over the previous generation. Acer is also bringing 802.11ac wireless to these PCs, which is up to three times faster than 802.11n. With 802.11ac you get increased network speed, range, and reliability.

Images: the Acer Aspire R 13 and Acer Aspire S7.

The Aspire R 13 and Aspire Switch 12 are 2-in-1 convertible PCs. Both can transition into five different modes, including tablet, tent, and laptop. The Aspire R 13 has a special hinge – the Acer Ezel Aero hinge – that enables the 13.3-inch display to rotate 180 degrees. The Aspire Switch 12 has a 12.5-inch display and a unique kickstand design with a magnetic, latch-less detachable keyboard. And the Aspire S7 Ultrabook is built super thin with an all-aluminum unibody design. The Aspire R 13 and Aspire S7 featuring 5th generation Intel Core processors and 802.11ac wireless will be available worldwide in January. The Aspire Switch 12, featuring the new Intel Core M processor, will be available in the United States and Canada in early 2015. Exact specifications, prices and availability will vary by region.

Image: the Acer Aspire V Nitro Black Edition.

Acer has also announced it is bringing the Intel RealSense 3D camera to its powerful line of V Nitro Black Edition series PCs. The Intel RealSense 3D camera can understand and respond to natural movement in three dimensions, allowing people to interact with games, open web pages, or navigate apps without touching the PC's keyboard or trackpad. People can also use the Intel RealSense 3D camera for 3D scanning and printing. The V 17 Nitro Black Edition has a 17.3-inch Full HD display and is powered by a 4th generation Intel Core processor, NVIDIA GeForce GTX 860M graphics, and up to 16GB of memory. You also get the option of either a 128GB or 256GB SSD, and it comes with 802.11ac wireless. The V 17 Nitro Black Edition is perfect for any professional or consumer power user (and PC gamer!). Acer V Nitro Black Edition series PCs featuring the Intel RealSense 3D camera will be available worldwide in January, with exact specs, pricing, and availability varying by region.