
Making mixed reality: Meet the mind behind interactive film Free the Night

Created by Nicole McDonald and JauntVR, the full immersive app experience Free the Night is now available on Windows Mixed Reality. Read on to learn how its award-winning director made her childhood dream into a reality.
Recently, I had the pleasure of meeting Nicole McDonald, creator of Windows Mixed Reality immersive app experience Free the Night. But while some might bill the app as an interactive film, it would be an understatement to call Nicole simply a filmmaker.
In the 15 years since she began creating in multimedia, Nicole has worn many hats – from game designer, to NASA collaborator, to marketing campaign manager for some of the world’s biggest names, including American Idol, Cirque du Soleil, and Toyota.
Now, Nicole is channeling her wealth of creative energy towards a new endeavor: the art of storytelling in mixed reality. The latest in a list of interactive films she’s created – many of which have been featured in festivals such as Cannes, Sundance, and SXSW – Free the Night is a collaboration between Nicole and cinematic VR producer Jaunt Studios. Designed exclusively for Microsoft’s Windows Mixed Reality immersive headsets, the experience enables audiences to place stars back into the sky and watch them glitter and swirl as they reclaim the night.

In celebration of Free the Night’s full release last month, I sat down with Nicole to learn more about the initial spark behind her creation.
Nicole McDonald, director and creator of interactive film Free the Night.
What inspired you to create Free the Night?
Nicole: In general, I am deeply inspired by my memories of childhood.
Free the Night was inspired by the most vivid of these: staring up into the night sky with my family. My father loved the sky, and his enthusiasm and wonder pulled us all in. He would point out the constellations, and we’d discuss what could be out “there.” We grew up in a teeny tiny town and the sky was magnificent: there were thousands of stars, hints of the Milky Way, and we once saw a little bit of the aurora.
But sadly, one year, a huge mall was constructed nearby and it washed out almost everything. The amber glow of the parking lot made it impossible to view from our front lawn. It was heartbreaking. I didn’t understand how, or why, someone could do that.
Fast forward to a few years ago, when I was watching a fireworks display and thought how amazing it would be to have the same sky that my father introduced to me as the backdrop. I wanted to “blow” out all the lights around me and… voilà, Free the Night was born.

You come from a background in advertising, filmmaking, and gaming. What led you to mixed reality as the medium for this story?
Nicole: I’ve always been interested in the marriage of narrative and technology, in understanding how innovative tools can enhance traditional storytelling. Mixed reality, to me, is placing the audience physically in the narrative, where they can participate and be moved emotionally by a story. It’s such a wonderful time of exploration, and we now have an audience that is curious, if not craving, these new media and experiences.
The setting of Free the Night, inspired by the creator’s hometown.
What can audiences expect from Free the Night when they put on their headsets?
Nicole: In Free the Night, we become giants in a mountainous landscape, tasked with liberating the stars into the night sky. We need to be able to get low enough to the ground to extinguish the manmade lights of the city and reach high enough to place the stars back into the sky. This requires us to interact with the entire 360-degree virtual space, a range of freedom only afforded in mixed reality. With Windows Mixed Reality immersive headsets, our audience has the seamless tracking and full “six degrees of freedom” to actively engage in the narrative and explore all around them.
The story begins with the silhouette of a girl releasing lights back into the night sky.
You mentioned that Free the Night encourages us to use our full 360-degree space with “six degrees of freedom” (6DOF). Can you tell me more about 6DOF for those of us who may be unfamiliar with it, and why you chose to incorporate it in your design?
Nicole: 6DOF is the freedom of movement in 3D space. Six axes of movement allow us to interact with objects that are as low as the ground and as high as we can lift our hands – left and right, back and forth… all around us.
With immersive mixed reality, there’s something so delightful about playing with scale and exploring a narrative in 3D space. It challenges us to have new perspective and see and play in ways we haven’t before.
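For readers new to the idea, the six degrees Nicole describes break down into three translational axes and three rotational ones. Here is a minimal, purely illustrative sketch – plain Python, not the app’s actual Unity code, with made-up numbers:

```python
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    # Three translational degrees of freedom (meters)
    x: float
    y: float
    z: float
    # Three rotational degrees of freedom (degrees)
    pitch: float
    yaw: float
    roll: float

    def translate(self, dx, dy, dz):
        """Return a new pose moved along the three translation axes."""
        return Pose6DOF(self.x + dx, self.y + dy, self.z + dz,
                        self.pitch, self.yaw, self.roll)

# Crouching to extinguish a city light, then reaching up to place a star:
standing = Pose6DOF(0.0, 1.7, 0.0, 0.0, 0.0, 0.0)
crouched = standing.translate(0.0, -1.0, 0.0)   # move down along y
reaching = standing.translate(0.0, 0.6, 0.3)    # up and forward
print(round(crouched.y, 2), round(reaching.y, 2))  # 0.7 2.3
```

The point is simply that a 6DOF headset tracks all six values continuously, so crouching toward the ground or reaching overhead moves you through the virtual scene exactly as it would the real one.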
Using motion controllers, audiences can swirl and play with the embers, freeing them into the night sky.
Free the Night has been tagged as both “cinematic” and “interactive” – two qualities that, in conjunction, set the experience apart from your typical film or game. What inspired you to experiment with mixing the two media?
Nicole: Honestly, it started when I was nine years old, in a basic computer coding class. We all worked on monochromatic monitors – the instructor explained that computers would someday show us more colors. Of course, he was describing monitors that would have RGB color profiles, but, at the time, I naively thought he meant that computers would allow us to see more colors than are in our current rainbow. My mind went wild. I made up stories of who and what lived in these unseen colors… What could they do that we couldn’t?
Ever since, I’ve been captivated by using technology as a creative tool. I always ask myself if my concepts allow my audience to see more “colors,” more worlds that we’d never see without the innovations of today. I love exploring how we might profit from experiencing and interacting with these kinds of stories and, most importantly, how can we add joy and wonder to our audience’s lives.
Once freed, the stars sparkle and burst in impressive displays.
Did you have an ideal audience in mind when you decided to create this experience?
Nicole: Ideally, Free the Night is for everyone. It’s a universal story for human beings, and because it’s for all, I especially wanted it to be an invitation to those who haven’t necessarily found their space, or connection with content, in mixed reality. People sometimes think that mixed reality is just for gaming or 360-degree passive experiences, but I want my projects to be all-embracing interactive experiences – ones that entice everyone to participate in, rather than be intimidated by, the medium.

What, for you, was most intimidating about creating Free the Night?
Nicole: One of the best and hardest parts of working on interactive experiences is the ice-cream headache you get when you have to find a solution but there is no playbook available. There are so many limitations and unknowns; as creators in this nascent space, we have to be a bit like MacGyver.
For Free the Night, it was how to create the magic of extinguishing light embers in mixed reality. Traditionally, you can only have a small amount of particles on screen at any given time in VR. But we needed thousands, which would have personality and be responsive when we interacted with them. My dev team, led by long-time collaborator KC Austin, figured out how to write a system with compute buffers that allowed the experience to sing.
More generally, the biggest challenge in creating with mixed reality is disrupting our preconceived notions of the medium. People are so often intimidated by the experience they’re about to enter, or nervous that they’re going to do something wrong. We’re so conditioned to approach this medium as a game, rather than something more. My challenge is getting people to settle into the narrative instead of trying to get the highest score.
Future of StoryTelling attendees were among the first to demo the experience in October.
Free the Night recently debuted as a demo at the annual Future of StoryTelling Summit in New York. (Congratulations!) After months of working round-the-clock, what was it like finally seeing your demo premiere?
Nicole: It was pinch-worthy. We’d had our heads down working for a few months, so to see the general public instilled with the awe I’d hoped for was a dream come true.
My favorite response was that of a peer who tried Free the Night for the first time at the Summit. As she took off the headset, her eyes welled with tears. She told me how she had been transported to her own childhood and was filled with the same wonder I’d felt way back when. It’s because of reactions like this that I can’t wait for everyone to experience the full project this month.
A woman reaches for the stars at The Future of StoryTelling demo.
If there’s one idea or impression you hope audiences take away from the full version of Free the Night, what would it be?
Nicole: My hope is that people feel more connected to the world when their headsets come off – that by experiencing Free the Night, they are enticed to look up a little bit more often – and that they truly understand that we are the magic and Earth is Eden.
One of many flower-shaped constellations awaiting audiences in the experience.
We’ve talked a lot about your journey in making this childhood dream into a reality. Looking back, what advice do you have for someone aspiring to create an immersive experience as aesthetically and emotionally inspiring as Free the Night?
Nicole: Oh, my… Well, first, for those who want to create in the space, it’s extremely difficult and thus can be extremely fortifying. Before you begin, ask yourself what you want your audience to feel or take away from the experience. Try to understand how the idea will blossom in the medium; take advantage of what you can do inside mixed reality that you can’t do in traditional linear 2D displays. Don’t be afraid of limitations; you can execute the essence of ideas in many ways. Always, always storyboard, create animatics, and test and play.
For those new to MR, please, don’t be intimidated. There is no right or wrong way to create an interactive experience. Get comfortable; look around at your environment before trying to rush through it. Approach everything with the wonderment you had as a child.
Free the Night promises 360 degrees of breathtaking scenes like the one above.
One last question, Nicole, before I let you go… Now that Free the Night is officially on Windows Mixed Reality, what’s next?
Nicole: Surfing and yoga… haha. But really, I’ll be working on the full experience for HUE, an interactive film about a man named Hue who has lost his ability to see color. In this touch-based tale, he reacts to our presence and touch like a living, breathing being. We help Hue find his “full spectrum” by helping him see the everyday joy around him and his own potential to be wonderful.
Concept art for Nicole’s next interactive film, “HUE.”
Thanks so much for taking the time to share your story, Nicole, and congratulations, again, on creating such a breathtaking experience. I can’t wait to try “Free the Night” again this week!
Free the Night is available from the Microsoft Store. Download it for free on your PC and plug in your headset to experience the magic of Windows Mixed Reality… and stay tuned for our next installment of Making mixed reality.

Making mixed reality: a conversation with Alexandros Sigaras and Sophia Roshal

Dr. Olivier Elemento (left), alongside his Ph.D. students Neil Madhukar and Katie Gayvert, analyzes medical network data (photo courtesy of the Englander Institute for Precision Medicine)
Welcome! This is Making mixed reality, a series celebrating the passionate community creating apps and experiences with Windows Mixed Reality. Here, developers, designers, artists (and more!) share how and why they got started, as well as their latest tips. We hope this series inspires you to join the community and get building!
Meeting Alexandros Sigaras and Sophia Roshal was a lot like mixed reality: a digital-physical fusion. It first happened through a flurry of tweets and emails as Alexandros, a senior research associate at Weill Cornell Medicine (WCM), and Sophia, a WCM software engineer, rapidly prototyped a Microsoft HoloLens application to advance the Englander Institute for Precision Medicine’s “cancer moonshot,” a promise to empower better and faster cancer research, data collaboration, and accessible care. Soon after, I was lucky enough to demo their project in person. It’s now in the Windows Store as Holo Graph, an app that enables researchers to bring their own network data into the real world to explore, manipulate, and collaborate with other researchers in real time, whether they’re in the same room or on the other side of the planet.
Find out what makes this team tick, and how they make big data approachable with Windows Mixed Reality.
Sophia Roshal looks at a graph of medical data (photo courtesy of the Englander Institute for Precision Medicine)
Why HoloLens, and why Windows Mixed Reality?
Sophia: It’s the logical next step. On a PC, you’re constantly switching from window to window. With HoloLens, you stay in one place. You can just point at something; you don’t have to use your mouse. It’s just so much more of a natural environment, which is great.
The best part of mixed reality for me is seeing other people try it for the first time. They are surprised by how well interactions between the real world and holograms work, and are excited to see new updates. The most exciting part is seeing the endless possibilities of mixed reality. From games to medical research, there are still many applications of mixed reality to explore.
Alexandros: One of the key questions we get every single time we show HoloLens to someone who is already an avid developer is, “Why HoloLens, and not a 2D screen? Why does this revolutionize our work?” The key answer behind this is simplicity, connecting these dots. The amount of high-quality data that you can parse through with holograms is significantly more than the amount of data that you could create in a table and fuse together in your brain! Tangibility and collaboration are the biggest improvements. It’s like saying the mouse and the keyboard are absolutely great, but phone touch screens are a better user interface. We treat HoloLens as a technology that allows us to go to a higher level, make things more tangible, and remove the challenges of making connections in your brain because you actually see and manipulate them.

Using @Hololens for realtime collaborative & interactive visualization on metabolomic networks @RoshalSophia @ElementoLab @ksuhre pic.twitter.com/JP60UH8Wrs
— Alex Sigaras (@AlexSigaras) August 14, 2017

Who uses Holo Graph?
Alexandros: In a nutshell, the end users for Holo Graph are computational biologists, clinicians, and oncologists. Instead of looking at “big data” in a two-dimensional structure, they immerse themselves and explore and focus on their areas of interest in 3D. There are two scenarios that we currently use with Holo Graph. One is for cancer research and genomics, and the other is metabolomics.
For cancer research, it’s for drug discovery. We want to find how specific drugs relate to specific genes. With our app, I can upload my network that has all of this correlating information, and I can explore it, manipulating and changing the ways I look at data. If I click on a hologram of a drug, I’ll see the drug’s most up-to-date information directly from the Food and Drug Administration (FDA) – that’s pulled in through an API. If I click on a gene, gene cards will tell me more about that specific gene.
The other use case that we’re doing with our colleague Dr. Karsten Suhre from Weill Cornell Medicine in Qatar is metabolomics. Dr. Karsten Suhre has identified connections between metabolites, genes, and diseases such as Crohn’s disease and diabetes. Using Holo Graph he can browse and identify unexpected paths in the network. One of the latest videos that we shared was collaborative sharing and manipulation of this very network on Crohn’s disease to identify if there are unexpected connections to other diseases.
The Holo Graph team comes from the Englander Institute for Precision Medicine at Weill Cornell Medicine and Weill Cornell Medicine-Qatar. From left: Sophia Roshal, Dr. Karsten Suhre, Dr. Olivier Elemento, Dr. Andrea Sboner, and Alexandros Sigaras. (Photo courtesy of the Englander Institute for Precision Medicine)
When did you get started building and designing for Windows Mixed Reality? Any tips for others just beginning? 
Alexandros: We became interested in the platform about two years ago and were delighted to be included in the first wave of HoloLens devices that shipped. Whether on an online forum or at a meetup, there are a lot of talented people happy to help you get there and share their experience. Don’t try to reinvent the wheel. You will be surprised how many questions have already been answered before! As far as tips go, download and try out apps from the Windows Store and make sure to reach out to the community with any questions.
Sophia: Fragments was the biggest inspiration for me because the virtual characters sit on actual chairs. We recently updated our app to include avatars and do the same thing, and it was really cool to see that! Our avatars follow the person’s movement, and we also use spatial mapping to find the floor, because the only point of reference on HoloLens is the head. There are MixedRealityToolkit scripts that find planes; you can take the lowest one as the floor. Then you calculate the height between the head and the floor, and you can map the avatar from there.
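The floor-finding logic Sophia describes can be sketched in a few lines (illustrative Python standing in for the MixedRealityToolkit plane results; the names and values are made up):

```python
# Heights (y, in meters) of the horizontal planes that spatial mapping
# detected in the room - here: floor, a table, and a shelf.
detected_plane_heights = [0.02, 0.75, 1.10]

# y position of the HoloLens (the head) in the same coordinate space.
head_height = 1.65

# The lowest horizontal plane is taken to be the floor...
floor_y = min(detected_plane_heights)

# ...and the head-to-floor distance gives the user's height,
# which is enough to scale and place a full-body avatar.
user_height = head_height - floor_y
print(floor_y, round(user_height, 2))  # 0.02 1.63
```

Taking the minimum is a reasonable heuristic because any detected plane below the floor would be a mapping error; a production app would also filter out tiny or tilted planes before choosing.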
Alexandros: The Holograms 240 tutorial, with avatars, and Mixed Reality 250, with sharing across devices, are excellent examples of the capabilities here.
Sophia: For someone just beginning, Mixed Reality Academy is by far the easiest way to start building apps. MixedRealityToolkit is the main tool I use. I write my own scripts most of the time, but if you’re just starting, you have to get it!
What inspires you?
Sophia: Impact on our patients’ lives. Seeing your code make a positive impact on the life of someone battling cancer is one of the most rewarding experiences ever. The Englander Institute for Precision Medicine is using cutting-edge technology to go through a sea of data and provide our care team and their patients with better treatment options. We believe that AI and devices such as HoloLens are just beginning to show their true potential, and we look forward to what’s yet to come.
Alexandros: And it’s all about the power of people. The headset doesn’t save someone’s life; the clinician does. But mixed reality helps them see the patterns and get there. With HoloLens we want to answer, “If I were to show you this before you made the call, would that change anything? Would it make your decision and response faster? Would it give you more data?” And every person that we ask nods their head and says, “Yes, it’s right there. It’s almost like I can touch it.” Clinicians are used to reviewing genomic reports that can span hundreds of pages, requiring significant time and effort. This doesn’t have to be the case, though. HoloLens can act as a catalyst, significantly reducing review time and making an impact at scale when combined with other tools we also work on at the Institute, such as AI, machine learning, and deep learning.
Holo Graph can help researchers identify patterns in networks (photo courtesy of The Englander Institute for Precision Medicine)
How are you getting data into your application? Any tips for those who want to do data visualization with HoloLens?
Sophia: The easiest way to load dynamic data into an application is through a cloud-integration app such as OneDrive or Dropbox. When you share data across the network with other users, you need to consider secure transfer and adopt standard formats. Holo Graph currently supports .csv and XML/GraphML formats on OneDrive. We tend to share data using JSON.
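As a rough illustration of that pipeline – load a .csv edge list, derive the node set, and serialize the network as JSON for sharing – here is a hedged sketch (the columns are invented for the example, not Holo Graph’s actual schema):

```python
import csv
import io
import json

# A tiny drug-gene edge list in the kind of .csv a network app might load.
raw = """source,target,weight
DrugA,GENE1,0.9
DrugA,GENE2,0.4
DrugB,GENE2,0.7
"""

# Parse the edge list and derive the set of nodes from it.
edges = list(csv.DictReader(io.StringIO(raw)))
nodes = sorted({e["source"] for e in edges} | {e["target"] for e in edges})

# Serialize the whole network as JSON, a standard format for
# sharing the graph with other users across the network.
payload = json.dumps({"nodes": nodes, "edges": edges})
print(len(nodes), len(edges))  # 4 3
```

Using a text-based standard format like this keeps the data readable on any client, though as Sophia notes, real patient data would also need a secure transfer channel.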
OneDrive loading has been a great surprise. We used to add files to the backend. With OneDrive, now anyone who wants to can load their data into the experience.
Alexandros: Data and data privacy are of utmost importance to the Institute. The real value of Holo Graph is not just its looks; it’s about empowering researchers to get their real data in securely. As far as visualizing the data goes, my tip is to enable your users to break out of the 2D window and put their data in their environment, on their terms.
I’m ending with a favorite quote from our conversation. Spoiler alert: mixed reality’s got game!
Alexandros: The way I explain mixed reality is this: Imagine you have a virtual basketball. If you’re throwing it onto the real ground, it bounces off because it knows where the ground is, and there’s “friction.” You can repeat this 100 times and it would happen the same way – you can expect it. It’s literally bringing digital content into real life, allowing you to bend the rules.
Sophia and Alexandros are seriously inspiring. You can connect with Sophia and Alex on Twitter @RoshalSophia and @AlexSigaras.
Want to get started #MakingMR? You can always find code examples, design guides, documentation, and more at Windows Mixed Reality Dev Center. Want more? Check out mixed reality design insights on Microsoft Design Medium. Inspiration abounds!

Making mixed reality: a conversation with Lucas Rizzotto

I first met Lucas Rizzotto at a Microsoft HoloLens hackathon last December, where he and his team built a holographic advertising solution. Fast forward to August, and he’s now an award-winning mixed reality creator, technologist, and designer with two HoloLens apps in the Windows Store: MyLab, a chemistry education app, and CyberSnake, a game that makes the most of spatial sound… and holographic hamburgers. Little did I know, Lucas had no idea how to code when he started. Today, he shares how he – and you – can learn and design mixed reality, as well as some tips for spatial sound. Dig in!

Why HoloLens, and why Windows Mixed Reality?
It’s the future! Having the opportunity to work in such an influential industry in its early days is a delightful process – not only is it incredibly creatively challenging, you can really have a say in what digital experiences and computers will look like 10 or 20 years from now – so it’s packed with excitement, but also responsibility. We are designing the primary way most people will experience the world in the future, and the HoloLens is the closest thing we’ve got to that today.
The community of creators around this technology right now is also great – everyone involved in this space is in love with the possibilities and wants to bring their own visions of the future to light. Few things beat working with people whose primary fuel is passion.
How did you get started developing for mixed reality?
I come mostly from a design background and didn’t really know how to code until two years ago – so I started by teaching myself C# and Unity to build the foundation I’d need to make the things I really wanted to make. Having the development knowledge today really helps me understand my creations at a much deeper level, but the best part about it is how it gives me the ability to test crazy ideas really quickly and independently – which is extremely useful in a fast-paced industry like MR.
HoloLens-wise, the HoloLens Slack community is a great place to be – it’s very active and full of people who’ll be more than happy to point you in the right direction, and most people involved in MR are part of the channel. Other than that, the HoloLens forums are also a good resource, especially if you want to ask questions directly to the Microsoft engineering team. Also, YouTube! It has always been my go-to for self-education. It’s how I learned Unity and how I learned a ton of the things I know about the world today. The community of teachers and learners there never ceases to amaze me.
Speaking of design, how do you design in mixed reality? Is anything different?
MR is a different beast that no one has quite figured out yet – but one of the key things I learned is that you need to give up a little bit of control in your UX process and design applications to be more open-ended. We’re working with human senses now, and preferences vary wildly from person to person. We can’t micro-manage every single aspect of the UX like we do on mobile – some users will prefer voice commands, others will prefer hand gestures; some users get visually overwhelmed quickly, while others thrive in the chaos. Creating experiences that can suit the whole spectrum is increasingly essential in the immersive space.
3D user interfaces are also a new challenge and quite a big deal in MR. Most of the UI we see in immersive experiences today (mine included!) is still full of buttons, windows, tabs and reminiscent visual metaphors from our 2D era. Cracking out new 3D metaphors that are visually engaging and more emotionally meaningful is a big part of the design process.
Also, experiment. A lot. Code up interactions that sound silly, and see what they feel like once you perform them. I try to do that even if I’m building a serious enterprise application. Not only is this a great way to find and create wonder in everything you build, it will usually give you a bunch of new creative and design insights that you would never stumble upon otherwise.
An example – recently I was building a prototype for a spiritual sequel to CyberSnake in which the player is a Cybernetic Rhinoceros, and had to decide what the main menu looked like. The traditional way to set it up would be to have a bunch of floating buttons in front of you that you can air tap to select what you want to do – but that’s a bit arbitrary, and you’re a Rhino! You don’t have fingers to air tap. So instead of pressing buttons from a distance, I made it so players are prompted to bash their head against the menu options and break it into a thousand pieces instead.
This interaction fulfills a number of roles: first of all, it’s fun, and people always smile in surprise the first time they destroy the menu. Secondly, it introduces them to a main gameplay element (in the game, players must destroy a number of structures with their head), which serves as practice. Thirdly, it’s in character! It plays into the story the app is trying to tell, and from that moment forward the player is immediately aware of what they are and what their goal is. With one silly idea, we went from having a bland main menu to something new that’s true to the experience and highly emotionally engaging.
HoloLens offers uniquely human inputs like gaze, gesture, and voice. So different from the clicks and taps we know today! Do you have a favorite HoloLens input?
Gazing is highly underestimated and underused – it implies user intention, and there’s so much you can do with it. A healthy combination of voice, hand gestures, and gaze can make experiences incredibly smooth, with contextual menus that pop in and out whenever the user stares at something meaningful. This will be even truer once eye-tracking becomes the standard in the space.
What do you want to see more of, design wise?
I want to be more surprised by the things MR experiences make me do and feel, and to be challenged by them! Most of the stuff being done today is still fairly safe – people seem to be more focused on trying to find ways to make the medium monetizable instead of discovering its true potential first. I live for being surprised, and want to see concepts and interactions that have never crossed my mind and perfectly leverage the device’s strengths in new creative ways.
Describe your process for building an app with Windows Mixed Reality.
I try to have as many playful ideas as I possibly can on a daily basis, and whenever I stumble upon something that seems feasible in the present, I think about it more carefully. I write down the specifics of the concept in excruciating detail so it can go from an abstraction to an actual, buildable product, then set the goals and challenges I’ll have to overcome to make it happen – giving myself a few reality checks along the way to make sure I’m not overestimating my ability to finish it in the desired time span.
I then proceed to build a basic version of the product – just the essential features and the most basic functionality. Here I usually get a sense of whether the idea works at the most basic level and whether it’s something I’d like to continue doing. If it seems promising, then the wild experimentation phase begins. I test out new features, approach the same problem from a variety of angles, try to seize any opportunities for wonder, and make sure that I know the “Why?” behind every single design decision. Keep doing this until you have a solid build to test with others, but without spending too much time on this phase; otherwise projects never get done.
In user testing, you can get a very clear view of what you have to improve, and I pay close attention to the emotional reactions of users. Whenever you see a positive reaction, write it down and see if you can intensify it even further in development. If users show negative emotional reactions, find out what’s wrong and fix it. If they’re neutral through and through, then reevaluate certain visual aspects of your app to find out how you can put a positive emotion on their face. Iterate, polish, finish – and make a release video of it so the whole world can see it. Not everyone has access to an immersive device yet, but most people sure do have access to the internet.

CyberSnake’s audio makes players hyper-aware of where they are in the game. Can you talk about how you approached sound design? After all, spatial sound is part of what makes holograms so convincing.
Sound is as fundamental to the identity of your MR experience as anything else, and this is a relatively new idea in software development (aside from games). Developers tend not to pay much attention to sound because it has been, for the most part, ignored in the design process of websites and mobile applications. But now we’re dealing with sensory computing, and sound needs to be considered as highly as visuals for a great experience.
CyberSnake uses spatial audio in a number of useful ways – whenever the user’s head gets close to their tail, for example, the tail emits an electric buzz that gets louder and louder, signaling the danger and where it’s coming from. Whenever you’re close to a burger, directional audio also reinforces the location of the collectible and where the user should be moving their head. These bits of audio help the user move and give them a new level of spatial awareness.
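That distance-driven buzz can be sketched as a simple falloff function (illustrative Python, not the game’s code; real spatial audio engines layer HRTFs and tuned distance curves on top of an idea like this):

```python
def buzz_volume(head_pos, tail_pos, max_dist=1.5):
    """Volume of the warning buzz: louder as the head nears the tail,
    silent beyond max_dist. A plain linear falloff for illustration."""
    # Euclidean distance between head and tail (x, y, z tuples).
    dist = sum((h - t) ** 2 for h, t in zip(head_pos, tail_pos)) ** 0.5
    return max(0.0, 1.0 - dist / max_dist)

print(buzz_volume((0, 0, 0), (0, 0, 1.5)))   # 0.0: tail out of range
print(buzz_volume((0, 0, 0), (0, 0, 0.15)))  # close to 1.0: danger near
```

Spatializing that volume on the correct side of the head is what tells the player not just that danger is near, but which way to dodge.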
Sound is an amazing way to reinforce behaviour – a general rule of thumb is to always have a sound react to anything the user does, and to make sure that the “personality” of that sound thematically matches the action the user is performing. If you’re approaching sound correctly, the way something looks and moves will be inseparable from the way it sounds. In the case of CyberSnake, we put real effort into making sure the sounds fit the visuals, the music, and the general aesthetic – I think it paid off!
Designing your own sounds may seem like a lot of work, but it really isn’t. Grab a MIDI controller and some virtual instruments and dabble away until you find something that fits the core of what you’re building. Like anything else, it all comes down to experimentation.
What’s next for you?
A number of things! I’m starting my own Mixed Reality Agency in September to continue developing MR projects that are both wondrous and useful at a larger scale. I’m also finishing my Computer Science degree this year and completing a number of immersive art side projects that you’ll soon hear about – some of which you may see at a couple of major film festivals. So stay in touch – good things are coming!
As always, I’m impressed and inspired by Lucas’s work. You can connect with Lucas on Twitter @_LucasRizzotto and his website, where you’ll find nuggets of gold like his vision for mixed reality and AI in education. And maybe even his awesome piano skills.
Learn more about building for Windows Mixed Reality at the Windows Mixed Reality Developer Center.
Lucas is right about spatial sound—it adds so much to an experience—so I asked Joe Kelly, Microsoft Audio Director working on HoloLens, for the best spatial sound how-tos. He suggests using the wealth of resources on Windows Mixed Reality Developer Center. They’re linked below—peruse and use, and share what you make with #MakingMR!
Spatial sound overview
Designing/implementing sounds
Unity implementation
Programming example video (AudioGraph)
GitHub example (XAudio2)