New Tools in Windows Device Portal for the Windows 10 Fall Creators Update

In the Windows 10 Fall Creators Update, Device Portal now offers several new tools from across Windows to help you location-test your UWP app, explore Mixed Reality, build new hardware peripherals and test your app’s new installation pipeline. It’s a little bit of goodness for everyone, and we’re excited to share these with you.
If you’re not familiar with Device Portal, you can check out the blog posts below to see what other tools you can find in Device Portal, or look at the documentation to learn how to enable it.
And as always, all of these tools are backed by a REST API, so you can drive them from a script or client application using the Device Portal Wrapper.
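The Device Portal Wrapper handles this plumbing for you, but the REST surface is plain HTTP, so even a small script can talk to it. As a minimal sketch, here is how a request URL for an imagined location-override call might be built (the `/ext/location/override` route and its `latitude`/`longitude` parameter names are assumptions for illustration; check the Device Portal API reference for the exact routes your build exposes):

```javascript
// Hypothetical device address -- replace with your device's Device Portal URL.
const portal = "http://192.168.1.10:50080";

// Build the request URL for an imagined location-override endpoint.
// The path and parameter names are illustrative assumptions, not confirmed routes.
function buildOverrideUrl(base, latitude, longitude) {
  const url = new URL("/ext/location/override", base);
  url.searchParams.set("latitude", String(latitude));
  url.searchParams.set("longitude", String(longitude));
  return url.toString();
}

// An HTTP PUT to this URL (with Device Portal credentials) would apply the override.
console.log(buildOverrideUrl(portal, 41.9028, 12.4964));
```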
Location Based Testing
Most of us don’t have the travel budgets to test our apps across the world – but pretending to travel is almost as good!  The Location tool in Device Portal lets you easily change the location that Windows reports to apps. By tapping the “Override” check box, you can swap out the device location for whatever you set using the map or lat/long text boxes. Be sure to uncheck the box when you’re done so that your location (and timezone) come back to reality – every vacation must end…

Figure 1: The News app keeping me up to date with local headlines!
This also works for web pages in Microsoft Edge, letting you test your webpages in different parts of the world.
Some notes on what this tool can and cannot do:
This doesn’t change the locale of your PC! So the News app above still saw an EN-US user in the middle of Italy.
You may not see all apps using this location. Some programs don’t use the Windows API to determine location, or use special logic (e.g. your IP address) to determine your location.
This tool marks the PositionSource of the location data as “Default.” Some apps may check for the source and alter their behavior based on it.
Happy travels!
USB Diagnostics
This one goes out to all the hardware folks – if “HLK” or “WDK” sound familiar, you might find this handy. The USB team has updated the USBView tool to work inside Device Portal, so developers working on new hardware can have more tooling at their fingertips.
The USB Devices tool can be a bit tricky to find – head to the hamburger menu in the top right, and go to “Add tools to workspace.” Scroll to the bottom and check the “USB Devices” box, then hit “Add.” And voila – a full view of your system’s USB hubs, controllers and peripherals. The hubs and controllers expand to show individual devices using the + (plus) sign, and clicking the gear will expand to show the item’s properties.

Streaming App Install Debugging
The Windows 10 Creators Update added “streaming installation” for UWP apps, which allows a user to launch an app before it has finished downloading. To make this easy to test, the App Model team has added a Streaming Install Debugger tool to Device Portal. To use it, deploy an app with content groups to the device, then open the Streaming Install Debugger. In it, you’ll be able to edit the states of the content groups so you can test your app’s behavior as a streaming install is simulated and ensure it behaves correctly when content groups are missing.

For more details, check out Andy Liu’s blog posts about the new App Installer and Streaming Install Debugger tools.
Mixed Reality Tooling
One of the bigger splashes in the Fall Creators Update is the addition of Mixed Reality to Windows Desktop. As part of that release, we’re including a suite of tools to help developers build great Mixed Reality apps. Two of these tools may look familiar to HoloLens developers – 3D View and a Framerate counter. There’s also a new app launch option that appears when you have an immersive headset attached to your PC, which lets you launch your app in Mixed Reality.
Frame rate is an important factor in making mixed reality apps comfortable, and it’s important for developers to optimize performance to hit full frame rate on the systems they support. The Frame Rate tool in the Device Portal helps by showing developers both the frame rate of their app and of the system’s compositor.

The 3D View helps when testing your immersive headset’s interactions with the real world, displaying its position as it moves through space.

Finally, what good is tooling if you can’t actually run your app in your immersive headset? Now, when you have an immersive headset attached, the Installed Apps tool will add a button letting you launch the app in the HMD. While fully immersive apps will always run in Mixed Reality, this new button is particularly useful for 2D UWP apps (or apps that switch between 2D and immersive) when you want to test them in Mixed Reality.

As always, if you have ideas for Device Portal that would help you write or debug apps, please leave us a note on our UserVoice or upvote an existing request. If you run into bugs, please file them with us via the Feedback Hub.
Related Posts:
Using Device Portal to view debug logs for UWP
Using the App File Explorer to see your app data

Using your ad units correctly when you have multiple store apps

As we blogged earlier, ad unit performance has a direct correlation with the application category and the users targeted by the application. Having an ad unit associated with multiple store applications leads to ambiguity, which can result in improper ad delivery. This will have an adverse impact on your revenue and user experience. The Children’s Online Privacy Protection Act (COPPA) and other compliance requirements mandate that there is a 1:1 correlation between a store application and an ad unit.
Each ad unit must only be used in a single store application. This requirement covers Universal Windows Platform applications as well as Windows 8.x (WinRT) applications.
We’ve reached out to developers through various means, including multiple notifications inside Dev Center. The ad delivery will soon stop on ad units used across multiple applications, so if you have any such ad units, please update! Setting up new ad units is extremely easy.
Don’t forget the following tips, which can help maximize your in-app ad revenue:
Move to the latest advertising SDKs
Set COPPA settings for your app
Use only IAB standard ad sizes
Set your ad placement appropriately
Use Interstitial Banner as fallback to Interstitial Video

Xbox Live Creators Program Is Now Live!

Back in March, we revealed the Xbox Live Creators Program. Today, we’re excited to announce that any developer can now directly publish their games to Xbox One and Windows 10. We’ve already had some great games published during the preview program (check out the list below!), but there’s always space for more, and it’s time for your game to shine. Microsoft is committed to ensuring that any developer who wants to publish their game on Windows 10 PCs and the Xbox One console family can do so, and the Creators Program enables creators big and small, from around the world, to do just that.
What’s the Creators Program, you ask? The Xbox Live Creators Program allows any developer to directly publish their games – any of their games – to Xbox One consoles and Windows 10 PCs, using the same standard certification process already in place for any other app or game in the Universal Windows Platform ecosystem. In other words, if you have a Dev Center account, then you’re ready to publish your game to Xbox One and Windows 10 PCs.
But it gets better! Using the Creators Program also allows you to implement a number of Xbox Live services directly in your game. Stuff like Gamertag Presence, Xbox Live leaderboards and Connected Storage. Things that make your life as a game creator easier, but also enhance your gamers’ experiences. And you also get to take advantage of killer features like Game Hubs and Clubs, Mixer streaming (and integration for more interactive experiences) and some really awesome accessibility features to make sure your game is available for an even wider audience.
And because you get to use the standard Windows Store certification process, you have the freedom to publish when you’re ready, set pricing the way you like and establish sales and updates that fit your schedule.
Any Creators Program game published on the Windows 10 Store will be listed in the Games category, it’s that simple. On the Xbox One console, we’ve created a special section of the Store called Creators Collection, so that your game can be easily discovered by people looking for something new. We also did this because we know from feedback from players, parents and developers, that the current curated experience on the Xbox One Store is something they love. So, having the Creators collection gives all of us the best of both worlds: A curated store and a fully open marketplace in the Creators Collection.
Does the Creators Program sound good to you? It does to us! And it’s so easy to do. The first step is to build your game using UWP and the Xbox Live SDK, and for that you can use the tools you’re already using – Visual Studio, game engines like Unity, Construct 2, MonoGame and Xenko – combined with a retail Xbox One console and your Dev Center account. You’ll need to grab the free Dev Mode Activation app from the Xbox Store, but then you’re just a few button presses away from converting that retail machine into something ready for your development efforts.
The Dev Center account is the standard one for anyone building apps or games in the Microsoft ecosystem. If you don’t have one yet, it costs as little as $20 as a one-time fee. Then get started on your Xbox Live integration by checking out the Creators Program page and the Xbox Live Creators Program step by step guide.
Creators Program games have access to a large set of Xbox Live services, but not all of them. You’ll be able to implement features such as sign-in and presence, use of your Gamertag, leaderboards, access to your Activity Feed, Game Hubs, Clubs, Party Chat, Game DVR and broadcasting on Mixer.
However, since the Creators Program is an open program as opposed to a managed one, some services are not available to you: Achievements, Gamerscore and internet multiplayer. The good news is that if you want access to these features, we encourage you to apply to the ID@Xbox program, where you’ll get the ability to incorporate them. And of course, there’s a path for games to move from the Creators Program to ID@Xbox during development (or even after they reach the Store) if a developer decides they want to add Gamerscore, Achievements or internet multiplayer later on.
While ID@Xbox was designed for professional game developers who wish to use the full set of Xbox Live features through a full certification process, the Creators Program gives all the other developers a “right-sized” set of Xbox Live services. So whether they’re small studios, hobbyists, makers, teachers and students, or if they’re just learning the ropes – the Creators Program is a simplified way to create and ship games to the Xbox community.
We know that the below set of titles is just the beginning. We’re going to highlight more of the diverse array of Creators Program games that catch our eyes on the Xbox Wire. I hope to see your game listed there one day soon.
Here’s a quick look at the first titles that will be available via the program:
Animal Rivals, Blue Sunset Games: Animal Rivals is an action-packed couch party game for one to four players. Drop into the game and fight for Animalonia’s throne as one of the furry contenders across different mini-games and locations. The game presents a unique art style, mixing cartoonish looks with a satirical approach. (Xbox One, Windows 10)
Block Dropper, Tresiris Games: Block Dropper is a fast-paced, arcade-style 3D platformer. Try not to fall as you guide your character through the challenging single-player mode, or grab a friend to battle head to head in a local multiplayer Block Battle Arena. Tresiris is a small game studio based in Olathe, Kansas, that creates fun and simple games with quality as its top priority. (Xbox One, Windows 10)
Crystal Brawl, Studio Mercato: Gauntlet meets NBA Jam in Crystal Brawl, a 2v2 capture-the-flag local multiplayer game that melds fast action with MOBA-like strategy. Choose from a variety of characters with different abilities, with a notable twist: each character has a unique ability that alters the terrain. Experiment with different character combinations to uncover hidden strategies! Studio Mercato is an independent game studio based in New York City. (Xbox One, Windows 10)
Derelict Fleet, Bionic Pony: Derelict Fleet is a fast-paced space combat game. You are tasked with defending a refugee fleet as you travel the stars searching for a new colony to call home. Bionic Pony is a small indie studio based in Tampa, FL that started making Xbox Live indie games in 2010. (Xbox One)
ERMO, Nonostante: ERMO is a relaxing puzzle game featuring calming, peaceful graphics. Immerse yourself in the landscapes and colors of ERMO and let yourself be carried away. You will learn the rules in a few seconds, but ERMO will keep you hooked for hours. (Xbox One)
GalactiMAX!, ONLYUSEmeFEET: In the vast darkness of space, GalactiMAX has the player shooting aliens for points to pierce the heavens in classic arcade shooter action! As more aliens are defeated, the player’s ship will increase in size and power. How big can this ship get?! (Xbox One, Windows 10)
kubic, Pixel Envision Ltd: kubic is a relaxing optical illusion puzzle game based on M.C. Escher’s art, impossible objects and other geometric designs. The object is to construct the goal configuration from a number of pieces. (Xbox One, Windows 10)
Space Cat!, GershGamesLLC: Shoot your way past an onslaught of enemies and bosses. Collect weapon upgrades like missiles, bombs, laser beams and much more. GershGamesLLC is a group of young hobbyists that makes games for fun on the weekends. (Xbox One, Windows 10)
Stereo Aereo, The Stonebot Studio: Stereo Aereo is an action rhythm game inspired by the pop-culture influences of the ’80s. You, the player, have to make sure that the mediocre space rock band Stereo Aereo gets to their life-changing concert on time in this comic-styled sci-fi game. (Xbox One, Windows 10)
Finally, to celebrate the Creators Program becoming open to any developer, we’re also highlighting the Dream.Build.Play contest, which has an Xbox One category for any game developer who incorporates Creators Program features into their game. So not only can you get your game on the console for the first time, you have a shot at winning some cash money while you do it. Sounds good to us!

ES Modules in Node Today!

Editor’s Note: Today’s post is a guest post from John-David Dalton, a Program Manager on the Microsoft Edge team and creator of the popular Lodash JavaScript library, sharing the news of a new community project to bring ECMAScript modules to Node.
I’m excited to announce the release of @std/esm (standard/esm), an opt-in, spec-compliant, ECMAScript (ES) module loader that enables a smooth transition between Node and ES module formats with near built-in performance! This fast, small, zero dependency package is all you need to enable ES modules in Node 4+ today!
@std/esm used in the Node REPL
A tale of two module formats
With ESM landing in browsers, attention is turning to Node’s future ESM support. Unlike browsers, which have an out-of-band parse goal signal and no prior module format, support for ESM in Node is a bit more…prickly. Node’s legacy module format, a CommonJS (CJS) variant, is a big reason for Node’s popularity, but CJS also complicates Node’s future ESM support. As a refresher, let’s look at an example of both module syntaxes.
const a = require("./a")
module.exports = { a, b: 2 }
import a from "./a"
export default { a, b: 2 }
Note: For more in-depth comparisons see Nicolás Bevacqua’s excellent post.
Because CJS is not compatible with ESM, a distinction must be made. After much discussion, Node has settled on using the “.mjs” (modular JavaScript) file extension to signal the “module” parse goal. Node has a history of processing resources by file extension. For example, if you require a .json file, Node will happily load and JSON.parse the result.
ESM support is slated to land, unflagged, in Node v10 around April 2018. This puts developers, especially package authors, in a tough spot. They could choose to:
Go all in, shipping only ESM, and alienate users of older Node versions
Wait until Jan 1, 2020, the day after Node 8 support ends, to go all in
Ship both transpiled CJS and ESM sources, inflating package size and shouldering the responsibility for ensuring 1:1 behavior
None of those choices seem super appealing. The ecosystem needs something that meets it where it is to span the CJS to ESM gap.

The strength of Node.js has always been in the community and user-land packages.
— Sindre Sorhus (@sindresorhus) May 9, 2017

Bridge building
Enter the @std/esm loader, a user-land package designed to bridge the module gap. Since Node now supports most ES2015 features, @std/esm is free to focus solely on enabling ESM.
The loader stays out of your way and tries to be a good neighbor by:
Not polluting stack traces
Working with your existing tools like Babel and webpack
Playing well with other loaders like babel-register (using .babelrc “modules”: false)
Only processing files of packages that explicitly opt in to having @std/esm as a dependency, dev dependency or peer dependency
Supporting versioning (i.e. package “A” can depend on one version of @std/esm and package “B” on another)
Unlike existing ESM solutions, which require shipping transpiled CJS, @std/esm performs minimal source transformations on demand, processing and caching files at runtime. Processing files at runtime has a number of advantages:
Only process what is used, when it’s used
The same code is executed in all Node versions
Features are configurable by module consumers (e.g. module “A” consumes module “C” with the default @std/esm config, while module “B” consumes module “C” with CJS compat rules enabled)
More spec-compliance opportunities (i.e. @std/esm can enforce Node’s ESM rules for environment variables, error codes, path protocol and resolution, etc.)
Standard features
Defaults are important. The @std/esm loader strives to be as spec-compliant as possible while following Node’s planned built-in behaviors. This means, by default, ESM requires the use of the  .mjs extension.
Out of the box, @std/esm just works, no configuration necessary, and supports:
Dynamic import()
The file URI scheme
Live bindings
Loading .mjs files as ESM
Developers have strong opinions on just about everything. To accommodate, @std/esm allows unlocking extra features with the “@std/esm” package.json field. Options include:
Enabling unambiguous module support (i.e. files with at least an import, export, or “use module” pragma are treated as ESM)
Supporting named exports of CJS modules
Top-level await in main modules
Loading gzipped modules
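Opting in happens in package.json. A sketch of what that might look like follows; the option keys and values below are illustrative paraphrases of the features above, so consult the @std/esm README for the exact names:

```json
{
  "name": "my-package",
  "dependencies": {
    "@std/esm": "^0.1.0"
  },
  "@std/esm": {
    "esm": "js",
    "cjs": true,
    "await": true,
    "gzip": true
  }
}
```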
Before I continue, let me qualify the following section:
It’s still super early, mileage may vary, and results may be hand wavey!
Testing was done using Node 9 compiled from PR #14369, which enables built-in ESM support. I measured the time taken to load the 643 modules of lodash-es, converted to .mjs, against a baseline run loading nothing. Keep in mind the @std/esm cache is good for the lifetime of the unmodified file. Ideally, that means you’ll only have a single non-cached load in production.
Loading CJS equivalents was ~0.28 milliseconds per module
Loading built-in ESM was ~0.51 milliseconds per module
First @std/esm no cache run was ~1.6 milliseconds per module
Secondary @std/esm cached runs were ~0.54 milliseconds per module
Initial results look very promising, with cached @std/esm loads achieving near built-in performance! I’m sure, with your help, parse and runtime performance will continue to improve.
Getting started
Run npm i --save @std/esm in your app or package directory.
Call require("@std/esm") before importing ES modules.
module.exports = require("./main.mjs").default
For package authors with sub modules:
// Have "foo" require only "@std/esm".
require("foo")
// Sub modules work!
const bar = require("foo/bar").default
Enable ESM in the Node CLI by loading @std/esm with the -r option:
node -r @std/esm file.mjs
Enable ESM in the Node REPL by loading @std/esm upon entering:
$ node
> require("@std/esm")
@std/esm enabled
> import path from "path"
> path.join("hello", "world")
Meteor’s might
The @std/esm loader wouldn’t exist without Ben Newman, creator of the Reify compiler from which @std/esm is forked. He has proven the loader implementation in production at Meteor since May 2016, in tens of thousands of Meteor apps!
All green thumbs
Even though @std/esm has just been released, it’s already had a positive impact on several related projects:
Fixing Acorn’s strict mode pragma detection and aligning parser APIs
Improving the dynamic import support of Babel and the Acorn plugin (the dynamic import Acorn plugin is used by webpack for code splitting)
Improving the parse, load time, and spec compliance of Reify
Inspiring a fast top-level parser proof of concept
Spurred championing of the export * as ns from "mod" and export default from "mod" proposals
What’s next
Like many developers, I want ES modules yesterday. I plan to use @std/esm in Lodash v5 to not only transition to ESM but also leverage features like gzip module support to greatly reduce its package size.
The @std/esm loader is available on GitHub. It’s my hope that others are as excited and as energized as I am. ES modules are here! This is just the start. What’s next is up to you. I look forward to seeing where you take it.
Final Thought
While this is not a Microsoft release, we’re proud to have a growing number of core contributors to fundamental JavaScript frameworks, libraries, and utilities at Microsoft. Contributors like Maggie Pint of Moment.js, Nolan Lawson of PouchDB, Patrick Kettner of Modernizr, Rob Eisenberg of Aurelia, Sean Larkin of webpack, and Tom Dale of Ember, to name a few, who in addition to their roles at Microsoft, are helping shape the future of JavaScript and the web at large through standards engagement and ecosystem outreach. I’m happy to share this news on the Microsoft Edge blog to share our enthusiasm with the community!
― John-David Dalton, Program Manager, Microsoft Edge

Windows 10 SDK Preview Build 16257 and Mobile Emulator Build 15235 Released

Today, we released a new Windows 10 Preview Build of the SDK and the Mobile Emulator to be used in conjunction with Windows 10 Insider Preview (Build 16257 or greater). The Preview SDK Build 16257 contains bug fixes and changes to the in-development API surface area.
The Preview SDK and Mobile Emulator can be downloaded from the developer section on Windows Insider.
For feedback and updates to the known issues, please see the developer forum.  For new feature requests, head over to our Windows Platform UserVoice.
Things to note:
This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still continue to submit apps that target the Windows 10 Creators Update build or earlier to the Store.
The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2017 here.
Known Issues
Designer fails to render: When viewing the XAML in the Designer Window in Visual Studio, the controls fail to render.  This can be resolved by using Visual Studio 2017.3 Preview.
Compilation fails on non-Windows 10 platforms: When building apps on previous platforms, you may get a build error:
C:\Program Files (x86)\Windows Kits\10\bin\10.0.16232.0\x86\genxbf.dll : C:\Program Files (x86)\Windows Kits\10\bin\10.0.16232.0\x86\genxbf.dll(0,0): Error WMC0621: Cannot resolve ‘GenXbf.dll’ under path ‘C:\Program Files (x86)\Windows Kits\10\bin\10.0.16232.0\x86\genxbf.dll’. Please install the latest version of the Windows 10 Software Development Kit.
Process ‘msbuild.exe’ exited with code ‘1’.
This will occur if the minimum target platform version is set to 10.0.16225.0. To work around this, right-click on your project file and choose Properties, or open your project file in your favorite editor, and change the version to a previously released SDK. For example:
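The example markup did not survive this post’s formatting. As an illustrative sketch, in a C# project the relevant property looks something like this (the version number shown is only an example of a previously released SDK):

```xml
<PropertyGroup>
  <!-- Illustrative: point at any previously released SDK you have installed -->
  <TargetPlatformVersion>10.0.15063.0</TargetPlatformVersion>
</PropertyGroup>
```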


WRL projects fail to compile with MIDLRT error: When building my WRL project that contains a WinRT Component, the project no longer compiles.  I get the following errors:
midlrt : command line error MIDL1012: [msg]argument illegal for switch / [context]ns_prefix
midlrt : command line error MIDL1000: [msg]missing source-file name
To work around this temporarily, you will need to use the previous version of the MidlRT.exe tool. You can do this by changing your Target Platform Version to a currently installed previous SDK.


Breaking Changes
ecmangen.exe removal from the SDK: ecmangen.exe will no longer ship with the Windows SDK. Developers who rely on ecmangen for event manifest creation are advised to install the Creators Update version of the SDK to obtain the file. Developers may also use Notepad or another XML editor of choice for manifest creation. A schema file is available on MSDN to aid in manifest creation for tools that support it.
API Updates and Additions
When targeting new APIs, consider writing your app to be adaptive in order to run correctly on the widest number of Windows 10 devices. Please see Dynamically detecting features with API contracts (10 by 10) for more information.
The following are the API changes since the 16232 Preview SDK. The TreeView control has been removed, but will be back soon in the next release of Windows and the Preview SDK.
Additions from Preview SDK 16232

namespace Windows.Storage {
public sealed class AppDataPaths
public sealed class SystemDataPaths
public sealed class UserDataPaths
}

Removals from Preview SDK 16232

namespace Windows.UI.Xaml.Automation.Peers {
public class TreeViewItemAutomationPeer : ListViewItemAutomationPeer
public class TreeViewListAutomationPeer : SelectorAutomationPeer
}
namespace Windows.UI.Xaml.Controls {
public class TreeView : Control
public sealed class TreeViewExpandingEventArgs
public class TreeViewItem : ListViewItem
public sealed class TreeViewItemClickEventArgs
public class TreeViewList : ListView
public class TreeViewNode : DependencyObject, IBindableIterable, IBindableObservableVector, IBindableVector
public enum TreeViewSelectionMode
public sealed class XamlBooleanToVisibilityConverter : IValueConverter
public sealed class XamlIntegerToIndentationConverter : IValueConverter
}

Windows Subsystem for Linux on Windows Server

The Windows Subsystem for Linux (WSL) is available in Windows Insider builds of Windows Server. Now developers and application administrators can run tools they use in Linux environments alongside Cmd and PowerShell.
If you want to jump straight in, the installation guide is available here.
Why include WSL on Windows Server?
We want Windows, including Windows Server, to be a great place for developers. We know developers, system administrators, people managing services and people building services all occasionally need tools available on Linux.  Many more would like to run Linux tools as part of their workflow as a matter of convenience.
Previously, there were a few options:
Run something like Cygwin and rely on Win32 ports of common GNU tools.
Cygwin is a great toolset but it runs into issues when using tools that haven’t been ported to Windows. Many tools simply aren’t available. This is especially common when trying to build and run Ruby & Java solutions, which utilize some Linux-only Gems, libraries and components.
The tools available through Cygwin and other Win32 ports are also notorious for being out of date – which is understandable, since updating them requires recompiling them for Windows. For Windows users, however, this is both inconvenient and often leads to troublesome compatibility issues when running, building or deploying software.
Use Linux in a virtual machine.
Virtual machines are designed for production workloads on Windows Server. They aren’t ideal for things closely tied to the Windows Server host. If you need basic Linux command-line tools integrated with your Windows system, a virtual machine will be cumbersome.
This is where running Linux on WSL provides value: WSL runs unmodified Linux (ELF64) binaries natively, so you can install and run almost any Linux command-line tool, integrated right into Windows.
With the additions of WSL and Linux containers with Hyper-V isolation, Windows Server offers a wide variety of Linux options that make it a great place and platform for modern developers.
If you’re a server engineer who needs to run Node.js, Ruby, Python, Perl, Bash scripts or other tools that expect Linux behaviors, environments or filesystem layouts, the ability to install and run Linux with WSL expands the tools at your disposal on Windows Server.
What this isn’t — WSL is not a Linux server
Just as with WSL on Windows Client, you can run daemons and jobs like MySQL, PostgreSQL, sshd, etc., via an interactive shell, but you cannot currently use WSL to run persistent Linux services, daemons, jobs, etc. as background tasks.
For these sorts of tools, read more about Linux containers with Hyper-V isolation from the Build 2017 announcement.
How do I get started using WSL on Server?
Windows Subsystem for Linux arrived in Windows Server Insider Build 16237. Follow our new Windows Server WSL Installation Instructions to get started running Linux alongside Cmd and PowerShell on your servers.
Feel free to comment below or reach out to Sarah and Rich via Twitter.

Creating Materials and Lights in the Visual Layer

In today’s post, we’re going to wrap up this series by combining everything we’ve learned so far and taking you through the steps of creating a custom material. We also have an amazing new tool to show you that empowers anyone to design a custom material. To see how you can use these custom materials in your XAML app, be sure to check out the last two posts in this series: XAML and Visual Layer Interop, part one and part two.

The Fluent Design Language is an evolving concept, rather than a one-time design language release like MDL and MDL2. It was designed to expand and grow as Microsoft and the community of creators (developers and designers) add to what it could be. We’ll show you that anything is possible, as designers, developers and the Windows community have common ground to share their creations and create amazing materials.
Let’s get started by first showing you how a material is created by chaining effects, then we’ll explore using the new Material Creator to easily and quickly create materials.
Creating Material with the Visual Layer
Whether we’ll be using the effect in a XamlCompositionBrushBase or painting a SpriteVisual, the construction of the effect graph is the core of the approach. In order to create the material we want, we’ll need the following components:
Effect sources: A SurfaceBrush for the NormalMap and a BackdropBrush for access to the pixels underneath the material
The effect graph: A composite of different effects to control the Material’s reflectance properties and filter effects (such as blur and tint) to customize for UI usage
Lighting: The Visual is in a scene that has a CompositionLight applied
Let’s start with the first source, a SurfaceBrush. This will be provided by using LoadedImageSurface to load a NormalMap.
NormalMap and LoadedImageSurface

If you’ve had any experience with 3D computer graphics, perhaps as a game developer, you may already know what a normal map is, and the image above may look familiar. If you’ve never worked with one before, normal mapping is a technique that determines the reflectance of light at every pixel (read more about normal mapping here). The Visual Layer in Windows 10 gives you a choice of industry-standard reflectance models: Blinn-Phong and Physically Based Blinn-Phong. Go here to read more about the math behind how this is done.
For today’s demo, we took a 2D picture of a textured surface and transformed it into a normal map image using an image editor. There are many image editing tools that let you do this; use whichever one you prefer to create your image.
To get started, we can load the image using the new LoadedImageSurface API. Let’s add a private field for the LoadedImageSurface, load the image and create a CompositionSurfaceBrush with it.

// Load the NormalMap onto an ICompositionSurface using LoadedImageSurface
LoadedImageSurface _surface = LoadedImageSurface.StartLoadFromUri(
    new Uri("ms-appx:///Images/NormalMap.jpg"), new Size(512, 384));

// Create a SurfaceBrush from the loaded surface
Compositor compositor = Window.Current.Compositor;
CompositionSurfaceBrush normalMap = compositor.CreateSurfaceBrush(_surface);
normalMap.Stretch = CompositionStretch.Uniform;

Now we’re ready to move on to creating and chaining effects.
Chaining Effects to create the effect graph
For our material, we are going to create a chain of effects that leverages Win2D’s ArithmeticCompositeEffect (note: be sure to add the Win2D NuGet package to your project). Any effect can be used as an input source for another effect, which lets you chain effects together into a graph with one or more inputs.
ArithmeticCompositeEffect lets you assign two sources for the composite, giving each one a weight toward the final effect. For example, Source1 at 0.75 (75%) and Source2 at 0.25 (25%). You can also use an additional ArithmeticCompositeEffect as one of the sources to add more effects in the composite chain.
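Per channel, ArithmeticCompositeEffect computes a weighted combination of its two sources: MultiplyAmount * Source1 * Source2 + Source1Amount * Source1 + Source2Amount * Source2 (plus an optional Offset). A quick sketch of that formula, in Python purely for illustration:

```python
def arithmetic_composite(s1, s2, s1_amount, s2_amount, multiply_amount=0.0, offset=0.0):
    """Per-channel formula used by Win2D's ArithmeticCompositeEffect:
    multiply*s1*s2 + s1_amount*s1 + s2_amount*s2 + offset."""
    return multiply_amount * s1 * s2 + s1_amount * s1 + s2_amount * s2 + offset

# Blend two channel values 75% / 25%, as in the example above:
print(arithmetic_composite(0.8, 0.4, 0.75, 0.25))  # approximately 0.7
```

With MultiplyAmount set to 0 (as in the code below), the effect is a straight weighted sum of its two sources, which is exactly what we want for layering the tint, blur and lighting contributions.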
Let’s step back for a minute and think about how we want to create the composite:
Parent ArithmeticCompositeEffect, to be used for the Brush
Source1: Child ArithmeticCompositeEffect
Source1: ColorSourceEffect for tint coloring
Source2: GaussianBlurEffect using the BackdropBrush for its source
Source2: SceneLightingEffect using the NormalMap for its source
For Source1, we’ll combine a ColorSourceEffect and a GaussianBlurEffect (from Win2D) with a nested ArithmeticCompositeEffect. For Source2, we’ll use a SceneLightingEffect (from Windows.UI.Composition.Effects). This will manipulate the reflective properties of the effect’s source when a CompositionLight (from a XamlLight, for example) is applied.
Note that the SceneLightingEffect is used to modify the default lighting applied to the contents of a SpriteVisual targeted by a CompositionLight. In today’s example, we are going to create a SurfaceBrush using a NormalMap (loaded by LoadedImageSurface) to define the dents and bumps that the light reflects off of.
Furthermore, in order to use the SceneLightingEffect, the content being modified must be defined as one of the sources into a multi-input effect graph, with the other input being the SceneLightingEffect. For example, above, the content whose lighting properties are being modified is defined by Source1 of the parent ArithmeticCompositeEffect.
Here’s what the code looks like for the effect graph:

// Define the effect graph
const float glassLightAmount = 0.5f;
const float glassBlurAmount = 0.95f;
Color tintColor = Color.FromArgb(255, 128, 128, 128);

var graphicsEffect = new ArithmeticCompositeEffect()
{
    Name = "LightComposite",
    Source1Amount = 1,
    Source2Amount = glassLightAmount,
    MultiplyAmount = 0,

    // Nested composite to combine the Blur and Color tint effects
    Source1 = new ArithmeticCompositeEffect()
    {
        Name = "BlurComposite",
        Source1Amount = 1 - glassBlurAmount,
        Source2Amount = glassBlurAmount,
        MultiplyAmount = 0,

        Source1 = new ColorSourceEffect()
        {
            Name = "Tint",
            Color = tintColor
        },

        Source2 = new GaussianBlurEffect()
        {
            BlurAmount = 20,
            Source = new CompositionEffectSourceParameter("Backdrop"),
            Optimization = EffectOptimization.Balanced,
            BorderMode = EffectBorderMode.Hard
        }
    },

    // The SceneLightingEffect, which will use a NormalMap
    Source2 = new SceneLightingEffect()
    {
        AmbientAmount = 0.15f,
        DiffuseAmount = 1,
        SpecularAmount = 0.1f,
        NormalMapSource = new CompositionEffectSourceParameter("NormalMap")
    }
};

Notice the SceneLightingEffect’s NormalMapSource property and the GaussianBlurEffect’s Source; these are parameter-provided sources. We will set what these parameters resolve to as we pull everything together to create the CompositionEffectBrush:

// Create the EffectFactory and the CompositionEffectBrush
CompositionEffectFactory effectFactory = Window.Current.Compositor.CreateEffectFactory(graphicsEffect);
CompositionEffectBrush effectBrush = effectFactory.CreateBrush();

// Create the BackdropBrush; this is used by the GaussianBlurEffect
CompositionBackdropBrush backdrop = Window.Current.Compositor.CreateBackdropBrush();

// Set Sources to Effect
effectBrush.SetSourceParameter("NormalMap", normalMap);
effectBrush.SetSourceParameter("Backdrop", backdrop);

With the CompositionEffectBrush completed, we can now use it to paint a SpriteVisual, like this:

SpriteVisual spriteVisual = Window.Current.Compositor.CreateSpriteVisual();
spriteVisual.Brush = effectBrush;

If you’re primarily a XAML dev, you can use this effect in a XamlCompositionBrushBase. Let’s take a look.
Using the CompositionEffectBrush in a XamlCompositionBrushBase
As I mentioned earlier, we can also create this effect graph in a XamlCompositionBrushBase and set the XamlCompositionBrushBase’s CompositionBrush property. If you haven’t read the post in this series on how to create a XamlCompositionBrushBase, go here to catch up.
As with the other XamlCompositionBrushBase implementations, we build the effect graph in the OnConnected method and make sure that the user’s device supports effects. This can be done using the AreEffectsSupported method of the CompositionCapabilities API.
Here’s the full class:

public sealed class MaterialBrush : XamlCompositionBrushBase
{
    private LoadedImageSurface _surface;

    protected override void OnConnected()
    {
        if (DesignMode.DesignModeEnabled) return;

        Compositor compositor = Window.Current.Compositor;

        // CompositionCapabilities: are effects supported?
        bool usingFallback = !CompositionCapabilities.GetForCurrentView().AreEffectsSupported();
        FallbackColor = Color.FromArgb(100, 60, 60, 60);

        if (usingFallback)
        {
            // If effects are not supported, use the fallback solid color
            CompositionBrush = compositor.CreateColorBrush(FallbackColor);
            return;
        }

        // Load the NormalMap onto an ICompositionSurface using LoadedImageSurface
        _surface = LoadedImageSurface.StartLoadFromUri(
            new Uri("ms-appx:///Images/NormalMap.jpg"), new Size(512, 384));

        // Create a SurfaceBrush from the loaded surface
        CompositionSurfaceBrush normalMap = compositor.CreateSurfaceBrush(_surface);
        normalMap.Stretch = CompositionStretch.Uniform;

        // Define the effect graph
        const float glassLightAmount = 0.5f;
        const float glassBlurAmount = 0.95f;
        Color tintColor = Color.FromArgb(255, 128, 128, 128);

        var graphicsEffect = new ArithmeticCompositeEffect()
        {
            Name = "LightComposite",
            Source1Amount = 1,
            Source2Amount = glassLightAmount,
            MultiplyAmount = 0,

            Source1 = new ArithmeticCompositeEffect()
            {
                Name = "BlurComposite",
                Source1Amount = 1 - glassBlurAmount,
                Source2Amount = glassBlurAmount,
                MultiplyAmount = 0,

                Source1 = new ColorSourceEffect()
                {
                    Name = "Tint",
                    Color = tintColor
                },

                Source2 = new GaussianBlurEffect()
                {
                    BlurAmount = 20,
                    Source = new CompositionEffectSourceParameter("Backdrop"),
                    Optimization = EffectOptimization.Balanced,
                    BorderMode = EffectBorderMode.Hard
                }
            },

            Source2 = new SceneLightingEffect()
            {
                AmbientAmount = 0.15f,
                DiffuseAmount = 1,
                SpecularAmount = 0.1f,
                NormalMapSource = new CompositionEffectSourceParameter("NormalMap")
            }
        };

        // Create the EffectFactory and EffectBrush
        CompositionEffectFactory effectFactory = compositor.CreateEffectFactory(graphicsEffect);
        CompositionEffectBrush effectBrush = effectFactory.CreateBrush();

        // Create the BackdropBrush used by the GaussianBlurEffect
        CompositionBackdropBrush backdrop = compositor.CreateBackdropBrush();

        // Set the sources on the effect
        effectBrush.SetSourceParameter("NormalMap", normalMap);
        effectBrush.SetSourceParameter("Backdrop", backdrop);

        // Set the EffectBrush as the brush that XamlCompositionBrushBase paints onto the XAML UIElement
        CompositionBrush = effectBrush;
    }

    protected override void OnDisconnected()
    {
        // Clean up resources
        _surface?.Dispose();
        _surface = null;

        CompositionBrush?.Dispose();
        CompositionBrush = null;
    }
}

To see this in action, let’s create a Grid in the middle of our page’s root Grid and set an instance of our MaterialBrush as that Grid’s Background brush:

<Grid Background="Gray">
    <Grid Width="580">
        <Grid.Background>
            <!-- Our new MaterialBrush -->
            <brushes:MaterialBrush />
        </Grid.Background>
    </Grid>
</Grid>

Here’s what it would look like if you ran the app now:

This is because you’re missing the second part of the approach: the lighting!
Illuminating the Material with Lights
In the last post, we created two lights (an AmbientLight “AmbLight” and the SpotLight “HoverLight”). We’ll use them today to apply lighting to the UIElement that is using our custom material.
Since our MaterialBrush uses the new SceneLightingEffect with a normal map, any lights applied will enhance the material per the SceneLightingEffect’s configuration. Note that lighting isn’t strictly necessary, but it can greatly enhance your material. For example, if you’re using an Acrylic material in your app, adding Reveal will enhance the Acrylic.
Let’s now add the two XamlLights to the Grid:

<Grid Background="Gray">
    <Grid Width="580">
        <Grid.Background>
            <brushes:MaterialBrush />
        </Grid.Background>

        <!-- Added lights -->
        <Grid.Lights>
            <lights:HoverLight />
            <lights:AmbLight />
        </Grid.Lights>
    </Grid>
</Grid>

Now, this is what you’ll see at runtime:

What if it were easier to create and experiment with new materials? What if there were a tool that anyone could use? Let’s take a look at what’s coming to the WindowsUIDevLabs GitHub repo: the Material Creator tool.
Using the new Material Creator
Introducing the new Material Creator tool!

Creating custom materials sometimes requires a bit of experimentation and tweaking to get the effect’s property configuration just right. That takes time if you have to redeploy your app for every tweak. What if you could change effect properties and material layers in real time?
The Material Creator can be found on the WindowsUIDevLabs GitHub repo in the demos folder here. (Note: you need to be running Windows 10, build 16225 or higher to use the Material Creator).
Generating the SceneLightingEffect code

One of the great features of the tool is being able to see the effect graph after you’re done creating the material. Click the ellipsis next to the save button and select “view effect graph” to see the C# code for the SceneLightingEffect. You can then copy and use this code directly in your custom material class.
If you go back up to the part of this article where we created the ArithmeticCompositeEffect that contains a SceneLightingEffect, that’s where you can use this code.
Saving and Loading Materials
The Material Creator can also save and load materials! If you want to save your current progress on a material, or share a completed material with another developer, just click the Save button and it will create a JSON file containing all the layers and effect configurations. To load an existing material, or edit a material shared with you, just click Load and select the JSON file.
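Because the saved material is plain JSON, it can be versioned, diffed and shared like any other project asset. As a purely illustrative sketch (the Material Creator’s actual file schema isn’t documented here; the keys below are hypothetical), the save/load round trip amounts to serializing and deserializing a layer description:

```python
import json

# Hypothetical material description -- NOT the Material Creator's real schema.
material = {
    "name": "BrushedMetal",
    "layers": [
        {"type": "SceneLightingEffect", "ambient": 0.15, "diffuse": 1.0, "specular": 0.1},
        {"type": "GaussianBlurEffect", "blurAmount": 20},
    ],
}

text = json.dumps(material, indent=2)    # what "Save" would write to disk
restored = json.loads(text)              # what "Load" would read back
print(restored["layers"][0]["ambient"])  # 0.15
```

The round trip is lossless, which is what makes the designer-to-developer handoff described below workable.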
The key takeaway is that you don’t need to be a developer to create materials. A designer can create a material and share the saved JSON file with a developer for implementation in a XamlCompositionBrushBase. Even Windows enthusiasts, like the Windows Insiders, can start building out a universe of materials to drive the evolution of Fluent Design.
Blog Series Wrap up: The future of Fluent Design materials
Acrylic and Reveal are stunning examples of how combining Material with Lights can alter the Windows experience, but they’re just the beginning of what is possible. The vision for the Fluent Design Language going forward is that developers and designers can easily build custom materials, innovate and share them.
The message we want you to walk away with is that you can build new Materials, for two primary reasons:
Because you’re a Creator
Because it embodies your brand
We look forward to seeing what kinds of materials you create for your Windows apps! If you’ve already built your own material, feel free to share in the comments below.
Demo Code:
Material Creator
Brush Interop
Light Interop

Blog posts: XAML and Visual Layer Interop series
Part One
Part Two

Lighting Overview
CompositionCapabilities API
Light Types
Mathematics of lighting
WindowsUIDevLabs GitHub repo (contains Samples Gallery app)
Win2D Effects API Documentation
