The UWP Community Toolkit v2.1

We are extremely excited to announce the latest update to the UWP Community Toolkit, version 2.1!
This update builds on top of the previous version and continues to align the toolkit closer to the Windows 10 Fall Creators Update SDK. Thanks to the continued support and help of the community, all packages have been updated to target the Fall Creators Update, several controls, helpers, and extensions have been added or updated, and the documentation and design time experience have been greatly improved.
Below is a quick list of a few of the major updates in this release. Head over to the release notes for a complete overview of what’s new in 2.1.
DockPanel
This release introduces the DockPanel control, which makes it easy to dock elements to the left, right, top, bottom or center.

#DockPanel is now part of #UwpToolkit get the pre-release from here https://t.co/ccEz8R6qSa thanks to @metulev & @dotMorten for their review pic.twitter.com/Gfp566kFAE
— Ibraheem Osama (@IbraheemOM) November 2, 2017
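
Here’s a minimal sketch of the control in use. The Dock attached property and LastChildFill behavior shown here mirror the WPF DockPanel this control is modeled on; see the toolkit documentation for the full API.

<controls:DockPanel LastChildFill="True">
    <Border controls:DockPanel.Dock="Top" Height="50" Background="DarkGray" />
    <Border controls:DockPanel.Dock="Left" Width="100" Background="Gray" />
    <!-- The last child fills the remaining space -->
    <Grid Background="LightGray" />
</controls:DockPanel>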

HeaderedContentControl and HeaderedItemsControl
There are now two controls, HeaderedContentControl and HeaderedItemsControl, that allow content to be easily displayed with a header that can be templated.

<controls:HeaderedContentControl Header="Hello header!">
    <Grid Background="Gray">
        <!-- Content goes here -->
    </Grid>
</controls:HeaderedContentControl>
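
HeaderedItemsControl works the same way for collections. Here’s a minimal sketch; the Items property bound below is an illustrative placeholder for your own collection.

<controls:HeaderedItemsControl Header="My items" ItemsSource="{x:Bind Items}">
    <controls:HeaderedItemsControl.ItemTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding}" />
        </DataTemplate>
    </controls:HeaderedItemsControl.ItemTemplate>
</controls:HeaderedItemsControl>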

Connected and Implicit Animation in XAML
There are two new sets of XAML attached properties that enable working with composition animations directly in XAML. Implicit animations (including show and hide animations) can now be added directly to elements in XAML:

<Border extensions:VisualExtensions.NormalizedCenterPoint="0.5">
    <animations:Implicit.ShowAnimations>
        <animations:TranslationAnimation Duration="0:0:1" To="0, 100, 0" />
        <animations:OpacityAnimation Duration="0:0:1" To="1.0" />
    </animations:Implicit.ShowAnimations>
</Border>

Connected animations can now be defined directly on elements in XAML, by simply adding the same key to elements on different pages:

<!-- Page 1 -->
<Border x:Name="Element" animations:Connected.Key="item"></Border>

<!-- Page 2 -->
<Border x:Name="Element" animations:Connected.Key="item"></Border>

Improved design time experience
This release adds designer support for the controls, including Visual Studio Toolbox integration and an improved design-time experience: properties now appear in the proper category of the properties grid, complete with hover tooltips.

Added @VisualStudio Toolbox integration to #UWPToolkit: https://t.co/SZ6Tf3b0cf #DragNDropLikeItsHot pic.twitter.com/G4s73wXUsi
— Morten Nielsen (@dotMorten) August 31, 2017

New SystemInformation properties
The SystemInformation class now includes new properties and methods that make it easier to provide first-run (or related) experiences or collect richer analytics.

The #uwptoolkit got some new SystemInformation properties fresh from the oven thanks to @mrlacey. What would you use these for? https://t.co/cFjGWSBxPX pic.twitter.com/Pft6nWbx0M
— Nikola Metulev (@metulev) October 13, 2017

Easy transition to new Fall Creators Update controls
To enable a smooth transition from existing toolkit controls to the new Fall Creators Update controls, the HamburgerMenu and SlidableListItem have new properties to use the NavigationView and SwipeControl, respectively, when running on the Fall Creators Update. Take a look at the documentation for how this works.
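
As a sketch of the opt-in pattern (the property names below come from the 2.1 release notes – verify them against the documentation for your version):

<!-- Use NavigationView and SwipeControl automatically on the Fall Creators
     Update; on earlier versions of Windows 10 the controls behave as before. -->
<controls:HamburgerMenu UseNavigationViewWhenPossible="True" />
<controls:SlidableListItem UseSwipeControlWhenPossible="True" />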
Documentation
All documentation is now available at Microsoft Docs. In addition, there is new API documentation as part of the .NET API Browser.
Built by the Community
This update would not have been possible without the community’s support and participation. If you are interested in participating in the development but don’t know how to get started, check out our “help wanted” issues on GitHub.
As a reminder, although most of the development efforts and usage of the UWP Community Toolkit is for Desktop apps, it also works great on Xbox One, Mobile, HoloLens, IoT and Surface Hub devices. You can get started by following this tutorial, or preview the latest features by installing the UWP Community Toolkit Sample App from the Microsoft Store.
To join the conversation on Twitter, use the #uwptoolkit hashtag.

New DirectX 12 features in Windows 10 Fall Creators Update

We’ve come a long way since we launched DirectX 12 with Windows 10 on July 29, 2015. Since then, we’ve heard every bit of feedback and improved the API to enhance stability and offer more versatility. Today, developers using DirectX 12 can build games that have better graphics, run faster and that are more stable than ever before. Many games now run on the latest version of our groundbreaking API and we’re confident that even more anticipated, high-end AAA titles will take advantage of DirectX 12.
DirectX 12 is ideal for powering the games that run on PC and on Xbox One X, the most powerful console on the market. Simply put, our consoles work best with our software: DirectX 12 is perfectly suited for native 4K games on the Xbox One X.
In the Windows 10 Fall Creators Update, we’ve added features that make it easier for developers to debug their code. In this article, we’ll explore how these features work and offer a recap of what we added in Windows 10 Creators Update.
But first, let’s cover how debugging a game or a program utilizing the GPU is different from debugging other programs.
As covered previously, DirectX 12 offers developers unprecedented low-level access to the GPU (check out Matt Sandy’s detailed post for more info). But even though this low-level access enables developers to write code that’s substantially faster and more efficient, it comes at a cost: the API is more complicated, which means that there are more opportunities for mistakes.
Many of these mistakes happen GPU-side, which makes them a lot more difficult to fix. When the GPU crashes, it can be difficult to determine exactly what went wrong; after a crash, we’re often left with little information besides a cryptic error message. The reason these error messages can be vague lies in the inherent differences between CPUs and GPUs. Readers familiar with how GPUs work should feel free to skip the next section.
The CPU-GPU Divide
Most of the processing that happens in your machine happens in the CPU, as it’s a component designed to resolve almost any computation it’s given. It does many things, and for some operations it forgoes efficiency for versatility. This is the entire reason that GPUs exist: to perform better than the CPU at the kinds of calculations that power the graphically intensive applications of today. Basically, rendering calculations (i.e. the math behind generating images from 2D or 3D objects) are small and many: performing them in parallel makes a lot more sense than doing them consecutively. The GPU excels at these kinds of calculations. This is why game logic, which often involves long, varied and complicated computations, happens on the CPU, while the rendering happens GPU-side.
Even though applications run on the CPU, many modern-day applications require a lot of GPU support. These applications send instructions to the GPU and then receive processed work back. For example, an application that uses 3D graphics will tell the GPU the positions of every object that needs to be drawn. The GPU will move each object to its correct position in the 3D world, taking into account things like lighting conditions and the position of the camera, and then do the math to work out what all of this should look like from the perspective of the user. The GPU then sends back the image that should be displayed on the system’s monitor.

To the left, we see a camera, three objects and a light source in Unity, a game development engine. To the right, we see how the GPU renders these 3-dimensional objects onto a 2-dimensional screen, given the camera position and light source. 
For high-end games with thousands of objects in every scene, this process of turning complicated 3-dimensional scenes into 2-dimensional images happens at least 60 times a second and would be impossible to do using the CPU alone!
Because of hardware differences, the CPU can’t talk to the GPU directly: when GPU work needs to be done, CPU-side orders need to be translated into native machine instructions that our system’s GPU can understand. This work is done by hardware drivers, but because each GPU model is different, the instructions delivered by each driver are different too! Don’t worry though – here at Microsoft, we devote a substantial amount of time to making sure that GPU manufacturers (AMD, Nvidia and Intel) provide drivers that DirectX can communicate with across devices. This is one of the things that our API does; we can see DirectX as the software layer between the CPU and the GPU hardware drivers.
Device Removed Errors
When games run error-free, DirectX simply sends orders (commands) from the CPU, via the hardware drivers, to the GPU, and the GPU sends processed images back. After commands are translated and sent to the GPU, the CPU can no longer track them, which means that when the GPU crashes, it’s really difficult to find out what happened. Finding out which command caused the crash used to be almost impossible, but we’re changing this with two awesome new features that help developers figure out exactly what happened when things go wrong in their programs.
One kind of error happens when the GPU becomes temporarily unavailable to the application, known as device removed or device lost errors. Most of these errors happen when a driver update occurs in the middle of a game. But sometimes, these errors happen because of mistakes in the programming of the game itself. Once the device has been logically removed, communication between the GPU and the application is terminated and access to GPU data is lost.
Improved Debugging: Data
During the rendering process, the GPU writes to and reads from data structures called resources. Because it takes time to do translation work between the CPU and GPU, if we already know that the GPU is going to use the same data repeatedly, we might as well just put that data straight into the GPU. In a racing game, a developer will likely want to do this for all the cars, and the track that they’re going to be racing on. All this data will then be put into resources. To draw just a single frame, the GPU will write to and read from many thousands of resources.
Before the Fall Creators Update, applications had no direct control over the underlying resource memory. However, there are rare but important cases where applications may need to access resource memory contents, such as right after device removed errors.
We’ve implemented a tool that does exactly this. With access to the contents of resource memory, developers now have substantially more useful information to help them determine exactly where an error occurred. They can spend less time hunting down the causes of errors, leaving more time to fix them across systems.
For technical details, see the OpenExistingHeapFromAddress documentation.
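
As a rough sketch of the pattern – where device is assumed to be an ID3D12Device3 pointer on hardware that reports existing-heap support, and with error handling elided – the idea is to back a D3D12 heap with ordinary virtual memory that the CPU keeps even if the GPU goes away:

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Allocate CPU-visible memory and open it as a D3D12 heap.
// (device is an assumed ID3D12Device3* obtained elsewhere.)
const SIZE_T size = 64 * 1024;  // heap sizes must be 64KB-aligned
void* cpuAddress = VirtualAlloc(nullptr, size,
                                MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);

ComPtr<ID3D12Heap> heap;
device->OpenExistingHeapFromAddress(cpuAddress, IID_PPV_ARGS(&heap));

// Place a buffer in that heap. The GPU writes to the buffer, while the CPU can
// read cpuAddress directly -- even after a device removed error.
D3D12_RESOURCE_DESC desc = {};
desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
desc.Width = size;
desc.Height = 1;
desc.DepthOrArraySize = 1;
desc.MipLevels = 1;
desc.SampleDesc.Count = 1;
desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
desc.Flags = D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER;  // follows the pattern in Microsoft's samples

ComPtr<ID3D12Resource> buffer;
device->CreatePlacedResource(heap.Get(), 0, &desc,
                             D3D12_RESOURCE_STATE_COPY_DEST, nullptr,
                             IID_PPV_ARGS(&buffer));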
Improved Debugging: Commands
We’ve implemented another tool to be used alongside the previous one. Essentially, it can be used to create markers that record which commands sent from the CPU have already been executed and which ones are in the process of executing. Right after a crash, even a device removed crash, this information remains behind, which means we can quickly figure out which commands might have caused it—information that can significantly reduce the time needed for game development and bug fixing.
For technical details, see the WriteBufferImmediate documentation.
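
A sketch of the breadcrumb idea, assuming cmdList is an ID3D12GraphicsCommandList2 and markerAddr is the GPU virtual address of a buffer the CPU can still read after a crash (such as one placed in the existing heap above):

// cmdList, markerAddr and vertexCount are assumed to exist in the surrounding
// code. MARKER_IN writes once the preceding commands have started; MARKER_OUT
// writes only after the preceding commands have completed through the pipeline.
D3D12_WRITEBUFFERIMMEDIATE_PARAMETER param = { markerAddr, 1 };
D3D12_WRITEBUFFERIMMEDIATE_MODE mode = D3D12_WRITEBUFFERIMMEDIATE_MODE_MARKER_IN;
cmdList->WriteBufferImmediate(1, &param, &mode);   // value 1: the draw was reached

cmdList->DrawInstanced(vertexCount, 1, 0, 0);

param.Value = 2;
mode = D3D12_WRITEBUFFERIMMEDIATE_MODE_MARKER_OUT;
cmdList->WriteBufferImmediate(1, &param, &mode);   // value 2: the draw completed

// After a device removed error, finding a 1 but no 2 in the marker buffer
// points squarely at this draw call.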
What does this mean for gamers? These tools give developers direct ways to detect and understand the root causes of what’s going on inside your machine. It’s like the difference between trying to figure out what’s wrong with your pickup truck from the hot smoke coming out of the front, versus having your Tesla’s internal computer system tell you exactly which part failed and needs to be replaced.
Developers using these tools will have more time to build high-performance, reliable games instead of continuously searching for the root causes of a particular bug.
Recap of Windows 10 Creators Update
In the Creators Update, we introduced two new features: Depth Bounds Testing and Programmable MSAA. Where the features we rolled out for the Windows 10 Fall Creators Update were mainly for making it easier for developers to fix crashes, Depth Bounds Testing and Programmable MSAA are focused on making it easier to program games that run faster with better visuals. These features can be seen as additional tools that have been added to a DirectX developer’s already extensive tool belt.
Depth Bounds Testing
Assigning depth values to pixels is a technique with a variety of applications: once we know how far away pixels are from a camera, we can throw away the ones too close or too far away. The same can be done to figure out which pixels fall inside and outside a light’s influence (in a 3D environment), which means that we can darken and lighten parts of the scene accordingly. We can also assign depth values to pixels to help us figure out where shadows are. These are only some of the applications of assigning depth values to pixels; it’s a versatile technique!
We now enable developers to specify a pixel’s minimum and maximum depth value; pixels outside of this range get discarded. Because doing this is now an integral part of the API and because the API is closer to the hardware than any software written on top of it, discarding pixels that don’t meet depth requirements is now something that can happen faster and more efficiently than before.
Simply put, developers will now be able to make better use of depth values in their code and can free GPU resources to perform other tasks on pixels or parts of the image that aren’t going to be thrown away.
Now that developers have another tool at their disposal, for gamers, this means that games will be able to do more for every scene.
For technical details, see the OMSetDepthBounds documentation.
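
A sketch of the call, assuming cmdList is an ID3D12GraphicsCommandList1 and the bound pipeline state was created with DepthBoundsTestEnable set (via D3D12_DEPTH_STENCIL_DESC1). The 0.2–0.8 range is illustrative; a deferred lighting pass, for example, would derive it from the light’s extent:

// cmdList and vertexCount are assumed to exist in the surrounding code.
cmdList->OMSetDepthBounds(0.2f, 0.8f);        // keep only pixels with depth in [0.2, 0.8]
cmdList->DrawInstanced(vertexCount, 1, 0, 0); // pixels outside the range are discarded early
cmdList->OMSetDepthBounds(0.0f, 1.0f);        // restore the default full range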
Programmable MSAA
Before we explore this feature, let’s first discuss anti-aliasing.
Aliasing refers to the unwanted distortions that happen during the rendering of a scene in a game. There are two kinds of aliasing that happen in games: spatial and temporal.
Spatial aliasing refers to the visual distortions that happen when an image is represented digitally. Because the pixels in a monitor or television screen are not infinitely small, there is no way to exactly represent lines that aren’t perfectly vertical or horizontal. Most lines on our screen are therefore not truly straight, but approximations of straight lines. Sometimes the illusion of straight lines is broken: this may appear as stair-like rough edges, or ‘jaggies’, and spatial anti-aliasing refers to the techniques that programmers use to make these kinds of edges smoother and less noticeable. The solution to these distortions is baked into the API with hardware-accelerated MSAA (Multi-Sample Anti-Aliasing), an efficient anti-aliasing technique that combines quality with speed. Before the Creators Update, developers already had the tools to enable MSAA and specify its granularity (the amount of anti-aliasing done per scene) with DirectX.

Side-by-side comparison of the same scene with spatial aliasing (left) and without (right). Notice in particular the jagged outlines of the building and sides of the road in the aliased image. This still was taken from Forza Motorsport 6: Apex.
But what about temporal aliasing? Temporal aliasing refers to the aliasing that happens over time, caused by the sampling rate (the number of frames drawn per second) being slower than the movement that happens in the scene. To the user, things in the scene jump around instead of moving smoothly. This YouTube video does an excellent job of showing what temporal aliasing looks like in a game.
In the Creators Update, we offer developers more control over MSAA by making it a lot more programmable. For each frame, developers can specify how MSAA works at the sub-pixel level. By alternating the sample pattern on each frame, the effects of temporal aliasing become significantly less noticeable.
Programmable MSAA means that developers have a useful tool in their belt. Our API not only has native spatial anti-aliasing but now also has a feature that makes temporal anti-aliasing a lot easier. With DirectX 12 on Windows 10, PC gamers can expect upcoming games to look better than before.
For technical details, see the SetSamplePositions documentation.
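
A sketch of the per-frame jitter, assuming cmdList is an ID3D12GraphicsCommandList1 on hardware that supports programmable sample positions; the two patterns below are illustrative. Coordinates are in sixteenths of a pixel, in the range [-8, 7]:

// cmdList and frameIndex are assumed to exist in the surrounding code.
// Alternate the 4x MSAA sample pattern between even and odd frames so a
// temporal resolve sees different sub-pixel coverage each frame.
D3D12_SAMPLE_POSITION evenFrame[4] = { {-6, -2}, { 2, -6}, { 6,  2}, {-2,  6} };
D3D12_SAMPLE_POSITION oddFrame[4]  = { {-2, -6}, { 6, -2}, { 2,  6}, {-6,  2} };
cmdList->SetSamplePositions(4, 1, (frameIndex & 1) ? oddFrame : evenFrame);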
Other Changes
Besides several bug fixes, we’ve also updated our graphics debugging software, PIX, every month to help developers optimize their games. Check out the PIX blog for more details.
Once again, we appreciate the feedback shared on DirectX 12 to date, and look forward to delivering even more tools, enhancements and support in the future.
Happy developing and gaming!

Windows Developer Day Returns!

Windows Developer Day is back! Join us on October 10, starting at 9:30 AM PDT via live stream, or attend a viewing party in your area (location list below), as we explore what’s new in the Windows 10 Fall Creators Update for developers.
The day’s schedule features an introductory keynote by Kevin Gallo and members of the Windows engineering team, a live-streamed Q&A session and several streaming sessions diving deeper into the current Windows 10 update.
Learn what’s new for developers in the Windows 10 Fall Creators Update
No matter what you’re working on, you’ll find plenty of new features and improvements to make your software more compelling:
Game devs
Game Mode and new performance enhancements improve the gameplay experience for most of your players.
Xbox Live Creators Program lets you integrate Xbox Live into your game and publish to both Xbox One and Windows 10.
Mixer is the only next-gen streaming service that offers viewers real-time influence and participation in your players’ live streams.
Windows Store improvements help you promote your games with video trailers and control timing and pricing more precisely.
Commercial devs
.NET Standard 2.0 adds more than 20,000 new APIs and lets you share code across your entire .NET code base.
Xamarin lets you use your existing C# and .NET skills to build truly cross-platform apps for iOS, Android and Windows 10 devices.
Desktop Bridge improvements to tooling and more make it much easier to convert your existing Win32 and .NET software to Windows 10.
Windows Mixed Reality delivers new levels of immersion to help you enhance the visual experience of your users.
Consumer devs
Microsoft Graph and UserActivity API make your end-to-end experience seamless by connecting screens and experiences across devices and platforms.
Fluent Design System helps you engage your users continuously across all their devices with beautiful, expressive experiences.
Tooling improvements within Visual Studio make it easier to create, convert and deploy your software.
Live Stream Viewing Parties
Join other developers from your local developer community and attend a Live Stream Viewing Party hosted by a Microsoft Windows Development MVP. Enjoy refreshments, watch the live stream, participate in the Live Q&A alongside your peers and make new community connections!
Here is a list of the locations: 
Amstelveen, Netherlands
Bonstetten, Zurich, Switzerland
Boston, MA, USA
Chicago, IL, USA
Cologne, Germany
Dresden, Germany
Durban, South Africa
Ghent, Belgium
Manchester, UK
Mexico City, Mexico
Milan, Italy
Milwaukee, WI, USA
Moscow, Russia
Munich, Germany
Paris, France
Penang, Malaysia
Santa Cruz de Tenerife, Spain
Singapore, Singapore
Stockholm, Sweden
Vienna, Austria
Zagreb, Croatia
Learn more about Windows Developer Day and sign up here!

New Tools in Windows Device Portal for the Windows 10 Fall Creators Update

In the Windows 10 Fall Creators Update, Device Portal now offers several new tools from across Windows to help you location-test your UWP app, explore Mixed Reality, build new hardware peripherals and test your app’s new installation pipeline. It’s a little bit of goodness for everyone, and we’re excited to share these tools with you.
If you’re not familiar with Device Portal, you can check out the blog posts below to see what other tools you can find in Device Portal, or look at the new docs.microsoft.com to learn how to enable it.
And as always, all of these tools are backed by a REST API, so you can use them from a scripting or client application environment using the Device Portal Wrapper.
Location Based Testing
Most of us don’t have the travel budgets to test our apps across the world – but pretending to travel is almost as good! The Location tool in Device Portal lets you easily change the location that Windows reports to apps. By tapping the “Override” check box, you can swap out the device location for whatever you set using the map or the lat/long text boxes. Be sure to uncheck the box when you’re done so that your location (and time zone) comes back to reality – every vacation must end…

Figure 1: The News app keeping me up to date with local headlines!
This also works for web pages in Microsoft Edge, letting you test your webpages in different parts of the world.
Some notes on what this tool can and cannot do:
This doesn’t change the locale of your PC! So the News app above still saw an EN-US user in the middle of Italy.
You may not see all apps using this location. Some programs don’t use the Windows API to determine location or have special logic (e.g. using your IP address) to determine your location.
This tool marks the PositionSource of the location data as “Default.” Some apps may check for the source and alter their behavior based on it.
Happy travels!
USB Diagnostics
This one goes out to all the hardware folks – if “HLK” or “WDK” sound familiar, you might find this handy. The USB team has updated the USBView tool to work inside Device Portal, so developers working on new hardware can have more tooling at their fingertips.
The USB Devices tool can be a bit tricky to find – head to the hamburger menu in the top right, and go to “Add tools to workspace.” Scroll to the bottom and check the “USB Devices” box, then hit “Add.” And voila – a full view of your system’s USB hubs, controllers and peripherals. The hubs and controllers expand to show individual devices using the + (plus) sign, and clicking the gear expands an item to show its properties.

Streaming App Install Debugging
The Windows 10 Creators Update added “streaming installation” for UWP apps, which allows a user to launch an app before it has finished downloading. To make this easy to test, the App Model team has added a Streaming Install Debugger tool to Device Portal. To use it, deploy an app with content groups to the device, then open the Streaming Install Debugger. In it, you’ll be able to edit the states of the content groups, so you can test your app’s behavior as a streaming install is simulated and ensure it behaves correctly when content groups are missing.

For more details, check out Andy Liu’s blog posts about the new App Installer and Streaming Install Debugger tools.
Mixed Reality Tooling
One of the bigger splashes in the Fall Creators Update is the addition of Mixed Reality to Windows Desktop. As part of that release, we’re including a suite of tools to help developers build great Mixed Reality apps. Two of these tools may look familiar to HoloLens developers – 3D View and a Framerate counter. There’s also a new app launch option that appears when you have an immersive headset attached to your PC, which lets you launch your app in Mixed Reality.
Frame rate is an important factor in making mixed reality apps comfortable, and it’s important for developers to optimize performance to hit full frame rate on the systems they support. The Frame Rate tool in the Device Portal helps by showing developers both the frame rate of their app and of the system’s compositor.

The 3D View helps when testing your immersive headset’s interactions with the real world, displaying its position as it moves through space.

Finally, what good is tooling if you can’t actually run your app in your immersive headset? Now, when you have an immersive headset attached, the Installed Apps tool will add a button letting you launch the app in the HMD. While fully immersive apps will always run in Mixed Reality, this new button is particularly useful for 2D UWP apps (or apps that switch between 2D and immersive) when you want to test them in Mixed Reality.

As always, if you have ideas for Device Portal that would help you write or debug apps, please leave us a note on our UserVoice or upvote an existing request. If you run into bugs, please file them with us via the Feedback Hub.
Related Posts:
Using Device Portal to view debug logs for UWP
Using the App File Explorer to see your app data