
Smooth Interaction and Motion with the Visual Layer in Windows 10 Creators Update

The Composition APIs come with a robust animation engine that provides quick and fluid motion running in a separate process from your Universal Windows Platform (UWP) app. This ensures a consistent 60 frames per second when running your app on an IoT device as well as on a screaming gaming machine. It is, quite simply, fast. This is an essential capability for implementing the Fluent Design System which calls on us to create a sense of cinematic motion in our UWP apps.

The Composition APIs also provide something you probably have never had access to before: the ability to create high-performing, low-level, manipulation-driven custom animations like the one shown above. In the same way that we want our visuals to be fast and smooth, we want our touch interactions to be sticky and responsive. Moving a visual with a finger or a digital pen should result in the visual element clinging to us no matter how fast we push and pull it across the display.

Even if a motion looks good, it also needs to feel good under the finger. It needs to maintain the illusion that we are interacting with a real object. It ought to possess the proper physics so that when we drag a visual across the screen and let go, it continues with the proper inertial movement. Similarly, user controls should provide the right amount of resistance when we pull and release them.

A fast and fluid animation system

The Visual Layer supports both keyframe animations and expression animations. If you have worked with XAML animations before, then you are probably already familiar with how keyframes work. In a keyframe animation, you set values for some property you want to change over time and also assign the duration for the change: in the example below, a start value, a middle value and then an ending value. The animation system will take care of tweening your animation – in other words, generating all the values between the ones you have explicitly specified, based on the easing function you select. Whether you choose a linear easing or a cubic Bézier, the animation system uses it to determine the values when interpolating.
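Conceptually, tweening is just interpolation: an easing function reshapes the normalized progress before the keyframe values are blended. The following standalone C# sketch illustrates the idea (it is not the Composition engine's actual implementation):

```csharp
using System;

class TweenSketch
{
    // Blend between two keyframe values at normalized progress t (0..1),
    // after reshaping t with an easing function.
    public static float Tween(float from, float to, float t, Func<float, float> easing)
    {
        return from + (to - from) * easing(t);
    }

    static void Main()
    {
        Func<float, float> linear = t => t;      // constant speed
        Func<float, float> easeIn = t => t * t;  // starts slow, accelerates

        Console.WriteLine(Tween(0f, 100f, 0.5f, linear)); // prints 50
        Console.WriteLine(Tween(0f, 100f, 0.5f, easeIn)); // prints 25
    }
}
```

The engine does the same kind of evaluation, except it runs on the composition thread at 60 frames per second, independent of your app's UI thread.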

// Create a looping keyframe animation that blurs in and out over four seconds,
// easing between keyframes with a cubic Bezier curve.
CubicBezierEasingFunction cubicBezier = _compositor.CreateCubicBezierEasingFunction(new Vector2(.17f, .67f), new Vector2(1f, 1f));
ScalarKeyFrameAnimation blurAnimation = _compositor.CreateScalarKeyFrameAnimation();
blurAnimation.InsertKeyFrame(0.0f, 0.0f);
blurAnimation.InsertKeyFrame(0.5f, 100.0f, cubicBezier);
blurAnimation.InsertKeyFrame(1.0f, 0.0f, cubicBezier);
blurAnimation.Duration = TimeSpan.FromSeconds(4);
blurAnimation.IterationBehavior = AnimationIterationBehavior.Forever;
_brush.StartAnimation("Blur.BlurAmount", blurAnimation);

A keyframe animation is a fire-and-forget mechanism that is time based. There are situations, however, when you need your animations to coordinate with and drive each other instead of simply moving in synchronized fashion.

In the animation above (source code), each gray gear is animated based on the animation of the gear preceding it. If the preceding gear suddenly goes faster or reverses direction, it forces the following gear to do the same. Keyframe animations can’t create motion effects that work in this way, but expression animations can. They are able to do so because, while keyframe animations are time based, expression animations are reference based.

The critical code that hooks up the gears for animation is found in the following code sample, which uses the new Expression Builder Library – an open source component released alongside the Creators Update – to construct expression animations. The expression below says that the animation should reference and be driven by the RotationAngleInDegrees property of the Visual that is indicated by the parameter “previousGear”. In the next line, the current Visual’s RotationAngleInDegrees property is then animated based on the value referred to in the expression.

private void ConfigureGearAnimation(Visual currentGear, Visual previousGear)
{
    // Reference the previous gear's rotation, negated so that meshed gears
    // rotate in opposite directions.
    var rotateExpression = -previousGear.GetReference().RotationAngleInDegrees;

    // Animate the current gear's rotation based on that expression.
    currentGear.StartAnimation("RotationAngleInDegrees", rotateExpression);
}

But if an animation can be driven by another animation, you may be wondering, couldn’t we also drive an animation with something more concrete like user input? Why, yes. Yes, we can.

The beauty of the ScrollViewer ManipulationPropertySet

Driving an animation from a ScrollViewer using XAML-Composition interop is fairly easy. With just a few lines of code, you can enhance the visuals of a pre-existing ScrollViewer control with a CompositionAnimation by taking advantage of the GetScrollViewerManipulationPropertySet method on the ElementCompositionPreview class. Using an animation expression, you can tie your animation to the Position of your ScrollViewer component.

You would use this technique if you wanted to add a parallax effect to your XAML or to create a sticky header that stays in place as content scrolls beneath it. In the demo illustrated below (source code), a ScrollViewer is even used to drive a parallax effect on a ListView.

Adding parallax behavior to a XAML page can be accomplished in just a few lines.
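The _scrollProperties and visual objects used in the snippet below come from XAML-Composition interop. Here is a minimal sketch of how they might be obtained, assuming a ScrollViewer named MyScrollViewer in your XAML (the element names here are illustrative):

```csharp
// The CompositionPropertySet that mirrors the ScrollViewer's manipulation
// state; its values update in real time on the composition thread.
CompositionPropertySet _scrollProperties =
    ElementCompositionPreview.GetScrollViewerManipulationPropertySet(MyScrollViewer);

// The backing Visual of the element you want to move with the parallax effect.
Visual visual = ElementCompositionPreview.GetElementVisual(parallaxElement);
```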

// Note: We're not using the ScrollViewer's offset values directly. Instead, we use this
// PropertySet, which holds the position values of the ScrollViewer in real time.
var scrollPropSet = _scrollProperties.GetSpecializedReference<ManipulationPropertySetReferenceNode>();
var startOffset = ExpressionValues.Constant.CreateConstantScalar("startOffset", 0.0f);
var parallaxValue = 0.5f;
var itemHeight = 0.0f;
var parallax = (scrollPropSet.Translation.Y + startOffset - (0.5f * itemHeight));
_parallaxExpression = parallax * parallaxValue - parallax;

// The parameter name must match the constant declared above ("startOffset").
_parallaxExpression.SetScalarParameter("startOffset", (float)args.ItemIndex * visual.Size.Y / 4.0f);
visual.StartAnimation("Offset.Y", _parallaxExpression);
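To see what this expression actually computes, here is the same arithmetic as ordinary C#, evaluated once for a hypothetical scroll position (the real expression is re-evaluated by the compositor every frame):

```csharp
using System;

class ParallaxMath
{
    // The Offset.Y value the expression produces for a given scroll
    // translation. A parallaxValue of 0.5 makes content move at half speed.
    public static float ParallaxOffset(float translationY, float startOffset,
                                       float itemHeight, float parallaxValue)
    {
        float parallax = translationY + startOffset - 0.5f * itemHeight;
        return parallax * parallaxValue - parallax;
    }

    static void Main()
    {
        // Scrolled 200 pixels down (Translation.Y = -200) with a startOffset of 100.
        Console.WriteLine(ParallaxOffset(-200f, 100f, 0f, 0.5f)); // prints 50
    }
}
```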

The even more beautiful InteractionTracker

Driving expression animations with a ScrollViewer is extremely powerful, but what if you want to drive animations using touch gestures that aren’t limited to a pan/zoom gesture? Additionally, when using the ScrollViewer’s manipulations, your animations are linked to the UI thread responsiveness and can lose that buttery-smooth feel when the UI thread gets bogged down.

What if you want to pull items toward you with your finger, as in the demo below (source code), or animate multiple flying images across and into the screen as happens in the demo at the top of this post (source code)?

In order to achieve these effects, you would use the new InteractionTracker and VisualInteractionSource classes. InteractionTracker is a state machine that can be driven by active input. InteractionTracker also maintains a series of properties, such as Position and Scale, as part of maintaining that state. This is what you hook up to your animations. The VisualInteractionSource class, on the other hand, determines what kind of input you will use to drive your InteractionTracker and also when to start handling input (touch in particular).

The following sample code demonstrates a basic implementation of an InteractionTracker. The viewportVisual is simply the backing Visual for the root element on the page. You use this as the VisualInteractionSource for the tracker. In doing so, you specify that you are tracking X and Y manipulations. You also indicate that you want to track inertial movement.

_tracker = InteractionTracker.Create(_compositor);

var interactionSource = VisualInteractionSource.Create(viewportVisual);

// Track X and Y manipulations, and keep tracking with inertia after release.
interactionSource.PositionXSourceMode = InteractionSourceMode.EnabledWithInertia;
interactionSource.PositionYSourceMode = InteractionSourceMode.EnabledWithInertia;

_tracker.InteractionSources.Add(interactionSource);


Hooking the tracker up to an expression animation works basically the same way as hooking up a gear Visual to another gear Visual, as you did earlier. You call the CreateExpressionAnimation factory method on the current Compositor and reference the Position property of the tracker.

var positionExpression = _compositor.CreateExpressionAnimation("-tracker.Position");
positionExpression.SetReferenceParameter("tracker", _tracker);

contentVisual.StartAnimation("Offset", positionExpression);

This code uses the InteractionTracker’s position to produce a smooth animation for the Offset of the Visual. You can power the Blur and Opacity animations of your other Visuals the same way. All three animations then work together, with values based on how far the user dragged their finger, to create an amazingly fluid visual experience. Run the demo and try it for yourself (source code).

Those are the basics of driving any animation from any input. What you do with this amazing new power is entirely up to you.

Wrapping up

Expression animations and Visual Layer Interactions are both topics that can become very deep very fast. To help you through these deeper waters, we highly recommend the following videos and articles:

Cortana Skills Kit empowers developers to build intelligent experiences for millions of users

Today, we are pleased to announce the public preview of the Cortana Skills Kit which allows developers to easily create intelligent, personalized experiences for Cortana.

Our vision for Cortana has always been to create a digital personal assistant that’s available to users across all their devices, whenever and wherever they may need an extra hand to be more productive and get things done. With the new Cortana Skills Kit, developers can join in delivering that vision and reach millions of Cortana users across platforms including Windows 10, Android, iOS and soon on even more devices and form factors — like Xbox, the Harman Kardon Invoke smart speaker and inside cars and mixed reality devices.

To build a Cortana skill, developers can create their bot’s conversational logic using the Microsoft Bot Framework, and publish it to the new Cortana Channel within the Bot Framework, bringing speech capabilities to skills. Developers can understand users’ natural input and build custom machine-learned language models through LUIS.ai, and add intelligence with the power of Cognitive Services.

With the Skills Kit, developers can tap into Cortana’s rich knowledge and understanding of the user. Developers can now access knowledge about the user and build highly relevant, personalized experiences based on the user’s preferences and context. Cortana only shares information with the user’s consent.

We realize that we are at the dawn of building conversational experiences for end users. Developers want to reach a large and diverse set of users to understand user needs and behaviors. There are over 145M monthly active users of Cortana worldwide. With the Cortana Skills Kit, developers can immediately reach the 60M users in the US and grow their international reach in the future*. To start building skills today, please visit https://developer.microsoft.com/en-us/Cortana.

We are also excited to announce a wide range of partners who have joined us on this journey and are building Cortana skills. Cortana users will be able to access skills from OpenTable, Expedia, Capital One, StubHub, Food Network, HP, iHeartRadio, Dominos, TuneIn, Uber, Knowmail, MovieTickets.com, Tact, Skyscanner, Fresh Digital, Gigskr, Gupshup, The Motley Fool, Mybuddy, Patron, Porch, Razorfish, StarFish Mint, Talklocal, UPS, WebMD, Pylon, BigOven, CityFalcon, DarkSky, Elokence, BLT Robotics, Wed Guild, AI Games, XAPP Media, GameOn, MegaSuperWeb, Verge and Vokkal.co.

To learn more and discover the currently available skills visit: https://www.microsoft.com/en-us/windows/cortana/cortana-skills/

*Available in US only. Other markets will be added over time.

Windows Developer Awards: Honoring Windows Devs at Microsoft Build 2017

As we ramp up for Build, the Windows Dev team would like to thank you, the developer community, for all the amazing work you have done over the past 12 months. Because of your efforts and feedback, we’ve managed to add countless new features to the Universal Windows Platform and the Windows Store in an ongoing effort to constantly improve. And thanks to your input on the Windows Developer Platform Backlog, you have helped us to prioritize new UWP features.

In recognition of all you have done, this year’s Build conference in Seattle will feature the first-ever Windows Developers Awards given to community developers who have built exciting UWP apps in the last year and published them in the Windows Store. The awards are being given out in four main categories:

  • App Creator of the Year – This award recognizes an app leveraging the latest Windows 10 capabilities. Some developers are pioneers, the first to explore and integrate the latest features in Windows 10 releases. This award honors those who made use of features like Ink, Dial, Cortana, and other features in creative ways.
  • Game Creator of the Year – This award recognizes a game by a first-time publisher in the Windows Store. Windows is the best gaming platform, and it’s easy to see why. From Xbox to PCs to mixed reality, developers are creating the next generation of gaming experiences. This award recognizes developers who went above and beyond to publish innovative, engaging and magical games to the Windows Store over the last year.
  • Reality Mixer of the Year – This award recognizes the app demonstrating a unique mixed reality experience. Windows Mixed Reality lets developers create experiences that transcend the traditional view of reality. This award celebrates those who choose to mix their own view of the world by blending digital and real-world content in creative ways.
  • Core Maker of the Year – This award recognizes a maker project powered by Windows. Some devs talk about the cool stuff they could build; others just do it. This award applauds those who go beyond the traditional software interface to integrate Windows in drones, Pis, gardens, and robots to get stuff done.

In addition to these, a Ninja Cat of the Year award will be given as special recognition. Selected by the Windows team at Microsoft, this award celebrates the developer or experience that we believe most reflects what Windows is all about, empowering people of action to do great things.

Here’s what we want from you: help us pick the winners of these four awards by voting on the awards site. Take a look and tell us who you think has created the most compelling apps. Once you’ve voted, check back anytime to see how your favorites are doing. Voting ends on 4/27, so get your Ninja votes in quickly.

Managing Windows IoT Core devices with Azure IoT Hub

Device management in Windows IoT Core

In Fall 2016, Microsoft announced Azure IoT Hub device management, providing the features and extensibility model, including an SDK for a wide range of platforms, to build robust device management solutions. With the recent release of the Windows 10 Creators Update, we are excited to announce the availability of the Windows IoT Azure DM Client Library. The open source library allows developers to easily add device management capabilities to their Azure connected Windows IoT Core device. Enterprise device management for Windows has been available for many years. The Windows IoT Azure DM Client Library makes these capabilities, such as device restart, certificate and application management, as well as many others, available via Azure IoT Hub device management.

A quick introduction

IoT devices, in comparison to desktops, laptops and phones, often have much more restricted connectivity, fewer local resources and, in many cases, no UI. Remote device management also requires devices to be provisioned for a DM service, adding another challenge to device setup.

Azure IoT DM is designed for devices with resource and connectivity restrictions. Those devices will also use Azure IoT for their operation, so they need to be provisioned for Azure IoT. This makes Azure IoT DM a very attractive choice for remote device management for IoT devices.

Device management in Windows 10 is based on the Configuration Service Provider (CSP) model. A CSP is an interface in Windows that allows reading and modification of settings of a specific feature of the device. For example, a Wi-Fi profile can be configured with the Wi-Fi CSP, the Reboot CSP is used to configure reboot settings, and so on.

All the CSPs ultimately map into API calls, registry keys and changes in the file system. The CSPs raise the level of abstraction and offer a consistent interface that works on all editions of Windows – desktop, mobile and IoT. The Windows IoT Azure DM Client Library will use the same, proven infrastructure.

Windows IoT Core + Azure IoT Hub: Better together

Azure IoT Hub provides the features and an extensibility model that enable device and back-end developers to build robust device management solutions. Devices can report their state to the Azure IoT Hub and can receive desired state updates and management commands from the Azure IoT Hub.

Device management in Azure IoT is based on the concepts of the device twin and the direct methods. The device twins are JSON documents that store device state information (metadata, configurations and conditions). IoT Hub persists a device twin for each device that you connect to IoT Hub. The device twin contains the reported properties that reflect the current state of the device, and the desired properties that represent the expected configuration of the device. Direct methods allow the back-end to send a message to a connected device and receive a response.
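As an illustration, a device twin is a JSON document along these lines (a hypothetical fragment; the property names are examples, not ones prescribed by the service):

```json
{
  "deviceId": "myWindowsIoTDevice",
  "properties": {
    "desired": {
      "timeZone": "Pacific Standard Time"
    },
    "reported": {
      "timeZone": "Pacific Standard Time",
      "firmwareVersion": "10.0.15063.0"
    }
  }
}
```

The back-end writes desired properties, the device writes reported properties, and IoT Hub keeps both sides of the document in sync with the device.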

The device twin and the direct methods can be used to support the business logic of your IoT solution as well as to implement the device management operations.

The Windows IoT Azure DM Client Library connects the CSP-based device management stack in Windows IoT Core with the cloud back-end based on Azure IoT Hub. The client runs on the device and translates the direct method calls and desired properties updates to the CSP calls. The client also queries the device state using the CSP calls and translates that into reported properties for the device twin in the Azure IoT Hub.

Before an IoT device can be managed through the Azure IoT Hub, it must be registered with a unique device identity and an authentication key. The authentication key needs to be securely stored on the device to prevent accidental or malicious duplication of the device identity. In Windows 10 IoT Core the key can be stored in the TPM. How this is done is described in the previous post Building Secure Apps for Windows IoT Core.

With the device provisioned with Azure IoT Hub credentials (connection information and authentication key), managing Windows 10 IoT Core devices through Azure IoT Hub requires no additional enrollment or configuration.

In this post, we will focus mostly on the client aspects of the device management. Please refer to the general Azure IoT Hub device management documentation for a broader look at what the service provides. Below we explore how the Azure IoT Hub device twin and direct methods can be used to manage Windows IoT Core devices.

How to use the Windows IoT Azure DM Client Library

Devices connecting to Azure IoT Hub can only have one connection to the service. This means that all applications, including the DM library, must share an Azure IoT Hub connection. We provide two sample implementations you can choose between, depending on whether other applications on your device connect to the same IoT Hub as the same device.

Standalone device management client

If your device only needs Azure IoT Hub for device management and no other application will connect to the same IoT Hub using the same Azure device ID, you can use the IoTDMBackground sample to add DM capabilities to your device.

The IoTDMBackground is a background app that can be deployed on your device. The IoTDMBackground app requires the device to be securely connected to Azure IoT. Once started, the IoTDMBackground will receive direct method calls and device twin updates from the Azure IoT Hub, and perform the device management operations.

Integrated device management client

There are scenarios where the capabilities of the standalone device management client are insufficient:

  1. Some device management operations, e.g. a device reboot or an application restart, might interrupt the normal operation of the device. In cases where this is not acceptable, the device should be able to declare itself busy and decline or postpone the operation.
  2. If your app is already connected to the Azure IoT Hub (for example, sending telemetry messages, receiving direct method calls and device twin updates), it cannot share its Azure identity with another app on the system, such as the IoTDMBackground.
  3. Some IoT devices expose basic device management capabilities to the user – such as the “check for updates” button or various configuration settings. Implementing this in your app is not an easy task even if you know which API or CSP you need to invoke.

The purpose of the integrated device management client is to address these scenarios. The integrated device management client is a .NET library that links into your IoT app. The library is called the IoTDMClientLib and is part of the IoTDM.sln solution. It allows your app to declare its busy state, share its device identity with the library, and invoke some common device management operations.

To integrate the device management to your app, build the IoTDMClientLib project, which will produce the IoTDMClientLib.dll. You will reference it in your app.

The ToasterApp project in the IoTDM.sln solution is a sample application that uses the integrated client. You can study it and use it as an example, or if you prefer step-by-step instructions, follow the guidance below.

1. If your app is already connected to the Azure IoT Hub, you already have an instance of DeviceClient instantiated somewhere in your app. Normally it would look like this:

DeviceClient deviceClient =
   DeviceClient.CreateFromConnectionString(connectionString, TransportType.Mqtt);

2. Now use the DeviceClient object to instantiate the AzureIoTHubDeviceTwinProxy object for connecting your device management client to Azure IoT Hub:

IDeviceTwin deviceTwinProxy = new AzureIoTHubDeviceTwinProxy(deviceClient);

3. Your app needs to implement the IDeviceManagementRequestHandler interface which allows the device management client to query your app for busy state, app details and so on:

IDeviceManagementRequestHandler appRequestHandler = new MyAppRequestHandler(this);

You can look at ToasterDeviceManagementRequestHandler implementation for an example of how to implement the request handler interface.

Next, add the using Microsoft.Devices.Management statement at the top of your file, and the systemManagement capability to your application’s manifest (see ToasterAppPackage.appxmanifest file).

You are now ready to create the DeviceManagementClient object:

this.deviceManagementClient = await
    DeviceManagementClient.CreateAsync(deviceTwinProxy, appRequestHandler);

You can use this object to perform some common device management operations.
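For example (a sketch; the exact method name comes from the library and may change between releases, so consult the documentation on the GitHub site):

```csharp
// Ask the device to reboot immediately through the device management client.
// ImmediateRebootAsync is illustrative of the library's API surface.
await this.deviceManagementClient.ImmediateRebootAsync();
```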

Finally, we will set up the callback that handles the desired properties updates (if your application already uses the device twin, it will already have this call):

await deviceClient.SetDesiredPropertyUpdateCallback(OnDesiredPropertyUpdate, null);

The callback will be invoked for all the desired properties – those specific to device management and those that are not. This is why we need to let the device management client filter out and handle properties that it is responsible for:

public Task OnDesiredPropertyUpdate(TwinCollection desiredProperties,
        object userContext)
{
    // Let the device management client process the properties
    // specific to device management
    this.deviceManagementClient.ProcessDeviceManagementProperties(desiredProperties);

    // App developer can process all the top-level nodes here
    return Task.CompletedTask;
}

As an app developer, you’re still in control. You can see all the property updates received by the callback but delegate the handling of the device management-specific properties to the device management client, letting your app focus on its business logic.

To deploy and run your app, follow the instructions here.

The end-to-end solution

Obviously, the entire device management solution requires two parts – the client running on the device and the back-end component running in the cloud. Typically, your back-end component will consist of the Azure IoT Hub, which is the entry point into the cloud for your devices, coupled with other Azure services that support the logic of your application – data storage, data analytics, web services, etc.

Fortunately, you don’t need to build a full solution to try out your client. You can use the existing tools such as the DeviceExplorer to trigger direct method calls and device twin changes for your devices.

For example, to send the immediate reboot command to your IoT device, call microsoft.management.immediateReboot direct method on your device:
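The DeviceExplorer sends this as a direct method call; the same call can be made from your own back-end code with the Azure IoT service SDK (a sketch using the Microsoft.Azure.Devices package; the connection string and device ID are placeholders):

```csharp
// Connect to the IoT Hub with a service-side connection string.
ServiceClient serviceClient =
    ServiceClient.CreateFromConnectionString("<service-connection-string>");

// Build and invoke the reboot direct method on the target device.
var method = new CloudToDeviceMethod("microsoft.management.immediateReboot");
method.ResponseTimeout = TimeSpan.FromSeconds(30);

CloudToDeviceMethodResult result =
    await serviceClient.InvokeDeviceMethodAsync("<device-id>", method);
```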

The device management client running on the IoT device will respond to the direct method and (unless it is in busy state) proceed with rebooting the device.

The Windows IoT Azure DM Client Library supports a variety of device management operations listed in the documentation on the GitHub site. In addition to reboot management, it supports application management, updates, factory reset and more. The list of capabilities will grow as the project evolves.

The Windows IoT Azure DM Client Library includes a sample called the DM Dashboard, which hides the implementation detail of the device management operations. Unlike the Device Explorer, you don’t need to consult the documentation and manually craft JSON to use it.

Here is how you can invoke the reboot operation using the DM Dashboard tool:

The DM Dashboard is a convenient tool for testing the client side of your device management solution, but since it operates on one device at a time, it is not suitable for managing multiple devices in a production environment.

Next steps

The Windows IoT Azure DM Client Library is still in beta phase and will continue to evolve. We’re very interested in your feedback, and we want to learn about your IoT needs. So, head over to our GitHub page, clone the repo and tell us what you think.

ICYMI – Your weekly TL;DR

Busy weekend of coding ahead? Get the latest from this week in Windows Developer before you go heads down.

Standard C++ and the Windows Runtime (C++/WinRT)

The Windows Runtime (WinRT) is the technology that powers the Universal Windows Platform, letting developers write applications that are common to all Windows devices, from Xbox to PCs to HoloLens to phones. Check out how most of UWP can also be used by developers targeting traditional desktop applications.

New Year, New Dev – Windows IoT Core

Learn how easy it is to start developing applications to deploy on IoT devices such as the Raspberry Pi 3.

Project Rome for Android Update: Now with App Services Support

Project Rome developers have had a month to play with the Project Rome for Android SDK, and we hope you are as excited about its capabilities as we are! In this month’s release, see what support we bring for app services.

How the UWP Community Toolkit helps Windows developers easily create well-designed and user-friendly apps

In August 2016, we introduced the open-source UWP Community Toolkit and we recently caught up with two developers who have used the toolkit to help create their apps. Check out what they had to say.

Download Visual Studio to get started.

The Windows team would love to hear your feedback. Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

How the UWP Community Toolkit helps Windows developers easily create well-designed and user-friendly apps

In August 2016, we introduced the open-source UWP Community Toolkit. It simplifies app development and enables developers to collaborate and contribute new capabilities on top of the Windows SDK for Windows 10. Developers can leverage the UWP Community Toolkit to build UWP apps for any Windows 10 device, including PC, mobile, Xbox, the Internet of Things and HoloLens.

We recently caught up with two developers who have used the toolkit to help create their apps. Hermit Dave developed the Daily Mail Online app for Windows. It’s the official Windows app for the Daily Mail, a British newspaper. David Bottiau is a member of the GitHub organization, UWP Squad. He developed a Windows 10 app for TVShow Time, a site that helps you track your favorite TV shows.

We asked them how the UWP Community Toolkit helped them. Here’s what they had to say:

Tell us about the app you built with the help of the UWP toolkit.

Hermit Dave: I joined MailOnline in November 2015 to expand their mobile offering to Windows. After initial talks on Phone + Xbox offering, I suggested the UWP as a means to target all the platforms. After the initial release, Windows was deemed a suitable platform to try out UI + UX tweaks and the current version of the app kicked off in April 2016.

David Bottiau: I really needed a service that offered a way to track my favorite TV series and animated shows, and I recently discovered the TVShow Time service. I use the beautiful TVShow Time web app really often, and I wanted even faster, closer access to the information. So, I decided to bring the service to Windows 10 and Windows 10 Mobile using the existing API provided by the TVShow Time team.

Why did you use the UWP Toolkit to help you create your app?

HD: Offline reading and availability is a major requirement for news apps and MailOnline strives to be among the best in news applications. News content often contains images and the toolkit’s ImageEx control proved itself very handy. In addition to offline images, the ability to control cache size is important. I previously created an offline image control and a cache mechanism and the toolkit provided a simple yet very elegant solution for both.

In addition to ImageEx and ImageCache, I have used the toolkit’s CacheBase to create VideoCache (offline animated previews) and ConfigurationCache to ensure they are kept up to date. The toolkit also contains helpers like HttpHelper which makes it easy to talk to Http data sources. I use this to make API calls as needed.

DB: I started to use it to have access to a larger list of UI controls. Moreover, by simplifying or removing code for repetitive tasks, it helps me focus on the main part of app development. Whether it is C# or XAML, I can work quickly and add new features faster.

Was the UWP Toolkit simple to use?

HD: Yes. The image control can be replaced by ImageEx to provide offline capability in one line of XAML. Others, like the custom cache helpers, reduce the effort of creating similar functionality by a huge margin.

DB: The UWP Toolkit is really simple to use because the different items (Controls, Helpers) are really well written. Moreover, the documentation goes straight to the point: You can read the markdown files (in GitHub) or as a website (http://docs.uwpcommunitytoolkit.com/).

And if that isn’t clear enough, you can try the sample app, which gives a visual preview of what you can achieve with the code, a simple example you can copy and paste into your app, and a view of the documentation in case you want more explanation without switching between the app and the website.

How was the UWP Toolkit helpful to you?

HD: The toolkit helped me create offline images and custom caches for images, videos, configuration and trending data, and HttpHelper made it easy to issue API calls. In one instance, I needed a card layout with a drop shadow, and there’s a control for that, too.

DB: The UWP Toolkit gives me the ability to create beautiful apps in a short period of time and with minimum effort (minimum code). The HamburgerMenu control allowed me to create the entry point of my application where users can navigate easily between the different pages. The AdaptiveGridView control is perfect for displaying a collection of items, like the collection of TV shows I am currently watching. The control is also useful because the item’s size will be adapted to the screen size; it has never been easier to create an app that looks great on a mobile, a tablet, a PC or even a TV.

The DropShadowPanel control can add some effects to the page/items of the page. I used it to highlight important elements like my TV show collection. Another effect is Blur; to make the app even more beautiful, I added a Blur background image for each show episode so users can get visuals along with the description of the episode. And the Helpers can reduce the amount of code needed to read/save data in the application, to register background tasks or even to push notifications.

To simplify, it is the combination of the current Windows 10 SDK and the added elements in the UWP Community Toolkit that helped me achieve a really beautiful app with the least effort.

Did you run into any glitches or issues in creating your app? If so, what were they?

HD: There are always glitches in code. I often refer to them as features. I contributed to the toolkit in places I thought it could do better.

Hermit Dave 

DB: I remember that there was a bug with the HamburgerMenu in v1.1. The property SelectedIndex could not be changed in code-behind. I really wanted the ability to change the value and I did not understand why it wasn’t possible. So, I checked the GitHub repository to see if they could fix that. The fact is that it was already fixed, and the updated HamburgerMenu was then published in v1.2.

Did the UWP Toolkit make it easier for you to develop your apps and push them onto the app store? Why or why not?

HD: If I were hand-crafting the functionality in the toolkit — the bits I use — my effort would have increased significantly. The toolkit helps by offering building blocks that let you go from 0 to 60 in seconds.

DB: Whatever Windows 10 application you are doing, you’ll find at least one great thing in the toolkit. It could be a UI control, an animation/effect behavior, a simple method to access the LocalStorage, or a rich structure to create notifications.

Dave Bottiau

Do you have any suggestions on how the Toolkit could be improved?

HD: It’s a community toolkit. If there are things you think it can do better, do not hesitate to say so. Raise an issue or push a PR to make it better, and raise a UserVoice item for features you think the toolkit should add.

DB: I think developers still have to go through a lot of effort to understand and create great animations. The Composition API is a really nice improvement to the Windows SDK and it will continue to grow and offer more features — but it should also be simple to make animations. The Toolkit added some great stuff like the Blur effect and the reorder grid animation. These are good enhancements, but they can still be better. I heard that the Windows/XAML team is working on simplifying the animation system, and I am happy to hear it. I hope the Toolkit will be improved as well with this future evolution.

What advice would you give to other developers who want to develop UWP apps?

HD: The UWP SDK, the toolkit and Visual Studio make app development a joy. The toolkit allows you to re-use helpers, converters, extension methods, and components not in the SDK.

DB: If you want to develop a great UWP app and need some inspiration for User Interface, you can download the sample app of the UWP Community Toolkit and check UI Controls and Animations. Let the magic happen!

Are there any tips about the UWP toolkit that you can share with other devs? If so, what are they?

HD: The toolkit is much more than controls. There are tons of services, helpers, converters and extensions. There’s a library just for animations, which simplifies the use of the Composition API.

DB: I’ll give only one tip: Please visit the GitHub repository and share with the other contributors.

New Year, New Dev – Windows IoT Core

To wrap up the “New Year, New Dev” blog series, we’ll go into using Windows 10 IoT Core and show how easy it is to start developing applications to deploy on IoT devices such as the Raspberry Pi 3. If you haven’t had a chance to read the first two posts in this series, you can find them here:

  1. New Year, New Dev: Sharpen your C# Skills
  2. New Year, New Dev: Developing your idea into a UWP app

Let’s begin by explaining what Windows 10 IoT Core actually is.

Windows 10 IoT Core is a version of Windows 10 that is optimized to run on smaller IoT devices, with or without a display, that allows you to build apps using the rich Universal Windows Platform (UWP). In fact, one of the goals for UWP was to enable the same application to run on PC, Xbox, HoloLens, Surface Hub or IoT Core devices. This enables you, as the developer, to create a core application with custom, adaptive user experiences appropriate for these devices.

In UWP there are extension APIs available to do things that are specific to your platform or device. For example, there are extension SDKs that give you access to unique features in Windows Mobile and on Surface Hubs. The extension SDKs for Windows IoT give you access to the APIs that can manage things like lights, sensors, motors, buses and much more.

Windows IoT Core supports a wide range of devices. Here are some examples:

Out of the box, you can use the UWP programming language you’re most comfortable with to build apps for IoT Core; these languages support apps both with and without a user interface (the latter are known as “background applications”) and are shipped with Visual Studio by default.

  • C#
  • C++
  • JavaScript
  • Visual Basic

Alternatively, you can use one of the following IoT-focused languages; these languages can only be used for background applications.

  • C/C++ with Arduino wiring
  • Node.js
  • Python

To use one of the “IoT focused” languages, you’ll need to install the Windows IoT Core Project Templates Visual Studio Extension. You can download these right from Visual Studio by going to Tools > Extensions and Updates. Or, you can install it separately by downloading it (go here for VS2015 or here for VS 2017).

For the purposes of today’s post, we will show how to install Windows 10 IoT on a device, how to install the tools you’ll need, and share several sample applications to get you started.

Getting started

The IoT Core team has made it very easy to get started by providing a flow-based Get Started page on the Windows IoT Core web page. Let me take you through the steps:

Step 1:

Go to the Get Started page here and you’ll be greeted with Step 1: Select Your Hardware. Choose the device you’re using; for this post, I’ll pick Raspberry Pi 3:

Step 2:

Select how you want to install Windows 10 IoT Core. The most common option is to install it directly to a blank microSD card:

Step 3:

Next, you’ll get to pick what version of Windows 10 IoT Core you want to use.

Important Note: Normally, you’d choose the non-insider version. Only choose the Insider Preview if you have the Insider Preview UWP SDK installed in Visual Studio 2017. The SDK version has to match the OS version on the device in order to deploy to it.

Step 4:

You’re done! Now click the Next button to navigate to the next phase of the setup process, getting the tools.

Installing the tools

During this phase of Getting Started, you’ll go through four high level steps. At the last step, you’ll be running an application on your IoT device! Here are the steps:

  1. Get the Tools
  2. Set up your device
  3. Set up Visual Studio
  4. Write your first app

1 – Get the Tools

In this step, you’ll download and install an amazing tool, the Windows 10 IoT Core Dashboard (approx. 54MB). This tool has a lot of features that make using Windows 10 IoT Core much easier than it has ever been. Once you’ve installed it, find it in your Start Menu’s apps list or search for “IoT Dashboard.”

You should now see a Start page like the following:

2 – Set up your device

With the Dashboard running, you can now set up your device by clicking the “Set up a new device” button. This makes the installation process very easy: just a few selections and a click of a button.

Here’s a screenshot of the “Set up a new device” page. Take note of the version of Windows IoT Core you’re installing. The current version is 14393, otherwise known as the Anniversary Update.

Once this is done, you’re good to go! Just remove the microSD card from your PC and insert it into the Raspberry Pi and boot it up.

Note: The first time boot-up will take longer than normal because it is performing an initial configuration. Be patient and do not power down during this. If you have any trouble and it doesn’t boot, just repeat the setup again to get a fresh start.

3 – Set up Visual Studio

Now let’s review what you have installed for tools.

If you do not have Visual Studio 2017 installed

You can download and install Visual Studio 2017 Community edition, for free, from here. This is not an “express” version; the Community edition is a feature-rich version of Visual Studio with all the tools you need to develop UWP applications and much more.

When running the installer, make sure you select the Universal Windows Platform development workload to get the tools and SDK. Here’s what the installer looks like:

If you already have Visual Studio 2017 installed

If you already installed Visual Studio, then let’s check if you have the UWP tools installed. In Visual Studio, drop down the Help menu and select “About Visual Studio.” You’ll see a modal window pop up; inside the “Installed Products” area, you can scroll down to check for “Visual Studio Tools for Universal Windows Apps”:

If you do not have them installed, you can use the standalone SDK installer to install them (see UWP SDK paragraph below) or rerun the Visual Studio 2017 installer and select the “Universal Windows Platform development” workload to install it.

Note: You can use Visual Studio 2015; just make sure you’re on Update 3 in addition to having the UWP SDK installed.

UWP SDK Version

Now that you have Visual Studio 2017 and the UWP tools installed, you’ll want to have the UWP SDK version that matches the Windows 10 IoT Core version you installed. As I mentioned earlier, the current version is 14393.

If you just installed Visual Studio, this is the SDK version you already have. However, if you do need the 14393 SDK, you can get the standalone installer from here (note: if you’ve taken the Windows IoT Core Insider Preview option, you can get the Insider Preview SDK from here).

TIP: Install the IoT Core Project templates

At this point, you can build and deploy to an IoT device simply because you have the UWP SDK installed. However, you can get a productivity boost by installing the IoT Core project templates. The templates contain project types such as: Background Application, Console Application and Arduino Wiring application. Download and install the templates from here.

Write your first app

At this point, your device and your developer environment are set up. Now it’s time to start writing apps! You may be familiar with the “Hello World” app as being the first application you build when trying a new language or platform. In the world of IoT, these are known as “Hello Blinky” apps.

There are several excellent Hello Blinky sample applications, both headed and headless (when we say “headless,” we mean with no user interface; you can still have a display connected to the system if you wish), to help you get a jump-start:
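To give a flavor of what a headed “Hello Blinky” involves, here is a minimal C# sketch using the Windows.Devices.Gpio API from the IoT extension SDK. This is an illustrative sketch, not the official sample: the pin number 5 and the class name are assumptions, so check your board’s pinout and the samples linked below for complete versions.

```csharp
using Windows.Devices.Gpio;

// Minimal "Hello Blinky" sketch: toggle an LED wired to a GPIO pin.
// Pin 5 is an assumption; check your board's pinout before wiring.
public sealed class Blinky
{
    private const int LedPinNumber = 5;
    private GpioPin _ledPin;
    private GpioPinValue _value = GpioPinValue.High;

    public void InitGpio()
    {
        GpioController gpio = GpioController.GetDefault();
        if (gpio == null)
        {
            return; // No GPIO controller on this device (e.g. a desktop PC)
        }

        _ledPin = gpio.OpenPin(LedPinNumber);
        _ledPin.Write(_value);
        _ledPin.SetDriveMode(GpioPinDriveMode.Output);
    }

    // Call this from a DispatcherTimer tick (or similar) to blink the LED
    public void Toggle()
    {
        _value = (_value == GpioPinValue.High) ? GpioPinValue.Low : GpioPinValue.High;
        _ledPin.Write(_value);
    }
}
```

The same code runs unchanged on any IoT Core board that exposes a GPIO controller; on devices without one, GpioController.GetDefault() returns null, which is why the sketch checks for it.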

There are even more Microsoft authored samples located at our Windows IoT Dev Center.

You can also check out what the community has built on websites such as Hackster.io where developers open source their Windows 10 IoT Core projects, build specs and source code. There are hundreds of projects available; a few examples are:

There are unlimited possibilities with Windows 10 IoT Core, from home automation to industrial robotics or even environmental monitoring. Your app doesn’t have to be a complex system; you can build a UWP app for a smart mirror, turn on your hallway lights when motion is sensed, or use a light sensor to open your shades at dawn and close them at sunset!  We look forward to seeing what you build with Windows 10 IoT Core; send us a tweet @WindowsDev and share your creations with us!


Cognitive Services APIs: Vision

What exactly are Cognitive Services and what are they for? Cognitive Services are a set of machine learning algorithms that Microsoft has developed to solve problems in the field of Artificial Intelligence (AI). The goal of Cognitive Services is to democratize AI by packaging it into discrete components that are easy for developers to use in their own apps. Web and Universal Windows Platform developers can consume these algorithms through standard REST calls over the Internet to the Cognitive Services APIs.
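As a rough sketch of what such a REST call can look like from C# (the region-specific endpoint URL reflects the preview-era service and is an assumption here, and the subscription key is a placeholder, not a value from this post):

```csharp
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Sketch: call the Computer Vision "analyze" endpoint directly over REST.
// The endpoint URL below is illustrative; substitute your own region and key.
public static class VisionRestClient
{
    private const string Endpoint =
        "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Tags,Description";

    public static async Task<string> AnalyzeAsync(string imagePath, string subscriptionKey)
    {
        using (var client = new HttpClient())
        using (var content = new ByteArrayContent(File.ReadAllBytes(imagePath)))
        {
            // The key travels in a header; the image travels as the request body
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

            HttpResponseMessage response = await client.PostAsync(Endpoint, content);
            return await response.Content.ReadAsStringAsync(); // JSON analysis result
        }
    }
}
```

The response comes back as JSON you can parse with your favorite serializer; the NuGet wrapper used in the walkthrough later in this post does this plumbing for you.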

The Cognitive Services APIs are grouped into five categories…

  • Vision—analyze images and videos for content and other useful information.
  • Speech—tools to improve speech recognition and identify the speaker.
  • Language—understanding sentences and intent rather than just words.
  • Knowledge—tracks down research from scientific journals for you.
  • Search—applies machine learning to web searches.

So why is it worthwhile to provide easy access to AI? Anyone watching tech trends realizes we are in the middle of a period of huge AI breakthroughs right now, with computers beating chess champions and Go masters and passing Turing tests. All the major technology companies are in an arms race to hire the top AI researchers.

Along with the high-profile AI problems that researchers work on, like how to pass the Turing test and how to model computer neural networks on the human brain, are the discrete problems that developers are concerned about, like tagging our family photos and finding an even lazier way to order our favorite pizza on a smartphone. The Cognitive Services APIs are a bridge that allows web and UWP developers to use the resources of major AI research to solve developer problems. Let’s get started by looking at the Vision APIs.

Cognitive Services Vision APIs

The Vision APIs are broken out into five groups of tasks…

  • Computer Vision—Distill actionable information from images.
  • Content Moderator—Automatically moderate text, images and videos for profanity and inappropriate content.
  • Emotion—Analyze faces to detect a range of moods.
  • Face—Identify faces and similarities between faces.
  • Video—Analyze, edit and process videos within your app.

Because the Computer Vision API is a huge topic on its own, this post will mainly deal with its capabilities as an entry point to the others. The description of how to use it, however, will give you a good sense of how to work with the other Vision APIs.

Note: Many of the Cognitive Services APIs are currently in preview and are undergoing improvement and change based on user feedback.

One of the biggest things that the Computer Vision API does is tag and categorize an image based on what it can identify inside that image. This is closely related to a computer vision problem known as object recognition. In its current state, the API recognizes about 2000 distinct objects and groups them into 87 classifications.

Using the Computer Vision API is pretty easy. There are even samples available for using it on a variety of development platforms including NodeJS, the Android SDK and the Swift SDK. Let’s do a walkthrough of building a UWP app with C#, though, since that’s the focus of this blog.

The first thing you need to do is register at the Cognitive Services site and request a key for the Computer Vision Preview (by clicking on one of the “Get Started for Free” buttons).

Next, create a new UWP project in Visual Studio and add the ProjectOxford.Vision NuGet package by opening Tools | NuGet Package Manager | Manage Packages for Solution and selecting it. (Project Oxford was an earlier name for the Cognitive Services APIs.)

For a simple user interface, you just need an Image control to preview the image, a Button to send the image to the Computer Vision REST Services and a TextBlock to hold the results. The workflow for this app is to select an image -> display the image -> send the image to the cloud -> display the results of the Computer Vision analysis.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Grid.RowDefinitions>
        <RowDefinition Height="9*"/>
        <RowDefinition Height="*"/>
    </Grid.RowDefinitions>
    <Border BorderBrush="Black" BorderThickness="2">
        <Image x:Name="ImageToAnalyze" />
    </Border>
    <Button x:Name="AnalyzeButton" Content="Analyze" Grid.Row="1" Click="AnalyzeButton_Click"/>
    <TextBlock x:Name="ResultsTextBlock" TextWrapping="Wrap" Grid.Column="1" Margin="30,5"/>
</Grid>

When the Analyze Button gets clicked, the handler in the Page’s code-behind will open a FileOpenPicker so the user can select an image. In the ShowPreviewAndAnalyzeImage method, the returned image is used as the image source for the Image control.

readonly string _subscriptionKey;

public MainPage()
{
    this.InitializeComponent();

    //set your key here
    _subscriptionKey = "b1e514ef0f5b493xxxxx56a509xxxxxx";
}

private async void AnalyzeButton_Click(object sender, RoutedEventArgs e)
{
    var openPicker = new FileOpenPicker
    {
        ViewMode = PickerViewMode.Thumbnail,
        SuggestedStartLocation = PickerLocationId.PicturesLibrary
    };
    var file = await openPicker.PickSingleFileAsync();

    if (file != null)
    {
        await ShowPreviewAndAnalyzeImage(file);
    }
}

private async Task ShowPreviewAndAnalyzeImage(StorageFile file)
{
    //preview image
    var bitmap = await LoadImage(file);
    ImageToAnalyze.Source = bitmap;

    //analyze image
    var results = await AnalyzeImage(file);

    //"fr", "ru", "it", "hu", "ja", etc...
    var ocrResults = await AnalyzeImageForText(file, "en");

    //parse result
    ResultsTextBlock.Text = ParseResult(results) + "\n\n " + ParseOCRResults(ocrResults);
}

The real action happens when the returned image then gets passed to the VisionServiceClient class included in the Project Oxford NuGet package you imported. The Computer Vision API will try to recognize objects in the image you pass to it and recommend tags for your image. It will also analyze the image properties, color scheme, look for human faces and attempt to create a caption, among other things.

private async Task<AnalysisResult> AnalyzeImage(StorageFile file)
{
    VisionServiceClient visionServiceClient = new VisionServiceClient(_subscriptionKey);

    using (Stream imageFileStream = await file.OpenStreamForReadAsync())
    {
        // Analyze the image for all visual features
        VisualFeature[] visualFeatures = new VisualFeature[] { VisualFeature.Adult, VisualFeature.Categories,
            VisualFeature.Color, VisualFeature.Description, VisualFeature.Faces, VisualFeature.ImageType,
            VisualFeature.Tags };
        AnalysisResult analysisResult = await visionServiceClient.AnalyzeImageAsync(imageFileStream, visualFeatures);
        return analysisResult;
    }
}

And it doesn’t stop there. With a few lines of code, you can also use the VisionServiceClient class to look for text in the image and then return anything that the Computer Vision API finds. This OCR functionality currently recognizes about 26 different languages.

private async Task<OcrResults> AnalyzeImageForText(StorageFile file, string language)
{
    //language = "fr", "ru", "it", "hu", "ja", etc...
    VisionServiceClient visionServiceClient = new VisionServiceClient(_subscriptionKey);

    using (Stream imageFileStream = await file.OpenStreamForReadAsync())
    {
        OcrResults ocrResult = await visionServiceClient.RecognizeTextAsync(imageFileStream, language);
        return ocrResult;
    }
}

Combining the image analysis and text recognition features of the Computer Vision API will return results like that shown below.

The power of this particular Cognitive Services API is that it will allow you to scan your device folders for family photos and automatically start tagging them for you. If you add in the Face API, you can also tag your photos with the names of family members and friends. Throw in the Emotion API and you can even start tagging the moods of the people in your photos. With Cognitive Services, you can take a task that normally requires human judgement and combine it with the indefatigability of a machine (in this case a machine that learns) in order to perform this activity quickly and indefinitely on as many photos as you own.

Wrapping Up

In this first post in the Cognitive API series, you received an overview of Cognitive Services and what it offers you as a developer. You also got a closer look at the Vision APIs and a walkthrough of using one of them. In the next post, we’ll take a closer look at the Speech APIs. If you want to dig deeper on your own, here are some links to help you on your way…

Announcing Project Rome Android SDK

Project Rome Overview

Project Rome is a platform for creating experiences that transcend a single device and driving up user engagement – empowering a developer to create human-centric scenarios that move with the user and blur the lines between their devices regardless of form factor or platform.

We first shipped Project Rome capabilities for Remote Launch and Remote App Services in Windows 10 Anniversary Update.

Project Rome Android SDK

Today we are excited to announce the release of the Android version of the Project Rome SDK.  This Android SDK works both with Java and with Xamarin.

You can download the Project Rome SDK for Android here.

Capabilities exposed through the Project Rome Android SDK

Let’s take an example of an app that might need this capability. In the last blog post, we talked about Paul and his Contoso Music App. In that scenario, Paul had a UWP music player app, and he wanted to make sure that his users’ experience carried over as they moved between devices.

If we take that example further, we can imagine that Paul has a Contoso Music App for Android as well. Paul notices that most of his users use his app on Windows, and on Android. These are the same users logged in with the same MSA. Paul wants to make sure that his users’ experience translates well when they move between their Android and Windows devices. Paul also notices that many of his Windows users run his UWP app on their Xbox at home.

With the Project Rome Android SDK Paul can use:

  1. The Remote Systems API to discover other Windows devices that the user owns. The Remote Systems APIs will allow the Contoso Music app to discover these devices on the same network, and through the cloud.
  2. The Remote Launch API to launch his app on a discovered Windows device.
  3. Once his app is launched on the other device, Paul can use remote app services to control his app running on Windows from his Android device. (This functionality is not part of today’s release, but it is coming soon in a future release of the Android SDK.)

Thus, using the Project Rome Android SDK, Paul can bridge the experience gap that exists as his users move between their Android and Windows devices.

Capability Walkthrough

We will briefly walk through both a Java and Xamarin example.  We have full examples of UWP here: https://github.com/Microsoft/Windows-universal-samples/tree/dev/Samples/RemoteSystems and Android here: https://github.com/Microsoft/project-rome/tree/master/Project%20Rome%20for%20Android%20(preview%20release).

Click on the image below to see the Android Sample app in action:

Using Java

Here are snippets in Java from our sample of how you’d use the Project Rome Android SDK.  The first step to get going with the Android SDK is to initialize the platform, where you’ll handle authentication.

Platform.initialize(getApplicationContext(), new IAuthCodeProvider() {
    @Override
    public void fetchAuthCodeAsync(String oauthUrl, Platform.IAuthCodeHandler authCodeHandler) {
        performOAuthFlow(oauthUrl, authCodeHandler);
    }
});

Using OAuth you’ll retrieve an auth_code via a WebView:

public void performOAuthFlow(String oauthUrl, final Platform.IAuthCodeHandler authCodeHandler) {

    WebView web;
    web = (WebView) _authDialog.findViewById(R.id.webv);
    web.setWebChromeClient(new WebChromeClient());

    // Get auth_code
    WebViewClient webViewClient = new WebViewClient() {
        boolean authComplete = false;

        @Override
        public void onPageFinished(WebView view, String url) {
            super.onPageFinished(view, url);

            if (url.startsWith(REDIRECT_URI)) {
                Uri uri = Uri.parse(url);
                String code = uri.getQueryParameter("code");
                String error = uri.getQueryParameter("error");
                if (code != null && !authComplete) {
                    authComplete = true;
                    authCodeHandler.onAuthCodeFetched(code);
                } else if (error != null) {
                    // Handle error case
                }
            }
        }
    };
    web.setWebViewClient(webViewClient);
    web.loadUrl(oauthUrl);
}


Now, discover devices:

RemoteSystemDiscovery.Builder discoveryBuilder;
discoveryBuilder = new RemoteSystemDiscovery.Builder().setListener(new IRemoteSystemDiscoveryListener() {
    @Override
    public void onRemoteSystemAdded(RemoteSystem remoteSystem) {
        Log.d(TAG, "RemoteSystemAdded = " + remoteSystem.getDisplayName());
        devices.add(new Device(remoteSystem));
        // Sort the list of discovered devices by name
        Collections.sort(devices, new Comparator<Device>() {
            @Override
            public int compare(Device d1, Device d2) {
                return d1.getName().compareTo(d2.getName());
            }
        });
    }
});

Remote launch a URI to your device:

RemoteSystemConnectionRequest connectionRequest = new RemoteSystemConnectionRequest(remoteSystem);
String url = "http://msn.com";

new RemoteLauncher().LaunchUriAsync(connectionRequest, url,
    new IRemoteLauncherListener() {
        @Override
        public void onCompleted(RemoteLaunchUriStatus status) {
            // Check status to see whether the remote launch succeeded
        }
    });

Using Xamarin

Similarly, here are snippets in Xamarin.

You will first initialize the Connected Devices Platform:

Platform.FetchAuthCode += Platform_FetchAuthCode;
var result = await Platform.InitializeAsync(this.ApplicationContext, CLIENT_ID);

Using OAuth you’ll retrieve an auth_code:

private async void Platform_FetchAuthCode(string oauthUrl)
{
    var authCode = await AuthenticateWithOAuth(oauthUrl);
}

Now, discover devices:

private RemoteSystemWatcher _remoteSystemWatcher;

private void DiscoverDevices()
{
    _remoteSystemWatcher = RemoteSystem.CreateWatcher();
    _remoteSystemWatcher.RemoteSystemAdded += (sender, args) =>
    {
        Console.WriteLine("Discovered Device: " + args.P0.DisplayName);
    };
    _remoteSystemWatcher.Start();
}

Finally, connect and launch URIs using LaunchUriAsync:

private async void RemoteLaunchUri(RemoteSystem remoteSystem, Uri uri)
{
    var launchUriStatus = await RemoteLauncher.LaunchUriAsync(new RemoteSystemConnectionRequest(remoteSystem), uri);
}

If you want to see the Xamarin code, please head over to https://github.com/Microsoft/project-rome/tree/master/xamarin.

Wrapping Up

Project Rome breaks down barriers across all Windows devices and creates experiences that are no longer constrained to a single device. With today’s announcement, we are bringing this capability to Android devices as well. The Remote Systems API available in Windows 10 is a key piece of Project Rome that provides exposure of the device graph and the ability to connect and command – this is fundamental for driving user engagement and productivity for applications across all devices.

To learn more and browse sample code, including the snippets shown above, please check out the following articles and blog posts:

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

Using SQLite databases in UWP apps

For many developers, SQLite has become the preferred client-side technology for data storage. It is a server-less, embedded, open-source database engine that satisfies most local data access scenarios. There are numerous advantages that come with its use, many of which are explained in the SQLite about page.

Since the Windows 10 Anniversary Update (Build 14393), SQLite has also shipped as part of the Windows SDK. This means that when you are building your Universal Windows Platform (UWP) app that runs across the different Windows device form factors, you can take advantage of the SDK version of SQLite for local data storage. This comes with some advantages:

  • Your application size is reduced, since you don’t have to download your own SQLite binary and package it as part of your application
    • Note: Microsoft.Data.SQLite (used in the example below) currently has an issue where both SQLite3.dll and WinSQLite.dll are loaded in memory whenever a .NET Native version of your application is run. This is a tracked issue that will be addressed in subsequent updates of the library.
  • You can depend on the Windows team to update the version of SQLite running on the operating system with every release of Windows.
  • Application load time has the potential to be faster since the SDK version of SQLite will likely already be loaded in memory.

Below, we provided a quick coding example on how to consume the SDK version of SQLite in your C# application.

Note: Since the Windows SDK version of SQLite has only been available since the Windows 10 Anniversary Update, it can only be used for UWP apps targeting Build 14393 or higher.

C# Example

In this example, we will build a UWP application that will allow users to input text into an app local database. The goal is to provide developers with concise guidance on how to use the SQLite binary that’s shipped as part of the Windows SDK. Therefore this code sample is meant to be as simple as possible, so as to provide a foundation that can be further built upon.

An example of the end product is shown below:

SQLite C# API Wrappers

As mentioned in the SQLite documentation, the API provided by SQLite is fairly low-level and can add an additional level of complexity for the developer. Because of this, many open-source libraries have been produced to act as wrappers around the core SQLite API. These libraries abstract away a lot of the core details behind SQLite, allowing developers to more directly deal with executing SQL statements and parsing the results.

For SQLite consumption across Windows, we recommend the open-source Microsoft.Data.Sqlite library built by the ASP.NET team. It is actively being maintained and provides an intuitive wrapper around the SQLite API. The rest of the example assumes use of the Microsoft.Data.Sqlite library.

Alternative SQLite wrappers are also linked in the “Additional Resources” section below.

Visual Studio set-up

The packages used in this sample have a dependency on NuGet version 3.5 or greater. You can check your version of NuGet by going to Help > About Microsoft Visual Studio and looking through the Installed Products list for NuGet Package Manager. You can go to the NuGet download page and grab the version 3.5 VSIX update if you have a lower version.

Note: Visual Studio 2015 Update 3 is pre-installed with NuGet version 3.4, and will likely require an upgrade. Visual Studio 2017 RC is installed with NuGet version 4.0, which works fine for this sample.

Adding Microsoft.Data.Sqlite and upgrading the .NET Core template

The Microsoft.Data.Sqlite package relies on at least the 5.2.2 version of .NET Core for UWP, so we’ll begin by upgrading this:

  • Right-click on References > Manage NuGet Packages
  • Under the Installed tab, look for the Microsoft.NETCore.UniversalWindowsPlatform package and check the version number on the right-hand side. If it’s not up to date, you’ll be able to update to version 5.2.2 or higher.

Note: Version 5.2.2 is the default for VS 2017 RC. Therefore, this step is not required if you are using this newest version of Visual Studio.

To add the Microsoft.Data.Sqlite NuGet package to your application, follow a similar pattern:

  • Right-click on References > Manage NuGet Packages
  • Under the Browse tab, search for the Microsoft.Data.Sqlite package and install it.


Application User Interface

We’ll start off by making a simple UI for our application so we can see how to add and retrieve entries from our SQLite database.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <StackPanel>
        <TextBox Name="Input_Box"/>
        <Button Click="Add_Text">Add</Button>
        <ListView Name="Output">
            <ListView.ItemTemplate>
                <DataTemplate><TextBlock Text="{Binding}"/></DataTemplate>
            </ListView.ItemTemplate>
        </ListView>
    </StackPanel>
</Grid>

There are three important parts to our application’s interface:

  1. A text box that allows us to take text from the user.
  2. A button linked to an event for pulling the text and placing it in the SQLite database.
  3. An ItemTemplate to show previous entries in the database.

Code Behind for Application

In the App.xaml.cs and MainPage.xaml.cs files generated by Visual Studio, we’ll start by importing the Microsoft.Data.Sqlite namespaces that we’ll be using.

using Microsoft.Data.Sqlite;
using Microsoft.Data.Sqlite.Internal;

Then as part of the app constructor, we’ll run a “CREATE TABLE IF NOT EXISTS” command to guarantee that the SQLite .db file and table are created the first time the application is launched.

public App()
{
    this.InitializeComponent();
    this.Suspending += OnSuspending;
    SqliteEngine.UseWinSqlite3(); //Configure the library to use the SDK version of SQLite
    using (SqliteConnection db = new SqliteConnection("Filename=sqliteSample.db"))
    {
        db.Open();
        String tableCommand = "CREATE TABLE IF NOT EXISTS MyTable (Primary_Key INTEGER PRIMARY KEY AUTOINCREMENT, Text_Entry NVARCHAR(2048) NULL)";
        SqliteCommand createTable = new SqliteCommand(tableCommand, db);
        try { createTable.ExecuteReader(); }
        catch (SqliteException e) { /*Do nothing*/ }
    }
}

There are a couple of points worth noting in this code:

  1. We make a call to SqliteEngine.UseWinSqlite3() before making any other SQL calls, which guarantees that the Microsoft.Data.Sqlite framework will use the SDK version of SQLite as opposed to a local version.
  2. We then open a connection to a SQLite .db file. The name of the file passed as a String is your choice, but should be consistent across all SqliteConnection objects. This file is created on the fly the first time it’s called, and is stored in the application’s local data store.
  3. After establishing the connection to the database, we instantiate a SqliteCommand object passing in a String representing the specific command and the SqliteConnection instance, and call execute.
  4. We place the ExecuteReader() call inside a try-catch block because SQLite throws a SqliteException whenever it can’t execute the SQL command. If no exception is thrown, the command went through correctly.

Next, we’ll add code in the View’s code-behind file to handle the button-clicked event. This will take text from the text box and put it into our SQLite database.

private void Add_Text(object sender, RoutedEventArgs e)
{
    using (SqliteConnection db = new SqliteConnection("Filename=sqliteSample.db"))
    {
        db.Open();
        SqliteCommand insertCommand = new SqliteCommand();
        insertCommand.Connection = db;
        //Use a parameterized query to prevent SQL injection attacks
        insertCommand.CommandText = "INSERT INTO MyTable VALUES (NULL, @Entry);";
        insertCommand.Parameters.AddWithValue("@Entry", Input_Box.Text);
        try { insertCommand.ExecuteReader(); }
        catch (SqliteException error) { /*Handle error*/ }
    }
    Output.ItemsSource = Grab_Entries();
}

As you can see, this isn’t drastically different from the SQLite code explained in the app’s constructor above. The only major deviation is the use of parameters in the query to prevent SQL injection attacks. You will find that commands that make changes to the database (i.e. creating tables or inserting entries) mostly follow the same logic.
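For instance, a delete operation would follow the same parameterized pattern. The sketch below assumes a hypothetical Delete_Text handler and WHERE clause that are not part of the sample:

```csharp
//Hypothetical sketch: deleting entries follows the same parameterized pattern
private void Delete_Text(object sender, RoutedEventArgs e)
{
    using (SqliteConnection db = new SqliteConnection("Filename=sqliteSample.db"))
    {
        db.Open();
        SqliteCommand deleteCommand = new SqliteCommand();
        deleteCommand.Connection = db;
        //Parameterized query, just like the INSERT above
        deleteCommand.CommandText = "DELETE FROM MyTable WHERE Text_Entry = @Entry;";
        deleteCommand.Parameters.AddWithValue("@Entry", Input_Box.Text);
        try { deleteCommand.ExecuteNonQuery(); }
        catch (SqliteException error) { /*Handle error*/ }
    }
    Output.ItemsSource = Grab_Entries();
}
```

ExecuteNonQuery() is used here since a DELETE returns no result set, but ExecuteReader() works for modifying commands as well, as shown earlier.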

Finally, we go to the implementation of the Grab_Entries() method, where we grab all the entries from the Text_Entry column and fill in the XAML template with this information.

private List<String> Grab_Entries()
{
    List<String> entries = new List<string>();
    using (SqliteConnection db = new SqliteConnection("Filename=sqliteSample.db"))
    {
        db.Open();
        SqliteCommand selectCommand = new SqliteCommand("SELECT Text_Entry from MyTable", db);
        SqliteDataReader query;
        try { query = selectCommand.ExecuteReader(); }
        catch (SqliteException error) { /*Handle error*/ return entries; }
        while (query.Read())
            entries.Add(query.GetString(0));
    }
    return entries;
}

Here, we take advantage of the SqliteDataReader object returned from the ExecuteReader() method to run through the results and add them to the List we eventually return. There are two methods worth pointing out:

  1. The Read() method advances through the rows returned from the executed SQLite command, and returns a Boolean indicating whether there are more rows to read (true if there are more rows left, and false if you’ve reached the end).
  2. The GetString() method returns the value of the specified column as a String. It takes one parameter, an int representing the zero-based column ordinal. There are similar methods, such as GetDateTime() and GetBoolean(), that you can use based on the data type of the column you are dealing with.
    1. The ordinal parameter isn’t as relevant in this example since we are selecting all the entries in a single column. However, when multiple columns are part of the query, the ordinal represents the column you are pulling from. So if we selected both Primary_Key and Text_Entry, then GetString(0) would return the value of Primary_Key as a String and GetString(1) would return the value of Text_Entry as a String.
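To illustrate the ordinal parameter, a read loop over both columns might look like the sketch below (the sample itself only selects a single column, so this two-column SELECT is illustrative):

```csharp
//Hypothetical sketch: reading two columns by zero-based ordinal
SqliteCommand selectCommand = new SqliteCommand("SELECT Primary_Key, Text_Entry FROM MyTable", db);
SqliteDataReader query = selectCommand.ExecuteReader();
while (query.Read())
{
    String key = query.GetString(0);  //Primary_Key as a String
    String text = query.GetString(1); //Text_Entry as a String
}
```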

And that’s it! You can now build your application and add any text you like into your SQLite database. You can even close and open your application to see that the data persists.

A link to the full code can be found at: https://github.com/Microsoft/windows-developer-blog-samples/tree/master/Samples/SQLiteSample

Moving Forward

There are plenty of additions that you can make to tailor this sample to your needs:

  • Adding more tables and more complicated queries.
  • Providing more sanitation over the text entries to prevent faulty user input.
  • Communicating with your database in the cloud to propagate information across devices.
  • And so much more!

What about Entity Framework?

For those developers looking to abstract away particular database details, Microsoft’s Entity Framework provides a great model that lets you work at the “Object” layer as opposed to the database access layer. You can create models for your database using code, or visually define your model in the EF designer. Then Entity Framework makes it super-easy to generate a database from your defined object model. It’s also possible to map your models to existing databases you may have already created.

SQLite is one of many database back-ends that Entity Framework is configured to work with. This documentation provides an example to work from.
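As a rough sketch of what this looks like, the sample’s table could be modeled with EF Core’s SQLite provider. The Entry and SampleContext class names here are hypothetical, and the sketch assumes the Microsoft.EntityFrameworkCore.Sqlite package is installed:

```csharp
using System.ComponentModel.DataAnnotations;
using Microsoft.EntityFrameworkCore;

//Hypothetical model for the sample's MyTable
public class Entry
{
    [Key]
    public int Primary_Key { get; set; }
    public string Text_Entry { get; set; }
}

public class SampleContext : DbContext
{
    public DbSet<Entry> Entries { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlite("Filename=sqliteSample.db");
}
```

With a context like this, context.Database.EnsureCreated() takes the place of the manual CREATE TABLE command, and an insert becomes context.Entries.Add(...) followed by context.SaveChanges() — no hand-written SQL required.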


From embedded applications for Windows 10 IoT Core to a cache for enterprise relational database server (RDBMS) data, SQLite is the premier choice for any application that needs local storage support. SQLite’s serverless and self-contained architecture makes it compact and easy to manage, while its tried and tested API surface, coupled with its massive community support, provides additional ease of use. And since it ships as part of Windows 10, you can have peace of mind knowing that you’re always using an up-to-date version of the binary.

As always, please leave any questions in the comments section, and we’ll try our best to answer them. Additional resources are also linked below.

Additional Resources