Tag Archives: Inking

Calling all game devs: The Dream.Build.Play 2017 Challenge is Here!

Dream.Build.Play is back! The long-running indie game development contest was on hiatus for a few years, so it’s high time for it to make a resounding return.

The Dream.Build.Play 2017 Challenge is the new contest: It just launched on June 27, and it challenges you to build a game and submit it by December 31 in one of four categories. We’re not super-picky: you can choose whatever technology you want, so long as your game falls into one of the categories and you publish it as a Universal Windows Platform (UWP) game. It’s up to you to build a quality game that people will line up to play.

The four categories are:

Cloud-powered game – Grand Prize: $100,000 USD

Azure Cloud Services hands you a huge amount of back-end power and flexibility, and we think it’s cool (yes, we’re biased). So, here’s your shot at trying Azure out and maybe even winning big. Build a game that uses Azure Cloud Services on the back end, such as Service Fabric, Cosmos DB, containers, VMs, storage and analytics. Judges will give higher scores to games that use multiple services in creative ways, and will award bonus points for Mixer integration.

PC game – Grand Prize: $50,000 USD

Building on Windows 10, for Windows 10? This is the category for you. Create your best UWP game that lives and breathes on Windows 10 and is available to the more than 450 million users through the Windows Store. It’s simple: Create a game with whatever technology you want and publish it in the Windows Store. We’ll look favorably on games that add Windows 10 features such as Cortana or Inking because we really want to challenge you.

Mixed Reality game – Grand Prize: $50,000

Oh, so you want to enhance this world you live in with something a little…augmented? Virtual? Join us in the Mixed Reality challenge and build a volumetric experience that takes advantage of 3D content in a virtual space. You’ll need to create your game for Windows Mixed Reality, but you can use technology like Unity to get you kickstarted. Oh, and don’t forget the audio to really immerse us in your world.

Console game – Grand Prize: $25,000

Console gamers unite! Want to try your hand at building a game for Xbox? This category is your jam. Your UWP game will be built for the Xbox One console family and must incorporate the Xbox Live Creators Program with at least Xbox Live presence. Consideration will be given to games that incorporate more Xbox Live services, such as leaderboards and statistics.

There are some important dates to be aware of:

  • June 27: Competition opens for registration
  • August 2: Team formation and game submission period opens
  • December 31: Game submission period closes
  • January 2018: Finalists announced
  • March 2018: Winners awarded

We have big things planned for you. Maybe some additional contests and challenges, maybe some extra-cool prizes for the finalists, maybe some extra-cool interviews and educational materials. Once you register, we’ll keep you updated via email, but also keep an eye on our Windows Developer social media accounts.

As I mentioned earlier, you can pretty much use whatever technology you want. Create something from the ground up in JavaScript, XAML or C++ and DirectX. Leverage one of our great middleware partners like Unity, GameMaker, Cocos2D or MonoGame. Or do a bit of both: do your own thing and incorporate APIs from Mixer, Vungle or any one (or more) of our other partners. The biggest thing we want from you is a fun game that’s so enjoyable for us to play that we forget we’re judging it!

Speaking of that, you might be wondering how we judge the games. We have four “big bucket” criteria for you to aim for:

  • Fun Factor – 40%: Bottom line – your game needs to be fun. That doesn’t mean it has to be cutesy or simple. Fun comes in many forms, but we can’t forget what we’re aiming for here – a great game. Take us for a ride!
  • Innovation – 30%: And while you’re taking us on that ride, surprise us! We’re not looking for a clone of an existing game or a tired theme that has been done a bazillion times before. Mash-up two genres. Take a theme and turn it on its head. Don’t restrict your innovation to the game, but also the technology you’re using and how you’re using it. Think outside the box when you incorporate Windows features, or how you can creatively use a service like Mixer.
  • Production Quality – 20%: Games have to be fun and we want them to be innovative, but if they don’t run, then they’re just not ready to be called a game. This scoring criterion is all about making sure your framerate is right, you have audio where you should, you’ve catered for network instability and more. Give us every opportunity to get to your game and enjoy it the way you intended.
  • Business Viability/Feasibility – 10%: And of course, what’s your plan to engage your gaming customers? Do you have a good revenue-generating plan (e.g., in-app purchases, premium charges, marketing, rollouts, etc.)? That’s stuff you might not normally think about, but we’re gonna make you. Because we care.

If you want to get started with UWP game development, you can try our Game Development Guide.

Want more? Check out the introductory .GAME episode here:

So, what are you waiting for? Get in there and register!

See What’s New with Windows Ink in the Windows 10 Creators Update

Windows Ink is about transforming the way we think about computers, from a tool that is great at getting things done to one that channels your personality and your emotions into the things you create. It’s about bringing back the human aspects that a mouse and keyboard (and even touch) cannot fully express; it’s about making personal computers more personal, an extension of yourself rather than just a tool. We want you to feel empowered to create from the moment you pick up the pen, and to have the confidence that Windows understands you and knows what you want to do, by understanding your handwriting, your words and your expression. This is the journey we’re on.

With the Creators Update, Windows Ink is now better than ever! When used with the Surface Dial, it allows you to discover new ways to work and interact with Windows. With Windows Ink, we continue to make it possible for you to do more than you can with pen and paper. Applications like Photos and Maps have added incredible inking functionality in the last year, and continue to evolve and expand. With Paint 3D in the Creators Update, Windows Ink can now create 3D objects! As we evolve what ink means to users, we’re also introducing new Smart Ink capabilities to Windows Ink. These capabilities allow developers to understand the ink being laid down by the user, using AI to help create, connect and complete user actions on ink. We’ve also improved and added features to the building blocks for Windows Ink, introducing new stencils and adding tilt support to create a richer drawing experience.

Devices that support the pen on Windows have also doubled in the last year, and are on track to double again in the next year! We’re seeing high demand not just for devices, but also for applications that support ink. To make it easier to find compatible pens, Wacom has partnered with us to develop the Bamboo Ink pen. This pen will be in market this summer and supports almost all pen-capable Windows PCs. It features the Microsoft Pen Protocol (MPP), which is based on Surface Pen technology. We are also excited that the Surface Dial is now available in more countries, including Australia, Canada and New Zealand, giving more people an opportunity to try this incredible new input device. In addition, new hardware from our OEM partners, like the Dell Canvas 27, is shipping soon and takes advantage of the same RadialController APIs that are used for the Dial. If you build for the Surface Dial today, you are ready for all the new hardware that our OEM partners will bring to the ecosystem.

The progress we’ve made with Windows Ink would not have been possible without the feedback and passion you developers bring to us. With over a thousand inking applications in the Store and growing every day, and with well over half of the top 10 paid Store apps being ink apps, there is incredible enthusiasm and interest in this space. This is an incredible opportunity that you have embraced with us, and it inspires us to do more in each Windows release.

What’s new with Windows Ink platform?

Ink is the ultimate way humans can express themselves. It opens up new opportunities for application developers to differentiate and helps make their applications stand out. From the latest fads like adult coloring books, to simple games like tic-tac-toe, to applications that help you organize your life, there is just so much opportunity to build the next big thing in the inking space. We also know that people who use Windows Ink are more satisfied with their experience, actively look for inking support, and buy more inking applications. From the platform perspective, there are two ways we help developers:

  • Make it as easy and quick as possible for a developer to add inking to their application by providing controls that can be dropped into any application to get Windows Ink support.
  • Provide the most flexible platform building blocks for developers to innovate upon. This gives you the flexibility to choose where to start developing for Windows Ink.

Introducing Smart Ink

Let’s start with a new building block that developers have access to in the Creators Update: Ink Analysis, the first of the family of Smart Ink capabilities that we are bringing to the platform. Smart Ink brings AI technology not just to understand what you write, but also to help connect the dots to what you may want to do. With Ink Analysis, it starts simple, with recognizing shapes and making that square you drew more perfect, but it can also do much more, like understanding that you wrote words in squares and turning them into an org chart using knowledge about your organization. Our goal is to understand user intent and empower developers to turn it into rich digital constructs, as well as to leverage understanding from all parts of the system. Ink Analysis allows any developer to understand the ink they capture, whether it is handwriting, shapes, phone numbers, stock symbols, lists, document structure and more. This is the same technology we debuted in Sticky Notes in the Windows 10 Anniversary Update, and now it’s available for you to use! We can’t wait to see what you can do with this technology.

Here is an example of how to use Ink Analysis to recognize shapes.  For this snippet, we’ll use DirectInk to handle rendering the ink strokes.  Start by initializing an InkAnalyzer and connecting it with InkPresenter:

private InkAnalyzer inkAnalyzer;

private void Initialize()
{
    inkAnalyzer = new InkAnalyzer();
    inkCanvas.InkPresenter.StrokesCollected += InkPresenter_StrokesCollected;
    inkCanvas.InkPresenter.StrokesErased += InkPresenter_StrokesErased;
}

// Whenever the user draws a new stroke, you copy the stroke into Ink Analyzer’s stroke collection
private void InkPresenter_StrokesCollected(InkPresenter sender, InkStrokesCollectedEventArgs args)
{
    inkAnalyzer.AddDataForStrokes(args.Strokes);
}

// When a stroke is erased in InkCanvas, remove the same stroke from Ink Analyzer's collection.
private void InkPresenter_StrokesErased(InkPresenter sender, InkStrokesErasedEventArgs args)
{
    foreach (var stroke in args.Strokes)
    {
        inkAnalyzer.RemoveDataForStroke(stroke.Id);
    }
}

Next, you trigger the analysis. Commonly this is done via explicit user action (e.g. the user clicks a button) or after the user has been idle for a while.

var result = await inkAnalyzer.AnalyzeAsync();
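One way to implement the “after the user has been idle” trigger is with a DispatcherTimer that restarts on every new stroke. This is a sketch; the one-second interval and handler names are illustrative choices, not part of the Ink Analysis API:

```csharp
private DispatcherTimer idleTimer;

private void InitializeIdleAnalysis()
{
    // Wait one second after the last stroke before analyzing.
    idleTimer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) };
    idleTimer.Tick += IdleTimer_Tick;
}

private void OnStrokeCollected(InkPresenter sender, InkStrokesCollectedEventArgs args)
{
    inkAnalyzer.AddDataForStrokes(args.Strokes);

    // Restart the idle timer; analysis runs only after a pause in inking.
    idleTimer.Stop();
    idleTimer.Start();
}

private async void IdleTimer_Tick(object sender, object e)
{
    idleTimer.Stop();
    if (!inkAnalyzer.IsAnalyzing)
    {
        var result = await inkAnalyzer.AnalyzeAsync();
    }
}
```

The IsAnalyzing check avoids kicking off a second analysis pass while one is still in flight.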

The result is a tree representation of the whole document with different kinds of nodes, such as paragraph, line, list, word, and drawing. If for instance you want to find all the shapes in the ink, you can with the code below:

IReadOnlyList<IInkAnalysisNode> drawings = inkAnalyzer.AnalysisRoot.FindNodes(InkAnalysisNodeKind.InkDrawing);
foreach (IInkAnalysisNode drawing in drawings)
{
    var shape = (InkAnalysisInkDrawing)drawing;
    switch (shape.DrawingKind)
    {
        case InkAnalysisDrawingKind.Circle:
            // The user drew a circle. You can replace it with a perfect circle that goes through shape.Points.
            break;
        case InkAnalysisDrawingKind.Rectangle:
            // The user drew a rectangle.
            // See the InkAnalysisDrawingKind enum for the full list of supported shapes.
            break;
    }
} 
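Ink Analysis is not limited to shapes; the same result tree exposes handwriting. As a sketch (assuming the same inkAnalyzer instance as above), you can pull recognized words out of the tree:

```csharp
var result = await inkAnalyzer.AnalyzeAsync();
if (result.Status == InkAnalysisStatus.Updated)
{
    // Find every word node and read back its recognized text.
    IReadOnlyList<IInkAnalysisNode> words =
        inkAnalyzer.AnalysisRoot.FindNodes(InkAnalysisNodeKind.InkWord);
    foreach (IInkAnalysisNode word in words)
    {
        string text = ((InkAnalysisInkWord)word).RecognizedText;
        // Use the text, e.g. for search or conversion to typed text.
    }
}
```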

If you want to learn more about Ink Analysis, you can watch the BUILD 2017 recorded video Enable Natural Pen Interaction by Using Ink Analysis to Better Understand Users’ Ink, download the Ink Analysis sample on GitHub or check out the Ink Analysis API Reference.

An improved Ink Toolbar

In the Anniversary Update we created a customizable set of inking tools, Ink Toolbar and Ink Canvas, that any developer can add to their own application with only two lines of markup.

<InkCanvas x:Name="myInkCanvas"/>
<InkToolbar TargetInkCanvas="{x:Bind myInkCanvas}"/>

Many of Microsoft’s first party applications have incorporated the inking tools to create engaging user experiences. For example, Photos added a calligraphy pen and the ability to draw on any photo in the gallery. Maps added a feature that lets you measure the distance of a route drawn on the map. Edge browser added inking on webpages. It has never been easier to add Windows Ink to your applications.

In the Creators Update, we continue our commitment to improving these controls! If you already use them in your applications, these improvements will benefit you with no additional work!

In response to users, the Creators Update introduces a new stencil, the protractor. This new stencil makes it easy for you to draw circles and arcs of any size. When drawing an arc, the protractor displays a readout that tells you the precise angle of the arc. You can also resize the stencil with just a pinch/zoom gesture with your fingers.

We’ve also made the ruler stencil better! Like the protractor, it now provides an angle readout that shows the ruler’s angle against the horizontal. The ruler also snaps to 0, 45 and 90 degrees for easy access to the most commonly used angles.

You asked for an improved stroke preview in the Ink Toolbar, and in the Creators Update, we have it! We’ve also made changes to the Ink Toolbar to work better with High Contrast themes, automatically showing only colors that meet visibility requirements for the current user profile.

New Exciting Inking Capabilities


Today we announced the new Surface Pro and the new Surface Pen. Together they enable the next generation of inking capabilities that truly make writing digitally as natural as pen on paper. Here are some of the highlights:

  • Low latency Ink that virtually eliminates lag when you write
  • Tilt support to capture an additional dimension in digital inking
  • Ink that captures the entire spectrum of your expression with 4,096 levels of pressure sensitivity
  • Effortless inking with half the activation force required to begin inking

Our customers have asked us for these capabilities, and they are finally here! From a developer perspective, if you already use the Windows Ink platform, all these capabilities show up in your application automatically! There are no changes required, and you are ready for the new Surface Pro, with the new Surface Pen.

Low latency inking is a unique addition to Windows Ink. It is the result of a close partnership between hardware and software. The PixelSense Accelerator chip in the new Surface Pro is the first device to run Windows Ink acceleration code natively on hardware. This is how we achieve a new milestone in inking, virtually eliminating lag between the pen tip and the ink that flows out of it, creating the most natural writing experience with Windows Ink.

Tilt is another great addition to the inking experience. The great news is, in addition to the new Surface Pro and Surface Pen supporting this new capability, Wacom pens that feature tilt will also “just work”! Tilt allows Windows Ink to model natural pencil sketching that responds to the tilt of the pen. This support is now built into the pencil brush on the Ink Toolbar. In the diagrams above, we demonstrate how the pencil brush can be used to shade lines (on the left) and to draw arcs of varying thickness depending on the degree of tilt (on the right).

As mentioned above, tilt integration happens automatically if you use the Ink Toolbar. However, if you are not using the Windows Ink platform to render ink and want to build your own brush that responds to tilt, you still can! Two tilt values (the angle of tilt against each axis of the screen plane) are included with pointer input messages. You can access them through the XTilt and YTilt properties of the PointerPointProperties included with pointer input events, or through the tiltX and tiltY fields of the POINTER_PEN_INFO struct from WM_POINTER input.
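A minimal sketch of reading tilt in a UWP pointer event handler might look like this (inkSurface is an assumed XAML element name; the brush model itself is up to you):

```csharp
private void InkSurface_PointerMoved(object sender, PointerRoutedEventArgs e)
{
    PointerPoint point = e.GetCurrentPoint(inkSurface);
    if (e.Pointer.PointerDeviceType == PointerDeviceType.Pen)
    {
        // XTilt/YTilt report degrees of tilt against each screen axis.
        float tiltX = point.Properties.XTilt;
        float tiltY = point.Properties.YTilt;
        float pressure = point.Properties.Pressure; // normalized 0.0 to 1.0

        // Feed tilt and pressure into your own brush model here.
    }
}
```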

These improvements automatically show up on any application that uses the Windows Ink controls, and you can be confident that we’ll continue to evolve and improve them in each release of Windows.

What’s new with Surface Dial and RadialController?

The Surface Dial introduces a new input paradigm to computing. It was designed alongside the Windows Ink experience, allowing it to truly shine when used together with a Pen. We’ve seen many experiences built to harness the new capabilities the Surface Dial brings, and are also seeing new hardware emerging, and adopting the RadialController standard. In response to your feedback, we’ve added more capabilities to the RadialController experience in the Creators Update.

First off are some new button events for the RadialController. These new events, ButtonPressed and ButtonReleased, combined with existing events for rotation and screen contact, allow you to track complex interactions such as press-and-rotate or press-and-move. The example below illustrates a simple way to capture a press-and-rotate action.

_radialController.ButtonPressed += OnButtonPressed;
_radialController.ButtonReleased += OnButtonReleased;
_radialController.RotationChanged += OnRotationChanged;

private void OnButtonPressed(RadialController sender,
                             RadialControllerButtonPressedEventArgs args)
{
    /* Enter a modal interaction state while the button is held */
}

private void OnButtonReleased(RadialController sender,
                              RadialControllerButtonReleasedEventArgs args)
{
    /* Leave the modal interaction state */
}

private void OnRotationChanged(RadialController sender,
                               RadialControllerRotationChangedEventArgs args)
{
    if (args.IsButtonPressed)
    {
        /* When the button is pressed, you can do modal interactions, fine-grained changes */
    }
    else
    {
        /* Otherwise, do the normal rotation behavior */
    }
}
private void SendHaptics(SimpleHapticsController hapticController)
{
    var feedbacks = hapticController.SupportedFeedback;
    foreach (SimpleHapticsControllerFeedback feedback in feedbacks)
    {
        if (feedback.Waveform ==
                    KnownSimpleHapticsControllerWaveforms.Click)
        {
            hapticController.SendHapticFeedback(feedback);
            return;
        }
    }
}

You also now have access to the Haptics engine in the Surface Dial hardware. Using SimpleHapticsController—a new object that uses the HID Simple Haptics specification—you have the power to directly send feedback to the user. You can use this to customize the feel of your menu, adding a new dimension to the experience. This object is available in the arguments of all radial controller input events.

In cases where you may want to suppress the radial menu to prevent it from blocking UI, we now have new properties ActiveControllerWhenMenuIsSuppressed and IsMenuSuppressed to let you configure when the menu is available or suppressed. When a menu is suppressed, it will not appear on press-and-hold interactions for the foreground app. Your app can listen to a new event during menu suppression to give the user an indication the menu is blocked, or build an alternate experience. Here is a code sample for this functionality:

RadialControllerConfiguration config = RadialControllerConfiguration.GetForCurrentView();
config.ActiveControllerWhenMenuIsSuppressed = myController;
config.IsMenuSuppressed = true;
  
myController.ButtonHolding += MyController_ButtonHolding;

User input running on a UI thread can sometimes lead to performance bottlenecks. With the Creators Update, radial controller interactions can now be handled on an off-UI thread using RadialControllerIndependentInputSource. Below is an example of how to get additional performance using this method.

RadialController controller;
Windows.UI.Input.Core.RadialControllerIndependentInputSource independentInput;
CoreApplicationView view;
            
view = CoreApplication.GetCurrentView();

var workItemHandler = new WorkItemHandler((workItem) =>
{
    independentInput = Windows.UI.Input.Core.RadialControllerIndependentInputSource.CreateForView(view);

    controller = independentInput.Controller;

    controller.RotationResolutionInDegrees = 5;

    controller.RotationChanged += Controller_RotationChanged;
    controller.ScreenContactStarted += Controller_ScreenContactStarted;
    controller.ScreenContactContinued += Controller_ScreenContactContinued;
    controller.ScreenContactEnded += Controller_ScreenContactEnded;
    controller.ControlLost += Controller_ControlLost;
    controller.ButtonClicked += Controller_ButtonClicked;
    controller.ButtonPressed += Controller_ButtonPressed;
    controller.ButtonReleased += Controller_ButtonReleased;
    controller.ButtonHolding += Controller_ButtonHolding;
    controller.ControlAcquired += Controller_ControlAcquired;

    // Begin processing input messages as they're delivered.      
    independentInput.Dispatcher.ProcessEvents(CoreProcessEventsOption.ProcessUntilQuit);
});
IAsyncAction action = ThreadPool.RunAsync(workItemHandler, WorkItemPriority.High, WorkItemOptions.TimeSliced);

In addition to all the API additions above, you can now customize and easily add new menu items on the Radial Menu. Under “Wheel Settings” in the settings app, you can add application specific menu items that trigger keyboard combinations. Imagine customizing the controller to send your favorite shortcuts in Visual Studio, Photoshop or even when browsing the web!

The Surface Dial continues to excite users and developers alike. With these new enhancements, both developers and users have more control and flexibility in their experience. We invite you to join the numerous applications that have already delivered a great Surface Dial experience, like CorelDRAW, Autodesk’s SketchBook, Silicon Bender’s Sketchable and Algoriddim’s djay Pro. We can’t wait to see what you can do with this unique new form of input on Windows.

Join us in making Windows Ink better!

With Windows Ink and the Surface Dial additions in the Creators Update, we believe we’re just scratching the surface of what Windows Ink can do in people’s lives. Our commitment is to invest in areas that can help you innovate and to remove the barriers that keep users from using, loving and needing Windows Ink. This involves a spectrum of efforts, from the hardware we build ourselves and with our partners, to the next SDK additions we make to power your apps. As we continue this journey, we invite you to lend us your voice, your ideas and your feedback. Help us help you build the next great application and change the world. Tweet your ideas using #WindowsInk, email us at [email protected] or tweet us at @WindowsInk. We would love to hear from all of you.

Thank you!

Windows Developer Awards: Honoring Windows Devs at Microsoft Build 2017

As we ramp up for Build, the Windows Dev team would like to thank you, the developer community, for all the amazing work you have done over the past 12 months. Because of your efforts and feedback, we’ve managed to add countless new features to the Universal Windows Platform and the Windows Store in an ongoing effort to constantly improve. And thanks to your input on the Windows Developer Platform Backlog, you have helped us to prioritize new UWP features.

In recognition of all you have done, this year’s Build conference in Seattle will feature the first-ever Windows Developer Awards, given to community developers who have built exciting UWP apps in the last year and published them in the Windows Store. The awards are being given out in four main categories:

  • App Creator of the Year – This award recognizes an app leveraging the latest Windows 10 capabilities. Some developers are pioneers, the first to explore and integrate the latest features in Windows 10 releases. This award honors those who made use of features like Ink, Dial, Cortana, and other features in creative ways.
  • Game Creator of the Year – This award recognizes a game by a first-time publisher in Windows Store. Windows is the best gaming platform–and it’s easy to see why. From Xbox to PCs to mixed reality, developers are creating the next generation of gaming experiences. This award recognizes developers who went above and beyond to publish innovative, engaging and magical games to the Windows Store over the last year.
  • Reality Mixer of the Year – This award recognizes the app demonstrating a unique mixed reality experience. Windows Mixed Reality lets developers create experiences that transcend the traditional view of reality. This award celebrates those who choose to mix their own view of the world by blending digital and real-world content in creative ways.
  • Core Maker of the Year – This award recognizes a maker project powered by Windows. Some devs talk about the cool stuff they could build–others just do it. This award applauds those who go beyond the traditional software interface to integrate Windows into drones, Pis, gardens and robots to get stuff done.

In addition to these, a Ninja Cat of the Year award will be given as special recognition. Selected by the Windows team at Microsoft, this award celebrates the developer or experience that we believe most reflects what Windows is all about, empowering people of action to do great things.

Here’s what we want from you: we need the developer community to help us by voting for the winners of these four awards on the awards site. Take a look and tell us who you think has created the most compelling apps. Once you’ve voted, check back anytime to see how your favorites are doing. Voting will end on 4/27, so get your Ninja votes in quickly.

A New Input Paradigm in Windows – The Surface Dial

With the debut of Windows Ink in the Windows Anniversary Update, we introduced simultaneous pen and touch as the dawn of a revolutionary change in interacting with Windows. In our blog post, we discuss how you can use the APIs that you are already familiar with for touch to handle both touch and pen processing at the same time. Now with the recent Microsoft hardware announcements, we’re happy to share another innovation in input with you – the Surface Dial.


The Surface Dial introduces a new paradigm for input in Windows. The Surface Dial is a new category of input device, which we refer to as a radial controller, and is a revolutionary new tool for the creative process. With tools and shortcuts at your fingertips, the Surface Dial allows you to remain focused on what matters most. You can manipulate images, adjust volume, change color hues and much more, all with simple gestures. With the Surface Dial in one hand and Surface Pen in the other, the creative process is made more productive and more enjoyable. Additionally, you can place your Surface Dial directly on the screen of the Surface Studio and have favorite tools – like a color picker or ruler – at hand and easily accessible on your digital drafting table.

When paired over Bluetooth with a Windows 10 Anniversary Update PC, the Surface Dial delivers a breadth of new experiences to users and opens a world of possibilities. The goal of this blog is to walk you through how you can build your own experiences on the Surface Dial in your application.

Introducing the Radial Controller

For Windows, the Surface Dial represents a totally new type of input device in the system, which we refer to as a radial controller. To go along with this brand-new type of input, Windows has delivered an integrated experience that makes it easier and faster for users to customize and do the things they love – all with a turn of the Dial.

The Surface Dial has a simple set of gestures: It can be rotated, it can be pressed like a button and it can be placed on the screen of the Surface Studio. These gestures are instantly familiar to users and easy to learn. When you press and hold the Surface Dial, a menu appears that presents a selection of tools that can be controlled. These tools offer a variety of functions designed to improve the user’s workflow and keep them immersed in their creativity, from scrolling and zooming, to changing volume and controlling media playback, to undo and redo, custom keyboard shortcuts and more. The Dial also integrates with a broad and growing set of in-box and third-party apps, unlocking new tools when used with the Windows Ink Workspace, Office, Maps, Groove Music, Sketchable, Bluebeam Revu, Moho 12, Drawboard PDF and more. With the Surface Dial unlocking new functions across every Windows app, users will be excited to explore how the Dial can help them in their favorite apps. With the extensibility available through the Universal Windows Platform, it’s easy for your app to bring the delightful Surface Dial experience they’re searching for!

The first and simplest way to add value with the Surface Dial is to use Windows in-box components that come with Surface Dial integration built in. For developers who leverage the Windows Ink platform to give their users the power to write, draw and create with their pen, the InkCanvas and InkToolbar XAML controls populate the Surface Dial’s menu with new tools, allowing users to quickly modify the attributes of their ink, change the thickness of their ink as they write and control the on-screen ruler. This gives you the same great Surface Dial integration available in the Sketchpad and Screen Sketch apps in the Windows Ink Workspace.


When the InkToolbar and InkCanvas are used, Surface Dial integration is automatically included!


When the on-screen ruler is visible, the Surface Dial can control its angle and position.

For media players, integrating with the SystemMediaTransportControls will give your app the same ability to pause, play and skip tracks with the Dial that Groove Music and Spotify offer.
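As a minimal sketch of opting into the system transport controls (the handler wiring is illustrative; how you actually pause or resume depends on your media pipeline):

```csharp
private SystemMediaTransportControls smtc;

private void InitializeTransportControls()
{
    smtc = SystemMediaTransportControls.GetForCurrentView();

    // Enable the commands your app can honor; the Dial's media tool uses these.
    smtc.IsPlayEnabled = true;
    smtc.IsPauseEnabled = true;
    smtc.IsNextEnabled = true;
    smtc.IsPreviousEnabled = true;
    smtc.ButtonPressed += Smtc_ButtonPressed;
}

private void Smtc_ButtonPressed(SystemMediaTransportControls sender,
                                SystemMediaTransportControlsButtonPressedEventArgs args)
{
    switch (args.Button)
    {
        case SystemMediaTransportControlsButton.Play:
            // Resume playback in your media pipeline.
            break;
        case SystemMediaTransportControlsButton.Pause:
            // Pause playback.
            break;
        case SystemMediaTransportControlsButton.Next:
            // Skip to the next track.
            break;
    }
}
```

Note that if you play audio through MediaPlayer, much of this integration is handled for you automatically.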

For developers who want to go beyond the default integration built into the system and create something truly unique, Windows makes it easy for you to add your own tools to this menu through the RadialController platform. The RadialController universal APIs allow you to build your own custom tools for the Surface Dial’s menu and handle Dial input from both Universal Windows Platform apps and classic Win32 apps. You have the option to respond to the button and rotation input available on all Windows devices, or go one step further and build immersive UI experiences for when the Surface Dial is used on-screen on the Surface Studio.

Let’s start by looking at what it takes to build a custom tool experience for the Surface Dial!

Building a Custom Tool for the Surface Dial

Custom tools for the Surface Dial are the best way to deliver a deep and engaging Dial experience for your users. Since a custom tool is personal to your application’s needs, you can identify the shortcuts and functions that matter most to the user and put it right at the user’s fingertips. By optimizing the user’s workflow and integrating with the app’s UI, your custom tool can help the user stay engaged and feel more productive as they work, play, or create with Dial.

To start creating a custom tool for the Surface Dial, the first step is to create an instance of the RadialController interface used to represent the device and interact with the Surface Dial’s menu for the lifetime of your application. Through your instance of the RadialController, you can access the RadialControllerMenu, which gives you the ability to add and remove your own application-specific tools in the Surface Dial’s menu. The RadialController also gives you access to all the input events for the Surface Dial, allowing you to create compelling experiences for your custom tool.
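Concretely, those steps come down to a few lines. This is a sketch; the tool name and icon choice here are illustrative:

```csharp
private RadialController myController;

private void CreateCustomTool()
{
    // Create a RadialController instance for this view.
    myController = RadialController.CreateForCurrentView();

    // Add an application-specific tool to the Surface Dial's menu.
    RadialControllerMenuItem myItem =
        RadialControllerMenuItem.CreateFromKnownIcon("Color Picker",
            RadialControllerMenuItemKind.InkColor);
    myController.Menu.Items.Add(myItem);

    // React when the user rotates the Dial while your tool is selected.
    myController.RotationChanged += (sender, args) =>
    {
        // args.RotationDeltaInDegrees tells you how far the Dial turned.
    };
}
```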

Let’s take a look at building a custom tool inside a sample application. Here we’ll start with a simple inking application using the InkCanvas and InkToolbar controls, which already provide Surface Dial integration for modifying inking attributes and the ruler.

    <Grid x:Name="Container" 
          Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <Grid x:Name="CanvasGrid">
            <InkCanvas x:Name="myCanvas"/>
            <InkToolbar x:Name="myToolbar" 
                        VerticalAlignment="Top" 
                        TargetInkCanvas="{x:Bind myCanvas}" />
        </Grid>
        <StackPanel x:Name="ToolPanel" 
                    HorizontalAlignment="Right" 
                    VerticalAlignment="Top" 
                    Orientation="Vertical" 
                    Width="300" 
                    Background="White" 
                    BorderBrush="Black" 
                    BorderThickness="2">
            <StackPanel>
                <TextBlock Text="Red" Margin="20,5,20,5"/>
                <Slider x:Name="RValue" 
                        LargeChange="1" 
                        Maximum="255" 
                        Margin="20,5,20,5"/>
            </StackPanel>
            <StackPanel>
                <TextBlock Text="Green" Margin="20,5,20,5"/>
                <Slider x:Name="GValue" 
                        LargeChange="1" 
                        Maximum="255" 
                        Margin="20,5,20,5"/>
            </StackPanel>
            <StackPanel>
                <TextBlock Text="Blue" Margin="20,5,20,5"/>
                <Slider x:Name="BValue" 
                        LargeChange="1" 
                        Maximum="255" 
                        Margin="20,5,20,5"/>
            </StackPanel>
            <StackPanel>
                <Grid x:Name="Preview" 
                      Height="100" Width="250" 
                      Margin="0,20,0,20"/>
            </StackPanel>
        </StackPanel>
    </Grid>

Now, let’s add deeper integration with the RadialController APIs and have the Surface Dial control the color of our background. We’ll start by adding a custom tool to the menu:

        RadialController myController;

        public MainPage()
        {
            this.InitializeComponent();
            UpdatePreview();
            highlightedItem = RValue;

            //Hide our custom tool's UI until it is activated by the Dial
            ToolPanel.Visibility = Visibility.Collapsed;

            // Create a reference to the RadialController.
            myController = RadialController.CreateForCurrentView();

            // Create a menu item for the custom tool.
            RadialControllerMenuItem myItem =
              RadialControllerMenuItem.CreateFromKnownIcon("Background", RadialControllerMenuKnownIcon.InkColor);

            //Add the custom tool's menu item to the menu
            myController.Menu.Items.Add(myItem);

            //Create a handler for when the menu item is selected
            myItem.Invoked += MyItem_Invoked;

            //Create handlers for button and rotational input
            myController.RotationChanged += MyController_RotationChanged;
            myController.ButtonClicked += MyController_ButtonClicked;

            //Remove Scroll/Zoom/Undo tools as app doesn't support them
            RadialControllerConfiguration config = RadialControllerConfiguration.GetForCurrentView();
            config.SetDefaultMenuItems(new RadialControllerSystemMenuItemKind[] { RadialControllerSystemMenuItemKind.Volume });

            …
        }

        #region Handling RadialController Input
        private void MyItem_Invoked(RadialControllerMenuItem sender, object args)
        {
            //Make RGB panel visible when the custom menu item is invoked
            ToolPanel.Visibility = Visibility.Visible;
        }

[Image: the Surface Dial menu]

Since we used the InkToolbar, the menu comes pre-populated with inking tools!

[Image: the Surface Dial menu with the custom tool added]

You can see the new tool we added.

The RadialController API provides simple events for handling input from the Dial, from button clicks to rotation to on-screen position. In the previous snippet, we set event handlers for the Surface Dial's RotationChanged and ButtonClicked events. Using these events, we can have input from the Dial modify the red, green, or blue values of our background:

        Slider selectedItem = null;
        FrameworkElement highlightedItem = null;

        private void MyController_ButtonClicked(RadialController sender, RadialControllerButtonClickedEventArgs args)
        {
            if(highlightedItem == Preview)
            {
                //Click on the Preview, update the background
                UpdateBackground();
            }

            else if (selectedItem != null)
            {
                //Click on a selected slider, unselect the slider
                selectedItem = null;
                UpdateHighlight(highlightedItem);
                //decrease sensitivity to make it more comfortable to navigate between items
                myController.RotationResolutionInDegrees = 10;
            }

            else if (selectedItem == null)
            {
                //No selection, select a slider
                UpdateSelection(highlightedItem as Slider);
                //increase sensitivity to make it easier to change slider value
                myController.RotationResolutionInDegrees = 1;
            }
        }

        private void MyController_RotationChanged(RadialController sender, RadialControllerRotationChangedEventArgs args)
        {
            if (selectedItem != null)
            {
                //Change the value on the slider
                selectedItem.Value += args.RotationDeltaInDegrees;
                UpdatePreview();
            }
            else if(args.RotationDeltaInDegrees > 0)
            {
                //Rotation is to the right, change the highlighted item accordingly
                if (highlightedItem == RValue)
                {
                    UpdateHighlight(GValue);
                }
                else if (highlightedItem == GValue)
                {
                    UpdateHighlight(BValue);
                }
                else if (highlightedItem == BValue)
                {
                    UpdateHighlight(Preview);
                }
            }
            else if (args.RotationDeltaInDegrees < 0)
            {
                //Rotation is to the left, change the highlighted item accordingly
                if (highlightedItem == GValue)
                {
                    UpdateHighlight(RValue);
                }
                else if (highlightedItem == BValue)
                {
                    UpdateHighlight(GValue);
                }
                else if (highlightedItem == Preview)
                {
                    UpdateHighlight(BValue);
                }
            }
        }

        private void UpdateHighlight(FrameworkElement element)
        {
            StackPanel parent;

            //Remove highlight state from previous element
            if (highlightedItem != null)
            {
                parent = highlightedItem.Parent as StackPanel;
                parent.BorderThickness = new Thickness(0);
            }

            //Update highlight state for new element
            highlightedItem = element;

            parent = highlightedItem.Parent as StackPanel;
            parent.BorderBrush = new SolidColorBrush(Windows.UI.Colors.Black);
            parent.BorderThickness = new Thickness(2);
        }
        
        private void UpdateSelection(Slider element)
        {
            selectedItem = element;

            //Update selection state for selected slider
            StackPanel parent = element.Parent as StackPanel;
            parent.BorderBrush = new SolidColorBrush(Windows.UI.Colors.Cyan);
            parent.BorderThickness = new Thickness(4);
        }
        
        private void UpdatePreview()
        {
            Windows.UI.Color selectedColor = new Windows.UI.Color();
            selectedColor.A = 255;
            selectedColor.R = (byte) RValue.Value;
            selectedColor.G = (byte) GValue.Value;
            selectedColor.B = (byte) BValue.Value;

            Preview.Background = new SolidColorBrush(selectedColor);
        }

        private void UpdateBackground()
        {
            CanvasGrid.Background = Preview.Background;
        }

[Image: the custom tool's panel shown when the tool is selected]

When our custom tool is selected, the tool UI becomes visible. Rotation navigates the menu, and clicking a color value allows you to change it.

[Image: the background updated to the chosen color]

When you’ve found a color that you like, clicking on the preview image will change the background color to the one you’ve customized.

In addition to configuring how rotation interacts with the application, the RadialController APIs also let you modify how rotation is delivered to your app and felt by the user. You can use the RotationResolutionInDegrees property to configure the sensitivity, or resolution, of rotation, and the UseAutomaticHapticFeedback property to enable or disable haptic feedback. In the previous example, making rotation more sensitive when changing one of the RGB values made it much easier to manipulate the slider. When not specified, the default rotational resolution is 10 degrees.
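To build intuition for how RotationResolutionInDegrees shapes what your handlers see, here is a small self-contained sketch. It models an assumption about the observable behavior (raw rotation accumulates and is reported in multiples of the configured resolution); it is not the platform's actual implementation:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical model of RotationResolutionInDegrees: raw rotation is
// accumulated, and RotationChanged-style deltas are emitted only in
// multiples of the configured resolution. Illustrative only.
class RotationQuantizer
{
    public double ResolutionInDegrees { get; set; } = 10; // platform default
    private double accumulated;

    // Feed raw rotation in degrees; returns the deltas an app would observe.
    public IEnumerable<double> Feed(double rawDegrees)
    {
        accumulated += rawDegrees;
        while (Math.Abs(accumulated) >= ResolutionInDegrees)
        {
            double delta = Math.Sign(accumulated) * ResolutionInDegrees;
            accumulated -= delta;
            yield return delta;
        }
    }
}
```

Under this model, with the default resolution of 10 degrees, rotating 25 degrees yields two deltas of 10 and carries the remaining 5; dropping the resolution to 1 is what makes fine slider adjustments practical.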

Handling On-Screen Input for Surface Studio

As we called out above, there are two modes in which a radial controller device can be used: off-screen and on-screen. When the Surface Dial is placed on the screen of the Surface Studio, the RadialController API gets the location and the bounds of the contact so that you can build richer and more immersive experiences for the user.

Using the Surface Dial's on-screen position, you can build beautiful UI that centers around the Dial and gives the user richer information about the interactions they can drive with it. The user can focus on the control and placement of their hands without worrying about additional menus or controls. As an example, take a look at the rich color palette developed by the engineers at Sketchable, or the quick insert menu developed by StaffPad, which lets users quickly add common musical notation markups.

[Images: Sketchable's color palette; StaffPad's quick insert menu]

Going one step further, you can also infer the intent of the user's interaction from the on-screen position, which can help make your custom tools more compelling. You can see this in the way the Surface Dial guides and drives the on-screen ruler in the Windows Ink Workspace's Sketchpad; the engineers at Bluebeam and Drawboard take the same approach with their respective Split Zoom and Ruler features.

Working from the previous example, let’s take advantage of the on-screen position to make it easier for the user to see the results of their color change manipulations, and draw the relevant UI near our Surface Dial’s on-screen position instead of in the corner of the display. Using the ScreenContact* events, we can determine where the Surface Dial is and update our UI accordingly:

        bool isRightHanded;

        public MainPage()
        {
            …

            //Query the user’s handedness
            Windows.UI.ViewManagement.UISettings settings = new Windows.UI.ViewManagement.UISettings();
            isRightHanded = settings.HandPreference == Windows.UI.ViewManagement.HandPreference.RightHanded;

            //Create handlers for when RadialController provides an on-screen position
            myController.ScreenContactStarted += MyController_ScreenContactStarted;
            myController.ScreenContactContinued += MyController_ScreenContactContinued;
            myController.ScreenContactEnded += MyController_ScreenContactEnded;

        }

        private void MyController_ScreenContactStarted(RadialController sender, RadialControllerScreenContactStartedEventArgs args)
        {
            UpdatePanelLocation(args.Contact);
        }

        private void MyController_ScreenContactContinued(RadialController sender, RadialControllerScreenContactContinuedEventArgs args)
        {
            UpdatePanelLocation(args.Contact);
        }

        private void MyController_ScreenContactEnded(RadialController sender, object args)
        {
            ResetPanelLocation();
        }

        private void UpdatePanelLocation(RadialControllerScreenContact contact)
        {
            //When an on-screen position is provided, apply a transform to the panel
            TranslateTransform x = new TranslateTransform();
            if (isRightHanded)
            {
                //Render to the right of the RadialController
                x.X = contact.Position.X + contact.Bounds.Width / 2 + 50;
            }
            else
            {
                //Render to the left of the RadialController
                x.X = contact.Position.X - contact.Bounds.Width / 2 - 50 - ToolPanel.Width;
            }
            x.Y = contact.Position.Y - 200;
            ToolPanel.RenderTransform = x;
            ToolPanel.HorizontalAlignment = HorizontalAlignment.Left;
        }
        private void ResetPanelLocation()
        {
            //When an on-screen position is not provided, clear the transform on the panel
            ToolPanel.RenderTransform = null;
            ToolPanel.HorizontalAlignment = HorizontalAlignment.Right;
        }

When dealing with on-screen input, it's also important to be aware of whether your application has focus for Surface Dial input. When your application is minimized, another application moves into the foreground, or the Surface Dial's menu is opened, your application loses focus for input, and you'll need to make sure your on-screen UI responds accordingly. On the other hand, when your app is brought back into the foreground and focus is restored, the Surface Dial may already be on the screen of the Surface Studio, and a ScreenContactStarted event won't be raised. Here's an example of how to handle focus changes with the Surface Dial:

        public MainPage()
        {
            …

            //Create handlers for when RadialController focus changes
            myController.ControlAcquired += MyController_ControlAcquired;
            myController.ControlLost += MyController_ControlLost;
        }


        private void MyController_ControlAcquired(RadialController sender, RadialControllerControlAcquiredEventArgs args)
        {
            //Ensure tool panel is rendered at the correct location when focus is gained
            if (args.Contact != null)
            {
                UpdatePanelLocation(args.Contact);
            }

            ToolPanel.Visibility = Visibility.Visible;
        }

        private void MyController_ControlLost(RadialController sender, object args)
        {
            //Hide tool panel when focus is lost
            ToolPanel.Visibility = Visibility.Collapsed;
            ResetPanelLocation();
        }

Start Creating with Surface Dial

Using what you've learned so far about the RadialController APIs, you can now integrate the Surface Dial into your application, handle its input, and configure the system menu to meet your needs. You can build a huge range of delightful features, from simple modification of values and properties to complex on-screen UI for Surface Dial users on the Surface Studio.

For more information on UX design and best practices with Dial, please consult our Surface Dial development overview, and you can find the full source code used in this project on GitHub.

Surface Dial and the RadialController platform are a new area of investment for Microsoft, and one of the keys to improving the platform and making it more flexible and powerful is feedback from our great community of developers! If you have any questions or comments while developing for the Surface Dial, please feel free to send them via email to [email protected].

Windows Ink 3: Beyond Doodling

In the first post in this series, we took a quick look at Windows Ink using the InkCanvas and saw that adding inking support to your application can be as simple as one line of code. In the second post, we showed you how to customize the ink experience in your app with InkToolbarCustomPen and InkToolbarCustomToolButton, in addition to out-of-the-box items like InkToolbarBallpointPenButton and InkToolbarPenConfigurationControl.

In both of those explorations, we stayed within the context of a drawing style application. In today’s post, we will look at how Windows Ink can be used to bring the natural input experience of using a pen to other types of applications and scenarios.

Pen input can be useful in almost any application that requires user input. Here are a few example scenarios:

  • Healthcare: Doctors, nurses, mental health professionals
    • A digital patient chart, allowing a medical professional to keep using the efficient, natural note-taking of medical shorthand alongside accurate data entry.
  • School: Teachers, students, and administrators
    • A student could take a digital exam using Windows Ink, and the teacher could mark up that same exam as if it were paper.
  • Field services: Police, fire, utility engineers
    • Detectives generally keep a small notepad with them to record investigative details. Using ink to input these details makes the notes digitally searchable, enabling faster situational awareness and AI interpretation.
  • Music: Composers, musicians
    • Writing notation digitally with a pen combines the world of natural input with the power of digital music processing.

Let’s explore two of those possibilities: music and healthcare.

A Music Scenario

Music composition has traditionally been a pen and paper experience. You may or may not have paper with the music staves already printed on it, but in the end, the composer is the one who writes down the notes, key signatures, and other musical notation for a musician to play. Composers have been trained and have years of experience writing music on paper.

What if an application used a digital pen and the screen as the main method for the composer to create music? Pen input would be a natural way to enter the information, while also gaining the advantages of having software process it.

An example of this processing would be validating the musical notation; it would also allow the music to be played back immediately after it is entered. There have been many programs that allow music notation to be entered and played back, but using a pen instead of a keyboard and mouse brings this to a new, natural level.
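As a toy illustration of the kind of processing that becomes possible once notation is digital, here is a standalone helper (hypothetical, not part of any Windows API) that converts a MIDI note number into the frequency you would synthesize for playback, using the standard equal-temperament formula:

```csharp
using System;

static class NotePlayback
{
    // Equal temperament: A4 (MIDI note 69) is 440 Hz, and each semitone
    // multiplies the frequency by the twelfth root of two.
    public static double MidiNoteToFrequency(int midiNote)
    {
        return 440.0 * Math.Pow(2.0, (midiNote - 69) / 12.0);
    }
}
```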

A Healthcare Scenario

Healthcare professionals have long used pen and paper to record and convey important information. Sometimes this information is written using a medical shorthand on patient charts. This shorthand contains a lot of information in a smaller area so medical professionals can read a patient’s chart quickly.

However, some information needs to be fully written out, like a patient's name or follow-up instructions for the patient. This kind of information should be text that is clearly readable by anyone and usable for data entry.

We can fulfill both of these requirements with Windows Ink. For the notation and shorthand, we can record the ink strokes as we did previously in the sketching app examples. For the text entry, we can convert the ink using handwriting recognition.

Let’s make a small Medical Chart demo app to see how this is done.

Simple Doctor’s notes app

To show how you can implement enterprise features, let's use handwriting recognition! You can easily get the user's strokes as text using the InkCanvas and just a few lines of code. This is all built into the SDK; no extraneous coding or specialized skill set required.

Let’s start with a File > New UWP app and on the MainPage, let’s make three Grid rows. The top two rows will contain two different InkCanvas objects and the last row is for a CommandBar with a save button.

The second row's InkCanvas will be for the doctor's handwritten notes, using shorthand. It is more like a sketch app and is tied to an InkToolbar. The ink will be pressure-sensitive and can be further customized using the InkToolbar. You can go back to the last post in this series to see how to do this.

Here’s a quick sketch of what the page layout should be:

[Image: sketch of the page layout]

Now that we have a general page layout, let’s focus on the top InkCanvas first. This is the one we’ll use for handwriting recognition for the patient’s name. We want the ink to be plain and clear, so we don’t want an InkToolbar for this InkCanvas.

The code for this row is:

<Grid Grid.Row="1">
     <InkCanvas x:Name="NameInkCanvas" />
</Grid>

Now let’s look at the second row’s InkCanvas. This is the one we want to have an InkToolbar for so the notes can have a rich ink experience. Here’s what that implementation looks like:

<Grid>
    <InkCanvas x:Name="NotesInkCanvas" />

    <InkToolbar TargetInkCanvas="{x:Bind NotesInkCanvas}" 
                HorizontalAlignment="Right"
                VerticalAlignment="Top" />
</Grid>

There are a couple of other small things we want to add, for example the TextBlock at the top of the page where the patient's name will appear after handwriting recognition is complete. Here's the entire page with all the parts in place:

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <Grid.RowDefinitions>
            <RowDefinition />
            <RowDefinition />
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>

        <!-- Top row for handwriting recognition of the patient name -->
        <Grid x:Name="PatientInfoGrid">
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto" />
                <RowDefinition />
                <RowDefinition Height="Auto" />
            </Grid.RowDefinitions>

            <TextBlock x:Name="PatientNameTextBlock"
                       Text="Patient Name"
                       Style="{StaticResource TitleTextBlockStyle}"
                       HorizontalAlignment="Center" />
            
            <Grid Grid.Row="1"
                  BorderThickness="2"
                  BorderBrush="#FF9F9F9F">
                <InkCanvas x:Name="NameInkCanvas" />
            </Grid>

            <Button x:Name="RecognizeHandwritingButton"
                    Content="Write patient name in box above and click here to complete"
                    Click="RecognizeHandwritingButton_OnClick"
                    Grid.Row="2"
                    HorizontalAlignment="Center"
                    Margin="5" />
        </Grid>

        <!-- Second row for the doctor's notes -->
        <Grid x:Name="NotesGrid"
              Grid.Row="1">
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto" />
                <RowDefinition />
                <RowDefinition Height="Auto" />
            </Grid.RowDefinitions>

            <TextBlock Text="Notes"
                       Style="{StaticResource SubtitleTextBlockStyle}"
                       HorizontalAlignment="Center" />

            <Grid Grid.Row="1"
                  BorderThickness="2"
                  BorderBrush="#FF9F9F9F">
                <InkCanvas x:Name="NotesInkCanvas" />

                <InkToolbar TargetInkCanvas="{x:Bind NotesInkCanvas}"
                            HorizontalAlignment="Right"
                            VerticalAlignment="Top" />
            </Grid>
        </Grid>
        
        <CommandBar Grid.Row="2">
            <AppBarButton x:Name="SaveChartButton"
                          Icon="Save"
                          Label="Save Chart"
                          Click="SaveChartButton_OnClick"/>
        </CommandBar>
    </Grid>

With the front end done, let’s look at the code-behind and examine the InkCanvas setup and button click event handlers. In the page constructor, we set up some inking attributes for both InkCanvases (put this code after InitializeComponent in the page constructor):

// Setup the top InkCanvas
NameInkCanvas.InkPresenter.InputDeviceTypes =
                Windows.UI.Core.CoreInputDeviceTypes.Mouse |
                Windows.UI.Core.CoreInputDeviceTypes.Pen;
            
NameInkCanvas.InkPresenter.UpdateDefaultDrawingAttributes(new InkDrawingAttributes
{
     Color = Windows.UI.Colors.Black,
     IgnorePressure = true,
     FitToCurve = true
});

// Setup the doctor's notes InkCanvas
NotesInkCanvas.InkPresenter.InputDeviceTypes =
                Windows.UI.Core.CoreInputDeviceTypes.Mouse |
                Windows.UI.Core.CoreInputDeviceTypes.Pen;

NotesInkCanvas.InkPresenter.UpdateDefaultDrawingAttributes(new InkDrawingAttributes
{
    IgnorePressure = false,
    FitToCurve = true
});

To get the patient’s name into the chart, the healthcare worker writes the name in the top InkCanvas and presses the RecognizeHandwritingButton. That button’s click handler is where we do the recognition work. In order to perform handwriting recognition, we use the InkRecognizerContainer object.

var inkRecognizerContainer = new InkRecognizerContainer();

With an instance of InkRecognizerContainer, we call RecognizeAsync and pass it the InkPresenter's StrokeContainer along with InkRecognitionTarget.All to tell it to recognize all the ink strokes.

// Recognize all ink strokes on the ink canvas.
var recognitionResults = await inkRecognizerContainer.RecognizeAsync(
                    NameInkCanvas.InkPresenter.StrokeContainer,
                    InkRecognitionTarget.All);

This returns a list of InkRecognitionResult objects; calling GetTextCandidates on each one gives a list of strings that the recognition engine thinks best match the ink strokes. Generally, the first candidate is the most accurate, but you can iterate over the candidates to find the best match.

Here's how the doctor's notes app processes the results; to demonstrate the approach, it simply uses the first candidate.

 // Iterate through the recognition results, this will loop once for every word detected
foreach (var result in recognitionResults)
{
    // Get all recognition candidates from each recognition result
    var candidates = result.GetTextCandidates();

    // For the purposes of this demo, we'll use the first result
    var recognizedName = candidates[0];

    // Concatenate the results
    str += recognizedName + " ";
}

Here is the full event handler:

private async void RecognizeHandwritingButton_OnClick(object sender, RoutedEventArgs e)
{
    // Get all strokes on the InkCanvas.
    var currentStrokes = NameInkCanvas.InkPresenter.StrokeContainer.GetStrokes();

    // Ensure an ink stroke is present. 
    if (currentStrokes.Count < 1)
    {
        await new MessageDialog("You have not written anything in the canvas area").ShowAsync();
        return;
    }

    // Create a manager for the InkRecognizer object used in handwriting recognition.
    var inkRecognizerContainer = new InkRecognizerContainer();

    // inkRecognizerContainer is null if a recognition engine is not available.
    if (inkRecognizerContainer == null)
    {
        await new MessageDialog("You must install handwriting recognition engine.").ShowAsync();
        return;
    }

    // Recognize all ink strokes on the ink canvas.
    var recognitionResults = await inkRecognizerContainer.RecognizeAsync(
                    NameInkCanvas.InkPresenter.StrokeContainer,
                    InkRecognitionTarget.All);

    // Process and display the recognition results.
    if (recognitionResults.Count < 1)
    {
        await new MessageDialog("No recognition results.").ShowAsync();
        return;
    }

    var str = "";

    // Iterate through the recognition results, this will loop once for every word detected
    foreach (var result in recognitionResults)
    {
        // Get all recognition candidates from each recognition result
        var candidates = result.GetTextCandidates();

        // For the purposes of this demo, we'll use the first result
        var recognizedName = candidates[0];

        // Concatenate the results
        str += recognizedName + " ";
    }

    // Display the recognized name
    PatientNameTextBlock.Text = str;

    // Clear the ink canvas once recognition is complete.
    NameInkCanvas.InkPresenter.StrokeContainer.Clear();
}

Lastly, although we covered this in detail in the last post, let's review how to save the doctor's notes, the InkCanvas ink strokes, to a GIF file with embedded ink data:

private async void SaveChartButton_OnClick(object sender, RoutedEventArgs e)
{
    // Get all strokes on the NotesInkCanvas.
    var currentStrokes = NotesInkCanvas.InkPresenter.StrokeContainer.GetStrokes();

    // Strokes present on ink canvas.
    if (currentStrokes.Count > 0)
    {
        // Initialize the picker.
        var savePicker = new FileSavePicker();
        savePicker.SuggestedStartLocation = PickerLocationId.DocumentsLibrary;
        savePicker.FileTypeChoices.Add("GIF with embedded ISF", new List<string>() { ".gif" });
        savePicker.DefaultFileExtension = ".gif";

        // We use the patient's name to suggest a file name
        savePicker.SuggestedFileName = $"{PatientNameTextBlock.Text} Chart";
                
        // Show the file picker.
        var file = await savePicker.PickSaveFileAsync();

        if (file != null)
        {
            // Prevent updates to the file until updates are finalized with call to CompleteUpdatesAsync.
            CachedFileManager.DeferUpdates(file);

            // Open a file stream for writing
            using (var stream = await file.OpenAsync(FileAccessMode.ReadWrite))
            using (var outputStream = stream.GetOutputStreamAt(0))
            {
                await NotesInkCanvas.InkPresenter.StrokeContainer.SaveAsync(outputStream);
                await outputStream.FlushAsync();
            }

            // Finalize write so other apps can update file.
            var status = await CachedFileManager.CompleteUpdatesAsync(file);

            if (status == FileUpdateStatus.Complete)
            {
                PatientNameTextBlock.Text += " (saved!)";
            }
        }
    }
}

Here’s what the app looks like at runtime:

[Image: the app at runtime]

This is just a simple example of combining different uses of Windows Ink, but it demonstrates that Windows Ink is useful in enterprise scenarios and is much more than just a doodling tool.

The patient's name was recognized and placed in the TextBlock at the top of the app, and the doctor's notes on the bottom can be saved to a file and reloaded exactly as they were written.

Here's what the doctor's notes file looks like in Windows File Explorer after it's been saved. It's a GIF, but it also has embedded ink data that you can load back into the app as ink strokes.

[Image: the saved chart file in File Explorer]
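The post doesn't show the reload step, but it is symmetric to saving. Here's a hedged sketch, assuming a FileOpenPicker named openPicker (a hypothetical variable, not defined above) whose FileTypeFilter includes ".gif"; InkStrokeContainer.LoadAsync reads the embedded ISF and restores the strokes:

```csharp
// Sketch (assumption): "openPicker" is a FileOpenPicker with ".gif" in its
// FileTypeFilter. InkStrokeContainer.LoadAsync reads the ISF data embedded
// in the GIF and restores the strokes onto the canvas.
var file = await openPicker.PickSingleFileAsync();
if (file != null)
{
    using (var stream = await file.OpenSequentialReadAsync())
    {
        await NotesInkCanvas.InkPresenter.StrokeContainer.LoadAsync(stream);
    }
}
```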

What’s next?

Think about how you can add inking support to your next application. How can natural pen input help your users enter data in a seamless and delightful manner? You can add inking support with just a few lines of code and bring the Windows Ink experience to your users.

We look forward to the exciting app ideas and scenarios you create using Windows Ink. Let us know what you create by leaving us a comment below, sending us a tweet, or posting on our Facebook page.

Resources

Windows Ink 2: Digging Deeper with Ink and Pen

In the last post, we explored a brief history of pen computing and introduced you to how easy it is to get started with Windows Ink in your Universal Windows Platform app. You saw that you can enable inking by adding a single line of code, an InkCanvas, to your app. You also saw that adding one more line of code, the InkToolbar, gives the user additional pen-related tools like pen-stroke color and stroke type.

In this post, we’ll dig deeper into how we can further customize the pen and ink experience to make your application a delightful inking experience for the user. Let’s build a Coloring Book application!

Customizing The Inking Experience

Getting Started

To get started, let’s put in an InkCanvas on the page:

<InkCanvas x:Name="myInkCanvas"/>

By default, the InkCanvas’s input is set to only accept strokes from a Pen. However, we can change that by setting the InputDeviceTypes property of the InkCanvas’s InkPresenter. In the page constructor, we want to configure the InkCanvas so that it works for pen, mouse and touch:

myInkCanvas.InkPresenter.InputDeviceTypes = Windows.UI.Core.CoreInputDeviceTypes.Pen 
                | Windows.UI.Core.CoreInputDeviceTypes.Mouse 
                | Windows.UI.Core.CoreInputDeviceTypes.Touch;

As we did in the last article, we’ll add an InkToolbar and bind it to myInkCanvas, but this time we’re going to put it within a CommandBar. This is so we can keep it next to the other buttons that we’ll add later, like Save and Share.

<CommandBar Name="myCommandBar" IsOpen="True" >
    <CommandBar.Content>
        <InkToolbar x:Name="myInkToolbar" TargetInkCanvas="{x:Bind myInkCanvas}"/>
    </CommandBar.Content>
</CommandBar>

Note: If you see a XAML designer error when you add the InkToolbar, you can safely ignore this as it is a known issue that is being worked on. Your code will run fine.

However, this time, we also want to provide the user with some additional InkToolbar options. There are two main ways to do this with the InkToolbar; we can use a:

  • Built-in InkToolbar pen button
  • Custom InkToolbar pen button

Built-in InkToolbar pens

Let’s start with an example of a built-in option, the InkToolbarBallPointPenButton. This is an ‘out-of-the-box’ InkToolbar button that, when selected in the InkToolbar, activates the BallPointPen. To add this, you place it within the InkToolbar’s content, like so:

<CommandBar Name="myCommandBar" IsOpen="True" >
    <CommandBar.Content>
        <InkToolbar x:Name="myInkToolbar" TargetInkCanvas="{x:Bind myInkCanvas}">
            <InkToolbarBallpointPenButton Name="penButton" />
        </InkToolbar>
    </CommandBar.Content>
</CommandBar>

If you ran the app now, your InkToolbar would look like this:

picture1

Custom InkToolbar Pens

Creating a custom pen is rather straightforward and requires very little code. Let’s start with the basic requirement: We need to create a class that inherits from InkToolbarCustomPen and give it some attributes that define how it will draw.  Let’s take this step by step and make a custom highlighter marker.

First, let’s add a new class to your project.  Name the class “MarkerPen,” add the following using statements and inherit from InkToolbarCustomPen:

using Windows.UI;
using Windows.UI.Input.Inking;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

class MarkerPen : InkToolbarCustomPen
{
}

In this class, we only need to override the CreateInkDrawingAttributesCore method. Add the following method to the class now:

protected override InkDrawingAttributes CreateInkDrawingAttributesCore(Brush brush, double strokeWidth)
{
}

Within that method we can start setting some drawing attributes. This is done by making an instance of InkDrawingAttributes and setting some properties. Here are the attributes I’d like the pen to have:

  • Act like a highlighter
  • Have a round pen tip shape
  • Have red as the default stroke color
  • Be twice as thick as the user’s stroke setting

Here’s how we can fulfill those requirements:

InkDrawingAttributes inkDrawingAttributes = new InkDrawingAttributes();

// Set the PenTip (can also be a rectangle)
inkDrawingAttributes.PenTip = PenTipShape.Circle;

// Set the default color to Red 
SolidColorBrush solidColorBrush = brush as SolidColorBrush;
inkDrawingAttributes.Color = solidColorBrush?.Color ?? Colors.Red;

// Make sure it draws as a highlighter
inkDrawingAttributes.DrawAsHighlighter = true;

// Set the brush stroke
inkDrawingAttributes.Size = new Windows.Foundation.Size(strokeWidth * 2, strokeWidth * 2);

return inkDrawingAttributes;

That’s it, your custom pen is done. Here’s the completed class:

using Windows.UI;
using Windows.UI.Input.Inking;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

class MarkerPen : InkToolbarCustomPen
{
        protected override InkDrawingAttributes CreateInkDrawingAttributesCore(Brush brush, double strokeWidth)
        {
            InkDrawingAttributes inkDrawingAttributes = new InkDrawingAttributes();
            inkDrawingAttributes.PenTip = PenTipShape.Circle;
            SolidColorBrush solidColorBrush = brush as SolidColorBrush;
            inkDrawingAttributes.Color = solidColorBrush?.Color ?? Colors.Red;
            inkDrawingAttributes.DrawAsHighlighter = true;
            inkDrawingAttributes.Size = new Windows.Foundation.Size(strokeWidth * 2, strokeWidth * 2);
            return inkDrawingAttributes;
        }
}

Now, let’s go back to the page where you have your InkToolbar and InkCanvas. We want to create a Resources section for the page that contains a StaticResource instance of the custom pen. So, just above the root Grid element, add the following Resources code:

<Page ...> 

    <Page.Resources>
        <local:MarkerPen x:Key="MarkerPen"/>
    </Page.Resources>

    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
...
    </Grid>
</Page>

A quick note about XAML Resources: The page’s resources list is a key/value dictionary of objects that you can reference using the resource’s key. We’ve created an instance of our MarkerPen class, local:MarkerPen, and given it a key value of “MarkerPen” (if you want to learn more about XAML resources, see here).

We can now use that key in an InkToolbarCustomPenButton’s CustomPen property. This is best explained in code, so let’s break it down:

In your InkToolbar, add an InkToolbarCustomPenButton and give it a name:

<InkToolbar>
   <InkToolbarCustomPenButton Name="markerButton"></InkToolbarCustomPenButton>
</InkToolbar>

The InkToolbarCustomPenButton has a CustomPen property:

<InkToolbarCustomPenButton Name="markerButton" CustomPen="">

We can now set that CustomPen property using the key of our resource:

<InkToolbarCustomPenButton Name="markerButton" CustomPen="{StaticResource MarkerPen}">

Now, let’s set the SymbolIcon for the button:

<InkToolbarCustomPenButton Name="markerButton" CustomPen="{StaticResource MarkerPen}">
    <SymbolIcon Symbol="Highlight" />
</InkToolbarCustomPenButton>

Next, let’s add an InkToolbarPenConfigurationControl:

<InkToolbarCustomPenButton Name="markerButton" CustomPen="{StaticResource MarkerPen}">
    <SymbolIcon Symbol="Highlight" />
    <InkToolbarCustomPenButton.ConfigurationContent>
         <InkToolbarPenConfigurationControl />
    </InkToolbarCustomPenButton.ConfigurationContent>
</InkToolbarCustomPenButton>

Let’s take a look at what the InkToolbarPenConfigurationControl does for you. Even with a custom implementation of a pen, you still get to use the out-of-the-box Windows Ink components. If the user clicks on your pen after it’s selected, they’ll get a fly-out containing options to change the color and the size of the pen!

However, there’s one little tweak we want to make. By default, you get Black and White as the only colors in the flyout:

picture1

We want a lot of colors, and fortunately, the BallpointPenButton you added earlier has a palette full of colors. We can just use that same palette for our custom pen by binding to it:

<InkToolbarCustomPenButton Name="markerButton" CustomPen="{StaticResource MarkerPen}" Palette="{x:Bind penButton.Palette}" >

Now, here’s what the pen configuration control looks like after binding the Palette:

picture3

Whew, okay, the toolbar is coming along nicely! Here’s what we have so far for our CommandBar:

<CommandBar Name="myCommandBar" IsOpen="True">
    <CommandBar.Content>
        <InkToolbar x:Name="myInkToolbar" TargetInkCanvas="{x:Bind myInkCanvas}">
            <InkToolbarBallpointPenButton Name="penButton" />
            <InkToolbarCustomPenButton Name="markerButton" CustomPen="{StaticResource MarkerPen}" Palette="{x:Bind penButton.Palette}" >
                <SymbolIcon Symbol="Highlight" />
                <InkToolbarCustomPenButton.ConfigurationContent>
                    <InkToolbarPenConfigurationControl />
                </InkToolbarCustomPenButton.ConfigurationContent>
            </InkToolbarCustomPenButton>
        </InkToolbar>
    </CommandBar.Content>
</CommandBar>

Now, let’s start adding some commands.

Custom InkToolbar Tool Buttons

The first thing you’d really want in a drawing application is the ability to undo something. To do this we’ll want to add another button to the toolbar; this is easily done using an InkToolbarCustomToolButton. If you’re familiar with adding buttons to a CommandBar, you’ll feel right at home.

In your InkToolbar, add an InkToolbarCustomToolButton and give it a name, “undoButton.”

<InkToolbar x:Name="myInkToolbar" TargetInkCanvas="{x:Bind myInkCanvas}" Palette="{x:Bind penButton.Palette}" >
...
    <InkToolbarCustomToolButton Name="undoButton"></InkToolbarCustomToolButton>
</InkToolbar>

The button has your familiar button properties, such as a Click event and support for a SymbolIcon as content, so let’s add those as well.

Here’s what your XAML should look like:

<InkToolbar x:Name="myInkToolbar" TargetInkCanvas="{x:Bind myInkCanvas}" Palette="{x:Bind penButton.Palette}">
...
    <InkToolbarCustomToolButton Name="undoButton" Click="Undo_Click" >
        <SymbolIcon Symbol="Undo"/>
    </InkToolbarCustomToolButton>
</InkToolbar>

Now, let’s go to the button’s click event handler, where we can undo strokes that were applied to the InkPresenter. Here are the steps:

First, make sure you add the following using statement to the code-behind:

using Windows.UI.Input.Inking;

Then get all the strokes in the InkPresenter’s StrokeContainer:

IReadOnlyList<InkStroke> strokes = myInkCanvas.InkPresenter.StrokeContainer.GetStrokes();

Next, verify that there are strokes to undo before proceeding:

if (strokes.Count > 0)

If there are strokes, select the last one in the container:

strokes[strokes.Count - 1].Selected = true;

Finally, delete that selected stroke using DeleteSelected():

myInkCanvas.InkPresenter.StrokeContainer.DeleteSelected();

As you can see, it’s pretty easy to get access to the strokes that were made by the user and just as easy to remove a stroke. Here is the complete event handler:

private void Undo_Click(object sender, RoutedEventArgs e)
{
    // We can get a list of the strokes that are in the InkPresenter
    IReadOnlyList<InkStroke> strokes = myInkCanvas.InkPresenter.StrokeContainer.GetStrokes();

    // Make sure there are strokes to undo
    if (strokes.Count > 0)
    {
       // select the last stroke
       strokes[strokes.Count - 1].Selected = true;

       // Finally, delete the stroke
       myInkCanvas.InkPresenter.StrokeContainer.DeleteSelected();
    }
}

Final InkCanvas configuration

Before we conclude the drawing logic, we need to make sure the page loads with some InkDrawingAttributes presets and InkPresenter configuration. To do this, we can hook into the InkToolbar’s Loaded event.

We can do this in the XAML:

<InkToolbar x:Name="myInkToolbar" TargetInkCanvas="{x:Bind myInkCanvas}" Palette="{x:Bind penButton.Palette}" Loaded="InkToolbar_Loaded">

The attributes are set in a similar way to how we set them for the custom pen: instantiate an InkDrawingAttributes object and set some properties. This time, however, we pass those attributes to the InkPresenter.

Additionally, a few other things should be addressed:

  • Give the custom pen the same color palette as the ballpoint pen
  • Set the initial active tool
  • Make sure that users can also use the mouse

Here’s the code for the InkToolbar’s Loaded event handler:

private void InkToolbar_Loaded(object sender, RoutedEventArgs e)
{
    // Create an instance of InkDrawingAttributes
    InkDrawingAttributes drawingAttributes = new InkDrawingAttributes();

    // We want the pen pressure to be applied to the user's stroke
    drawingAttributes.IgnorePressure = false;

    // This will set it to that the ink stroke will use a Bezier curve instead of a collection of straight line segments
    drawingAttributes.FitToCurve = true;

    // Update the InkPresenter with the attributes
    myInkCanvas.InkPresenter.UpdateDefaultDrawingAttributes(drawingAttributes);

    // Set the initial active tool to our custom pen
    myInkToolbar.ActiveTool = markerButton;

    // Finally, make sure that the InkCanvas will work for a pen, mouse and touch
    myInkCanvas.InkPresenter.InputDeviceTypes = Windows.UI.Core.CoreInputDeviceTypes.Pen 
                | Windows.UI.Core.CoreInputDeviceTypes.Mouse 
                | Windows.UI.Core.CoreInputDeviceTypes.Touch;
}

Saving, Sharing and Loading

Now that you’ve got a decent working area, we want to be able to save, load and share the user’s work. In the last post, we showed a simple way to save and load the canvas. However, in our Coloring Book app, we want the image and the ink data saved separately, so that we can easily share and display the image, but also save, load and edit the inking data.

Saving Ink Data

As we did in the last post, you can save the ink strokes to a file using the StrokeContainer’s SaveAsync method. What we’ll do differently here is that right after we’ve saved the ink file, we’ll also save a parallel image file in the cache. Although we’re able to embed the stroke data into the GIF we saved, having a temporary image stored in the cache makes sharing and displaying the image in the app more convenient.

So, at the end of your save button’s click handler, you want to create a new (or get an existing) StorageFile for the image:

// Save inked image.
StorageFile myInkedImageFile = await folder.CreateFileAsync(Constants.inkedImageFile, CreationCollisionOption.ReplaceExisting);
await Save_InkedImagetoFile(myInkedImageFile);

Next, we pass the myInkedImageFile StorageFile reference to the Save_InkedImagetoFile method, which saves the image to the file:

private async Task Save_InkedImagetoFile(StorageFile saveFile)
{
    if (saveFile != null)
    {
…
        using (var outStream = await saveFile.OpenAsync(FileAccessMode.ReadWrite))
        {
            await Save_InkedImageToStream(outStream);
        }
…
     }
}

And finally, we get that bitmap from the canvas into the file in the Save_InkedImageToStream method; this is where we leverage Win2D to get a great looking bitmap from the canvas:

private async Task Save_InkedImageToStream(IRandomAccessStream stream)
{
    var file = await StorageFile.GetFileFromApplicationUriAsync(((BitmapImage)myImage.Source).UriSource);

    CanvasDevice device = CanvasDevice.GetSharedDevice();

    var image = await CanvasBitmap.LoadAsync(device, file.Path);

    using (var renderTarget = new CanvasRenderTarget(device, (int)myInkCanvas.ActualWidth, (int)myInkCanvas.ActualHeight, image.Dpi))
    {
        using (CanvasDrawingSession ds = renderTarget.CreateDrawingSession())
        {
            ds.Clear(Colors.White); 
            ds.DrawImage(image, new Rect(0, 0, (int)myInkCanvas.ActualWidth, (int)myInkCanvas.ActualHeight));
            ds.DrawInk(myInkCanvas.InkPresenter.StrokeContainer.GetStrokes());
         }

         await renderTarget.SaveAsync(stream, CanvasBitmapFileFormat.Png);
    }
}

You might ask, why is there a separate method for getting the stream instead of doing it all in one place? The first reason is that we want to be responsible developers and make sure our method names describe what the methods actually do. More importantly, though, we want to reuse this method later to share the user’s art. With a stream, it’s not only easier to share; you can even send the image to a printer.

Sharing the result

Now that the image is saved, we can share it. The approach here is the same as in other UWP sharing scenarios. You want to use the DataTransferManager; you can find many examples of how to use this in the official UWP samples on GitHub.
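Before a share request can arrive, the app has to register for it and invoke the share UI. Here’s a minimal sketch of that wiring; the page name and Share button handler are assumptions, and the DataRequested handler is the one shown below:

```csharp
public MainPage()
{
    this.InitializeComponent();

    // Register for the share data request; Windows will call our
    // DataRequested handler when the user invokes sharing.
    DataTransferManager.GetForCurrentView().DataRequested += DataRequested;
}

// A hypothetical Share button's click handler simply shows the share UI.
private void Share_Click(object sender, RoutedEventArgs e)
{
    DataTransferManager.ShowShareUI();
}
```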

For the purposes of this article, we’ll focus only on the DataTransferManager’s DataRequested event handler. (You can see the full sharing code in the Coloring Book demo on GitHub.) This is where the Save_InkedImageToStream method gets to be reused!

private async void DataRequested(DataTransferManager sender, DataRequestedEventArgs e)
{
    DataRequest request = e.Request;
    DataRequestDeferral deferral = request.GetDeferral();
    request.Data.Properties.Title = "A Coloring Page";
    request.Data.Properties.ApplicationName = "Coloring Book";
    request.Data.Properties.Description = "A coloring page sent from my Coloring Book app!";
    using (InMemoryRandomAccessStream inMemoryStream = new InMemoryRandomAccessStream())
    {
        await Save_InkedImageToStream(inMemoryStream);
        request.Data.SetBitmap(RandomAccessStreamReference.CreateFromStream(inMemoryStream));
    }

    deferral.Complete();
}

Loading Ink Data from a file

In our Coloring Book app, we want the user to continue working on previous drawings as if they never stopped. We’re able to save the ink file and capture and save the image of the work, but we also need to load the ink data properly.

In the last post, we covered how to load the strokes back in from the file; let’s review this now.

// Get a reference to the file that contains the inking stroke data
StorageFile inkFile = await folder.GetFileAsync(Constants.inkFile);

if (inkFile != null)
{
    IRandomAccessStream stream = await inkFile.OpenAsync(Windows.Storage.FileAccessMode.Read);

    using (var inputStream = stream.GetInputStreamAt(0))
    {
        // Load the strokes back into the StrokeContainer
        await myInkCanvas.InkPresenter.StrokeContainer.LoadAsync(inputStream);
    }

    stream.Dispose();
}

That’s all there is to loading a sketch’s ink data. All the strokes, and the ink’s attributes, will be loaded into the InkCanvas, and the user can continue working on his or her creation.

In the next post, we’ll look at some other real-world applications of Windows Ink and how inking can empower educational and enterprise applications. We’ll also take a look at some of the new hardware and APIs available that make using Windows Ink a go-to item for design professionals.

Resources

Windows Ink 1: Introduction to Ink and Pen

Using a pen with a computer has an interesting history that goes farther back than you’d think. In 1888, the first patent for an “electric stylus device for capturing handwriting” was issued to Elisha Gray for the Telautograph. In fact, pen input was being used 20 years before mouse and GUI input, with systems like the Styalator tablet demonstrated by Tom Dimond in the 1950s and the RAND tablet in the 1960s, both of which could recognize freehand writing and turn it into computer-recognizable characters and words.

In 1992, Microsoft made its first major entrance into the pen input space with Windows for Pen Computing, and the NCR tablet ran Windows 3.1 with pen input as an option for interacting with applications.

New ways to use Windows Ink

In the Windows 10 Anniversary Update, inking (pen input) has taken center stage. Microsoft recently announced the Surface Studio, an all-in-one machine designed to empower the creative process with a 28-inch, pen-enabled PixelSense screen. With such a large working area for the pen and the thin profile of the PC, the user can focus on what matters: the art.

In addition to having the work front and center, the user can now use new input methods, such as the Surface Dial, to access your application’s inking features. As a developer, you can leverage the RadialController APIs to make accessing those inking features a natural and smooth experience for the user.
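As a rough sketch of what that integration can look like (assuming the RadialController APIs from the Anniversary Update SDK; the menu item text and the mapping to stroke width are our own illustration):

```csharp
// Sketch: add a custom tool to the Surface Dial's radial menu and
// react to rotation events.
RadialController dialController = RadialController.CreateForCurrentView();

// Create a menu item from a known system icon and add it to the menu.
RadialControllerMenuItem strokeItem =
    RadialControllerMenuItem.CreateFromKnownIcon(
        "Stroke Size", RadialControllerMenuKnownIcon.PenType);

dialController.Menu.Items.Add(strokeItem);

dialController.RotationChanged += (sender, args) =>
{
    // args.RotationDeltaInDegrees reports how far the dial turned;
    // an inking app could map this to the active pen's stroke width.
};
```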

Let’s start exploring Windows Ink from two perspectives, the consumer and the developer.

User’s Perspective

On PCs with stylus support, the Windows Ink Workspace is front and center in the system tray. For the consumer, this is a highly convenient way to quickly access the applications in the Workspace: Sticky Notes, Sketchpad and Screen sketch, as you see here:

picture1

Depending on the PC’s pen you’re using, the pen can provide some natural interactions even before you start writing on the screen. Using a Surface Book as an example, the Surface Pen lets you quickly launch an application by clicking the pen’s eraser. One click, a double click or a click-and-hold can each perform a different action. Which action is taken depends on what the user has set; this is highly configurable from the PC’s Pen settings page, as seen here:

picture2

There are other settings you can configure to further customize your experience. Windows 10 already ignores your palm touching the screen while you’re writing, but you may want to ignore touch input altogether. These options can be set on the same settings pane:

picture3

Ignoring touch input while using the pen is disabled by default because there are great simultaneous pen and touch scenarios. A good example of this would be the Windows Ink ruler! You can use one hand for the pen and the other hand to move the ruler on the screen.

Now that we’ve taken a high-level look at the Windows 10 Anniversary Update’s inking features, let’s switch gears and look at it from a developer’s perspective.

Developer’s Perspective

Pen input and handwriting recognition have traditionally required a specialized developer skillset. You would have to detect the strokes made on the canvas and use complex algorithms to determine which character was written. With the Windows 10 Anniversary Update SDK, this is no longer the case. You can add inking support to your application with just a couple lines of code.

Let’s make a small example that lets the user draw in an area of your Universal Windows Platform (UWP) app. This example can be added to any UWP app that targets the Anniversary Update SDK.

To enable inking, you only need to add the following to your XAML.

<InkCanvas x:Name="inkCanvas" />

That’s it! Wherever you place the InkCanvas UIElement, the user can draw on it with a pen using the default ink settings. Here’s what it looks like at runtime after I’ve written a special message:

picture4

The InkCanvas’s built-in defaults make it very easy to get started. However, what if you wanted to let the user change the color of the ink, or the thickness of the stroke? You can add this functionality quickly with an InkToolbar UIElement in your XAML. The only thing you need to do to wire it up is tell it which InkCanvas to target:

<InkToolbar x:Name="inkToolbar" TargetInkCanvas="{x:Bind inkCanvas}" />

Note: If you see a XAML designer error when you add the InkToolbar, you can safely ignore this as it is a known issue that is being worked on. Your code will run fine.

Let’s rerun our test app and see what this looks like after using a couple of the InkToolbar’s default tools, the ruler and the ink color picker:

picture5

This is all you need to enable inking in the app. However, you might want to persist the user’s strokes so that they can be saved and reloaded at another time.

Saving and Loading Ink

You can embed the ink data within a GIF file so that you can save and load the user’s work. This is easily done using the InkPresenter, which is available as a read-only property of the InkCanvas.

Here’s an example of getting all the ink that’s on the canvas and saving it to a file:

        private async Task SaveInkAsync()
        {
            if (inkCanvas.InkPresenter.StrokeContainer.GetStrokes().Count > 0)
            {
                // Select a StorageFile location and set some file attributes
                var savePicker = new Windows.Storage.Pickers.FileSavePicker();
                savePicker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.PicturesLibrary;
                savePicker.FileTypeChoices.Add("Gif with embedded ISF", new List<string> {".gif"});

                var file = await savePicker.PickSaveFileAsync();

                if (null != file)
                {
                    using (IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.ReadWrite))
                    {
                        // This single method will get all the strokes and save them to the file
                        await inkCanvas.InkPresenter.StrokeContainer.SaveAsync(stream);
                    }
                }
            }
        }

Then, the next time the user wants to load an old drawing, or maybe you want to properly resume an application that was terminated, you only need to load that file back into the canvas. This is just as easy as saving:

        private async Task LoadInkAsync()
        {
            // Open a file picker
            var openPicker = new Windows.Storage.Pickers.FileOpenPicker();
            openPicker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.PicturesLibrary;

            // filter files to show both gifs (with embedded isf) and isf (ink) files
            openPicker.FileTypeFilter.Add(".gif");
            openPicker.FileTypeFilter.Add(".isf");

            var file = await openPicker.PickSingleFileAsync();

            if (null != file)
            {
                using (var stream = await file.OpenSequentialReadAsync())
                {
                    // Just like saving, it's only one method to load the ink into the canvas
                    await inkCanvas.InkPresenter.StrokeContainer.LoadAsync(stream);
                }
            }
        }

To see this code, and many other demos, take a look at the SimpleInk demo on the official Universal Windows Platform samples GitHub page.

What’s next?

Getting started with Windows Ink is quick and easy. However, you can also create some highly customized inking applications. In the next Windows Ink series post, we’ll dig deeper into the InkPresenter, pen attributes, custom pens and a custom InkToolbar, and explore a more complex ink data scenario that enables sharing and printing!

Resources

Getting personal – speech and inking (App Dev on Xbox series)

The way users interact with apps on different devices has become much more personal lately, thanks to a variety of new Natural User Interface features in the Universal Windows Platform. These UWP patterns and APIs make it easy for developers to bring capabilities into their apps that enable more human technologies. For the final blog post in the series, we have extended the Adventure Works sample to add support for ink on devices that support it, and for speech interaction where it makes sense (including both synthesis and recognition). Make sure to get the updated code for the Adventure Works sample from the GitHub repository so you can refer to it as you read on.

And in case you missed the blog post from last week on how to enable great social experiences, we covered how to connect your app to social networks such as Facebook and Twitter, how to enable second screen experiences through Project “Rome”, and how to take advantage of the UWP Maps control and make your app location aware. To read last week’s blog post or any of the other blog posts in the series, or to watch the recordings from the App Dev on Xbox live event that started it all, visit the App Dev on Xbox landing page.

Adventure Works (v3)

picture1

We are continuing to build on top of the Adventure Works sample app we worked with in the previous two blog posts. If you missed those, make sure to check them out here and here. As a reminder, Adventure Works is a social photo app that allows the user to:

  • Capture, edit, and store photos for a specific trip
  • Auto-analyze and auto-tag friends using Cognitive Services vision APIs
  • View albums from friends on an interactive map
  • Share albums on social networks like Facebook and Twitter
  • Use one device to remote control slideshows running on another device using Project “Rome”
  • and more …

There is always more to be done, and for this final round of improvements we will focus on two sets of features:

  1. Ink support to annotate images, enable natural text input, and serve as a presentation tool in connected slideshow mode.
  2. Speech Synthesis and Speech Recognition (with a little help from cognitive services for language understanding) to create a way to quickly access information using speech.

More Personal Computing with Ink

Inking in Windows 10 allows users with ink-capable devices to draw and annotate directly on the screen with a device like the Surface Pen – and if you don’t have a pen handy, you can use your finger or a mouse instead. Windows 10 built-in apps like Sticky Notes, Sketchpad and Screen sketch support inking, as do many Office products. Besides preserving drawings and annotations, inking also uses machine learning to recognize and convert ink to text. OneNote goes a step further by recognizing shapes and equations in addition to text.

picture2

Best of all, you can easily add inking functionality to your own apps, as we did for Adventure Works, with one line of XAML markup to create an InkCanvas. With just one more line, you can add an InkToolbar to your canvas that provides a color selector as well as buttons for drawing, erasing, highlighting, and displaying a ruler. (In case you have the Adventure Works project open, the InkCanvas and InkToolbar implementation can be found in PhotoPreviewView.)

<InkCanvas x:Name="Inker"></InkCanvas>
<InkToolbar TargetInkCanvas="{x:Bind Inker}" VerticalAlignment="Top"/>

The InkCanvas allows users to annotate their Adventure Works slideshow photos. This can be done both directly and remotely, through the Project “Rome” code highlighted in the previous post. When done on the same device, the ink strokes are saved to a GIF file, which is then associated with the original slideshow image.

picture3

When the image is displayed again during later viewings, the strokes are extracted from the GIF file, as shown in the code below, and inserted back into a canvas layered on top of the image in PhotoPreviewView. The code for saving and extracting ink strokes is found in the InkHelpers class.

var file = await StorageFile.GetFileFromPathAsync(filename);
if (file != null)
{
    using (var stream = await file.OpenReadAsync())
    {
        inker.InkPresenter.StrokeContainer.Clear();
        await inker.InkPresenter.StrokeContainer.LoadAsync(stream);
    }
}

Ink strokes can also be drawn on one device (like a Surface device) and displayed on another one (an Xbox One). In order to do this, the Adventure Works code actually collects the user’s pen strokes using the underlying InkPresenter object that powers the InkCanvas. It then converts the strokes into a byte array and serializes them over to the remote instance of the app. You can find out more about how this is implemented in Adventure Works by looking through the GetStrokeData method in SlideshowSlideView control and the SendStrokeUpdates method in SlideshowClientPage.
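The sample contains the exact implementation; as a rough sketch of the serialization half (the method name and structure here are ours, not the sample’s), the strokes can be written to an in-memory stream and read out as bytes:

```csharp
// Sketch: turn the current ink strokes into a byte array that can be
// sent to a remote instance of the app (hypothetical helper method).
private async Task<byte[]> GetStrokeBytesAsync(InkCanvas inker)
{
    using (var stream = new InMemoryRandomAccessStream())
    {
        // Serialize all strokes (as ISF) into the in-memory stream.
        await inker.InkPresenter.StrokeContainer.SaveAsync(stream);

        // Copy the stream contents out into a byte array.
        var bytes = new byte[stream.Size];
        using (var reader = new DataReader(stream.GetInputStreamAt(0)))
        {
            await reader.LoadAsync((uint)stream.Size);
            reader.ReadBytes(bytes);
        }
        return bytes;
    }
}
```

On the receiving side, the bytes can be written back into a stream and handed to StrokeContainer.LoadAsync to redraw the strokes.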

It is sometimes useful to save the ink strokes and original image in a new file. In Adventure Works, this is done to create a thumbnail version of an annotated slide for quick display as well as for uploading to Facebook. You can find the code used to combine an image file with an ink stroke annotation in the RenderImageWithInkToFileAsync method in the InkHelpers class. It uses the Win2D DrawImage and DrawInk methods of a CanvasDrawingSession object to blend the two together, as shown in the snippet below.

CanvasDevice device = CanvasDevice.GetSharedDevice();
CanvasRenderTarget renderTarget = new CanvasRenderTarget(device, (int)inker.ActualWidth, (int)inker.ActualHeight, 96);

var image = await CanvasBitmap.LoadAsync(device, imageStream);
using (var ds = renderTarget.CreateDrawingSession())
{
    var imageBounds = image.GetBounds(device);
                
    //...

    ds.Clear(Colors.White);
    ds.DrawImage(image, new Rect(0, 0, inker.ActualWidth, inker.ActualHeight), imageBounds);
    ds.DrawInk(inker.InkPresenter.StrokeContainer.GetStrokes());
}
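Once the drawing session is disposed, the render target holds the blended result. Writing it out might look like the sketch below; the `outputFile` variable and PNG format are illustrative, not necessarily what the sample uses.

```csharp
// Sketch: encode the blended render target out to a file.
// 'outputFile' is assumed to be a StorageFile created elsewhere.
using (var outStream = await outputFile.OpenAsync(FileAccessMode.ReadWrite))
{
    await renderTarget.SaveAsync(outStream, CanvasBitmapFileFormat.Png);
}
```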

Ink Text Recognition

picture4

Adventure Works also takes advantage of Inking’s text recognition feature to let users handwrite the name of their newly created Adventures. This capability is extremely useful if someone is running your app in tablet mode with a pen and doesn’t want to bother with the onscreen keyboard. Converting ink to text relies on the InkRecognizer class. Adventure Works encapsulates this functionality in a templated control called InkOverlay which you can reuse in your own code. The core implementation of ink to text really just requires instantiating an InkRecognizerContainer and then calling its RecognizeAsync method.

var inkRecognizer = new InkRecognizerContainer();
var recognitionResults = await inkRecognizer.RecognizeAsync(_inker.InkPresenter.StrokeContainer, InkRecognitionTarget.All);
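RecognizeAsync returns a list of InkRecognitionResult objects; reading the recognized text out of them might look roughly like this sketch:

```csharp
// Sketch: take the top text candidate from each recognition result
// and join them into a single string.
var builder = new StringBuilder();
foreach (var result in recognitionResults)
{
    var candidates = result.GetTextCandidates();
    if (candidates.Count > 0)
    {
        builder.Append(candidates[0]);
        builder.Append(' ');
    }
}
string recognizedText = builder.ToString().Trim();
```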

You can imagine how powerful this is when a user has a large form to fill out on a tablet device and doesn’t want to use the onscreen keyboard.

More Personal Computing with Speech

There are two sets of APIs that are used in Adventure Works that enable a great natural experience using speech. First, UWP speech APIs allow developers to integrate speech-to-text (recognition) and text-to-speech (synthesis) into their UWP apps. Speech recognition converts words spoken by the user into text for form input, for text dictation, to specify an action or command, and to accomplish tasks. Both free-text dictation and custom grammars authored using Speech Recognition Grammar Specification are supported.

Second, Language Understanding Intelligent Service (LUIS) is a Microsoft Cognitive Services API that uses machine learning to help your app figure out what people are trying to say. For instance, if someone wants to order food, they might say “find me a restaurant” or “I’m hungry” or “feed me”. You might try a brute force approach to recognize the intent to order food, listing out all the variations on the concept “order food” that you can think of – but of course you’re going to come up short. LUIS lets you set up a model for the “order food” intent that learns, over time, what people are trying to say.

In Adventure Works, these features are combined to create a variety of speech related functionalities. For instance, the app can listen for an utterance like “Adventure Works, start my latest slideshow” and it will naturally open a slideshow for you when it hears this command. It can also respond using speech when appropriate to answer a question. LUIS, in turn, augments this speech recognition with language understanding to improve the recognition of natural language phrases.

picture5

The speech capabilities for our app are wrapped in a simple assistant called Adventure Works Aide (look for AdventureWorksAideView.xaml). Saying the phrase “Adventure Works…” will invoke it. It will then listen for spoken patterns such as:

  • “What adventures are in <location>?”
  • “Show me <person>’s adventure.”
  • “Who is closest to me?”

Adventure Works Aide is powered by a custom SpeechService class. Two SpeechRecognizer instances are used at different times: the first listens for the “Adventure Works” phrase at any time:

_continousSpeechRecognizer = new SpeechRecognizer();
_continousSpeechRecognizer.Constraints.Add(new SpeechRecognitionListConstraint(new List<String>() { "Adventure Works" }, "start"));
var result = await _continousSpeechRecognizer.CompileConstraintsAsync();
//...
await _continousSpeechRecognizer.ContinuousRecognitionSession.StartAsync(SpeechContinuousRecognitionMode.Default);

and then to understand free-form natural language and convert it to text:

_speechRecognizer = new SpeechRecognizer();
var result = await _speechRecognizer.CompileConstraintsAsync();
SpeechRecognitionResult speechRecognitionResult = await _speechRecognizer.RecognizeAsync();
if (speechRecognitionResult.Status == SpeechRecognitionResultStatus.Success)
{
    string str = speechRecognitionResult.Text;
}

As you can see, the SpeechRecognizer API is used both to listen continuously for specific constraints throughout the lifetime of the app and to convert free-form speech to text at a specific moment. The continuous recognition session can be set to recognize phrases from a list of strings, or it can use a more structured SRGS grammar file, which provides the greatest control over speech recognition by allowing multiple semantic meanings to be recognized at once. However, because we want to understand every variation the user might say and use LUIS for our semantic understanding, we use free-form speech recognition with the default constraints.
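The actual wake-phrase wiring lives in the SpeechService class; hooking the continuous session's result event might look roughly like this sketch (the handler body here is illustrative):

```csharp
// Sketch: react when the continuous session recognizes a phrase from
// the "Adventure Works" list constraint compiled earlier.
_continousSpeechRecognizer.ContinuousRecognitionSession.ResultGenerated +=
    (session, args) =>
    {
        if (args.Result.Text.Equals("Adventure Works",
            StringComparison.OrdinalIgnoreCase))
        {
            // Wake up: stop continuous listening and switch to
            // free-form dictation (see WakeUpAndListen).
        }
    };
```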

Note: before using any of the speech APIs on Xbox, the user must give your application permission to access the microphone. Not all APIs show the permission dialog automatically, so you may need to invoke it yourself. Check out the CheckForMicrophonePermission function in SpeechService.cs to see how this is done in Adventure Works.

When the continuous speech recognizer recognizes the key phrase, it immediately stops listening, shows the UI for the AdventureWorksAide to let the user know that it’s listening, and starts listening for natural language.

await _continousSpeechRecognizer.ContinuousRecognitionSession.CancelAsync();
ShowUI();
SpeakAsync("hey!");
var spokenText = await ListenForText();

Subsequent utterances are passed on to LUIS which uses training data we have provided to create a machine learning model to identify specific intents. For this app, we have three different intents that can be recognized: showuser, showmap, and whoisclosest (but you can always add more). We have also defined an entity for username for LUIS to provide us with the name of the user when the showuser intent has been recognized. LUIS also provides several pre-built entities that have been trained for specific types of data; in this case, we are using an entity for geography locations in the showmap intent.

picture6

To use LUIS in the app, we used the official NuGet library, which allowed us to register specific handlers for each intent when we send over a phrase.

var handlers = new LUISIntentHandlers();
_router = IntentRouter.Setup(Keys.LUISAppId, Keys.LUISAzureSubscriptionKey, handlers, false);
var handled = await _router.Route(text, null);

Take a look at the HandleIntent method in the LUISAPI.cs file and the LUISIntentHandlers class, which handles each intent defined in the LUIS portal and is a useful reference for future LUIS implementations.

Finally, once the text has been processed by LUIS and the intent has been processed by the app, the AdventureWorksAide might need to respond to the user using speech; for that, the SpeechService uses the SpeechSynthesizer API:

_speechSynthesizer = new SpeechSynthesizer();
var syntStream = await _speechSynthesizer.SynthesizeTextToStreamAsync(toSpeak);
_player = new MediaPlayer();
_player.Source = MediaSource.CreateFromStream(syntStream, syntStream.ContentType);
_player.Play();

The SpeechSynthesizer API can use a specific voice for generation, chosen from the voices installed on the system, and it can even consume SSML (Speech Synthesis Markup Language) to control how the speech is generated, including volume, pronunciation and pitch.
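For instance, the SSML route goes through SynthesizeSsmlToStreamAsync, the SSML-aware counterpart of the SynthesizeTextToStreamAsync call shown above. The markup below is only a sketch, not taken from the app:

```csharp
// Sketch: SSML controls rate, pitch and pronunciation of the output.
string ssml =
    "<speak version='1.0' " +
    "xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>" +
    "Hey! <prosody rate='slow' pitch='+10%'>Your slideshow is ready.</prosody>" +
    "</speak>";
var ssmlStream = await _speechSynthesizer.SynthesizeSsmlToStreamAsync(ssml);
```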

The entire flow, from invoking the Adventure Works Aide to sending the spoken text to LUIS, and finally responding to the user is handled in the WakeUpAndListen method.

There’s more

Though not used in the current version of the project, there are other APIs that you can take advantage of for your apps, both as part of the UWP platform and as part of Cognitive Services.

For example, on desktop and mobile devices, Cortana can recognize speech or text directly from the Cortana canvas and activate your app or initiate an action on behalf of your app. It can also expose actions to the user based on insights about them, and with user permission it can even complete the action for them. Using a Voice Command Definition (VCD) file, developers have the option to add commands directly to the Cortana command set (commands like “Hey Cortana, show adventures in Europe in Adventure Works”). Cortana app integration is also part of our long-term plans for voice support on Xbox, even though it is not supported today. Visit the Cortana portal for more info.

In addition, there are several speech and language related Cognitive Services APIs that are simply too cool not to mention:

  • Custom Recognition Service – Overcomes speech recognition barriers like speaking style, background noise, and vocabulary.
  • Speaker Recognition – Identify individual speakers or use speech as a means of authentication with the Speaker Recognition API.
  • Linguistic Analysis – Simplify complex language concepts and parse text with the Linguistic Analysis API.
  • Translator – Translate speech and text with a simple REST API call.
  • Bing Spell Check – Detect and correct spelling mistakes within your app.

The more personal computing features provided through Cognitive Services are constantly being refreshed, so be sure to check back often to see what new machine learning capabilities have been made available to you.

That’s all folks

This was the last blog post (and sample app) in the App Dev on Xbox series, but if you have a great idea that we should cover, please let us know; we are always looking for cool app ideas to build and features to implement. Make sure to check out the app source on our official GitHub repository, read through some of the resources provided, read through some of the other blog posts or watch the event if you missed it, and let us know what you think in the comments below or on Twitter.

Happy coding!

Resources

Previous Xbox Series Posts

Creating a Custom Ruler with DirectInk


1_InkCanvasRuler_RulerHeader

The Windows 10 Anniversary Update comes with a great set of enhancements to digital ink for both the user and the developer. Beyond pointing you at general inking resources, this post will show you how to create a custom ruler with DirectInk.

There has been a lot of excitement and discussion around these features since they were first shown at //build/ 2016. Li-Chen’s “Closer Look at Windows Ink” post from April and Pete’s recent “The Ink Canvas and Ruler: combining art and technology” both dive into some of those enhancements, including the new Windows Ink Workspace, Sticky Notes and Sketchpad.

For the developer, Pete’s post gets you started with adding ink to your app, and you can go on a deeper dive with Scott and Xiao from the Windows Inking team in their //build/ 2016 session. Watch here: “Pen and Ink: Inking at the Speed of Thought.”

That session included a discussion on how the new and greatly improved InkToolbar control is now a part of the platform, and how the bar for producing an inking experience has been lowered so that with “just 2 lines of markup” you can produce a very usable UI offering three different types of pen.

The snippet below wraps those “2 lines of markup” into a Grid container.

InkCanvasRuler Code1
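The snippet in that image is not reproduced here as text; based on the two lines of markup quoted elsewhere on this page, a reconstruction likely looks along these lines:

```xml
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
  <InkCanvas x:Name="MyInkCanvas" />
  <InkToolbar TargetInkCanvas="{x:Bind MyInkCanvas}"
              VerticalAlignment="Top" />
</Grid>
```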

These pen types (ballpoint, pencil, highlighter) are then customizable in terms of color and size, as this image illustrates.

InkCanvasRuler PenTypes

Those “2 lines of markup” also give you an eraser button, and the InkToolbar opens up its theming, standard controls, and the option for you as a developer to add your own controls.

This combination of InkToolbar and InkCanvas makes getting started with digital inking pretty easy with the Anniversary Update.

The Ruler

One of the most innovative and interesting things that the InkToolbar control does by default is to provide a button that enables the ruler, as shown below.

InkCanvasRuler StraightLines (1)

The platform provides for simultaneous touch and pen so that the user can manipulate the ruler with one hand while drawing with the other. It’s one of those things that has to be tried rather than explained, as there’s a specific feeling that comes from drawing with a virtual ruler for the first time.

Those who have previously used platform pieces like InkCanvas may wonder how this ruler is implemented as it’s new in the Anniversary Update and relies upon some additions to the DirectInk APIs.

One of the big advantages of DirectInk is that the software stack works closely with the hardware so ink can be smoothly captured from the digitizer and presented on the screen.

The documentation for DirectInk talks in terms of “wet” and “dry” ink; the input is processed on a low-latency background thread and rendered “wet” until the ink stroke is completed and picked up by the UI thread to be rendered “dry” onto the InkCanvas. For specialized scenarios where a developer needs complete control over the ink rendering, it is possible to implement “custom drying” and take over full responsibility of rendering dry ink in any way that the application needs to. There are samples of these techniques on GitHub using C# and Win2D and using C++ and Direct2D.

However, what was not previously possible was to add code to affect the drawing strokes a user generated as they were drawing “wet” ink onto the InkCanvas. With the Anniversary Update, that becomes possible and opens the door to custom implementations similar in nature to the ruler.

Custom Rulers – Simulating Graph Paper

I associate drawing with a ruler with being back at school and using graph paper to make it easy to draw boxes and lines. Let’s illustrate the basics of a custom ruler by using a surface that gives the appearance of graph paper.

With that in mind, I wrote a custom user control called GraphPaperUserControl; it is available from this GraphPaperControl GitHub repository.

This control uses Win2D to tile its display surface with a light blue grid of a size that is controlled by the control’s single property, GridSize. The GitHub repository includes a simple test application which binds the GridSize property to the value of a Slider as shown in the diagram below.

InkCanvasRuler_GraphPaper

I am going to use this control in a separate Windows 10 Anniversary Update project, so I used Visual Studio’s New Project menu to create a “Blank App (Universal Windows)” in C#, copied in the XAML and code-behind files for the GraphPaperUserControl, and added a reference to the Win2D.uwp NuGet package.

I then made a simple, layered interface in my MainPage.xaml file using this control in combination with an InkCanvas and an InkToolbar, and a simple TextBlock to display one of three modes of operation.

  <Grid
    Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <ctrl:GraphPaperUserControl
      xmlns:ctrl="using:GraphPaperControl.UserControls"
      x:Name="graphPaper" />
    <InkCanvas
      x:Name="inkCanvas"
      ManipulationDelta="OnInkCanvasManipulationDelta"
      ManipulationMode="Scale"
      Tapped="OnInkCanvasTapped" />
    <InkToolbar
      HorizontalAlignment="Right"
      VerticalAlignment="Top"
      TargetInkCanvas="{Binding ElementName=inkCanvas}" />
    <TextBlock
      x:Name="txtMode"
      FontSize="18"
      HorizontalAlignment="Left"
      VerticalAlignment="Bottom"
      Margin="48" />
  </Grid>

Note that the ManipulationDelta event is being handled here and that the ManipulationMode is set to Scale, meaning that the user can perform a “pinch to scale” gesture on the InkCanvas.

The accompanying code-behind below includes the event handler that changes the GridSize on the underlying GraphPaperUserControl in response to this gesture. It also includes a handler for the tapped event that checks for a touch event before toggling between one of three drawing modes – Freeform drawing, Snap to X and Snap to Y.

namespace InkArticleApp
{
  using System;
  using Windows.Devices.Input;
  using Windows.UI.Xaml.Controls;
  using Windows.UI.Xaml.Input;

  public sealed partial class MainPage : Page
  {
    enum Mode
    {
      Freeform = 0,
      SnapX = 1,
      SnapY = 2
    }
    public MainPage()
    {
      this.InitializeComponent();
      this.graphPaper.GridSize = BASE_GRID_SIZE;
      this.currentScaleFactor = 1.0m;
      this.currentMode = Mode.Freeform;
      this.UpdateModeText();
    }
    void UpdateModeText() => this.txtMode.Text = this.currentMode.ToString();

    void OnInkCanvasManipulationDelta(object sender, ManipulationDeltaRoutedEventArgs e)
    {
      var newScaleFactor = (decimal)e.Delta.Scale * this.currentScaleFactor;

      if ((newScaleFactor <= MAX_SCALE_FACTOR) && (newScaleFactor >= MIN_SCALE_FACTOR))
      {
        this.currentScaleFactor = newScaleFactor;

        var newGridSize = (int)(this.currentScaleFactor * BASE_GRID_SIZE);

        if (newGridSize != this.graphPaper.GridSize)
        {
          this.graphPaper.GridSize = newGridSize;
        }
      }
    }
    void OnInkCanvasTapped(object sender, TappedRoutedEventArgs e)
    {
      if (e.PointerDeviceType == PointerDeviceType.Touch)
      {
        // Apologies for doing such a horrible thing to an enum.
        this.currentMode = 
          (Mode)((((int)this.currentMode) + 1) % ((int)Mode.SnapY + 1));

        this.UpdateModeText();
      }
    }
    Mode currentMode;
    decimal currentScaleFactor;
    static readonly decimal MAX_SCALE_FACTOR = 8.0m;
    static readonly decimal MIN_SCALE_FACTOR = 0.5m;
    static readonly int BASE_GRID_SIZE = 20;
  }
}

This UI presents graph paper “underneath” the InkCanvas; the grid drawn on the graph paper can be resized with a pinch gesture, and the drawing mode can be toggled by tapping. The screenshots below show this in use. It’s key to note that handling these touch events does not alter the inking experience.

9_InkCanvasRuler_SmallGrid

10_InkCanvasRuler_SmallGrid

Snapping Ink to Grid Lines

The remaining piece of work in this small example is to snap “wet” ink to the grid lines as it is being drawn onto the InkCanvas in accordance with the drawing mode that the user has set by tapping on the canvas.

This involves handling new events in the Anniversary Update APIs presented by the class CoreWetStrokeUpdateSource from the namespace Windows.UI.Input.Inking.Core.

The CoreWetStrokeUpdateSource class provides a static factory method for construction; it takes the InkPresenter as its argument and then the returned object fires the set of “wet” ink events representing stroke start, continuing, stop, canceled and completed.

I added a Loaded handler to my page and handled only two of those events in my example.

    /// <summary>
    /// event handler for MainPage.Loaded
    /// </summary>
    void OnLoaded(object sender, Windows.UI.Xaml.RoutedEventArgs e)
    {
      // Added a member variable of type CoreWetStrokeUpdateSource called 'wetUpdateSource'
      this.wetUpdateSource = CoreWetStrokeUpdateSource.Create(this.inkCanvas.InkPresenter);
      this.wetUpdateSource.WetStrokeStarting += OnStrokeStarting;
      this.wetUpdateSource.WetStrokeContinuing += OnStrokeContinuing;
    }

These events are fired on the dedicated background input processing thread and, naturally, the intention would be to run as little code as possible to keep the inking experience smooth and fluid.

In my scenario, I handle the WetStrokeStarting event by looking at the first ink point produced and then (depending on the current drawing mode) storing the X or Y coordinate that any ink should be snapped to until the next ink stroke begins.

Note that the CoreWetStrokeUpdateEventArgs argument that is passed to the event handler is used here only for the NewInkPoints property but it does contain additional information including the PointerId. Also, the code presented here is more for illustration of the ideas than optimized for performance.

    void OnStrokeStarting(CoreWetStrokeUpdateSource sender, CoreWetStrokeUpdateEventArgs args)
    {
      // I am assuming that we do get a first ink point.
      InkPoint firstPoint = args.NewInkPoints.First();

      // as the stroke is starting, reset our member variables which store
      // which X or Y point we want to snap to.
      this.snapX = this.snapY = null;

      // now decide whether we need to set up a snap point for the X value or
      // one for the Y value.
      if (this.currentMode == Mode.SnapX)
      {
        this.snapX = this.NearestGridSizeMultiple(firstPoint.Position.X);
      }
      else if (this.currentMode == Mode.SnapY)
      {
        this.snapY = this.NearestGridSizeMultiple(firstPoint.Position.Y);
      }
      this.SnapPoints(args.NewInkPoints);
    }
    double? snapX;
    double? snapY;

This handler function makes use of the NearestGridSizeMultiple function to determine the X or Y value of the nearest grid line.

Returning to the earlier discussion around threading, note that the comment in the method below relates to the addition of a new member variable which keeps track of the current grid size of the graph paper even though the GraphPaperUserControl already stores this in its GridSize property.

    double NearestGridSizeMultiple(double value)
    {
      // Note. I have added a new member variable 'currentGridSize' which I keep
      // in sync with the GridSize of the GraphPaperUserControl.
      // This is because this code runs on a non-UI thread so it cannot simply
      // call into that property on the user control which has thread affinity.

      var divisor = value / this.currentGridSize;
      var fractional = divisor - Math.Floor(divisor);

      if (fractional >= 0.5)
      {
        divisor = Math.Ceiling(divisor);
      }
      else
      {
        divisor = Math.Floor(divisor);
      }
      return (divisor * this.currentGridSize);
    }
    int currentGridSize;

Lastly, the member function SnapPoints is invoked on the newly produced “wet” InkPoints so that the original values can be replaced with new values that are identical except that their X or Y coordinates are snapped to the nearest grid lines, if snap points have been determined.

    void SnapPoints(IList<InkPoint> newInkPoints)
    {
      // do we need to do any snapping?
      if (this.currentMode != Mode.Freeform)
      {
        for (int i = 0; i < newInkPoints.Count; i++)
        {
          if (this.snapX.HasValue)
          {
            // replace this point with the same point but with the X value snapped.
            newInkPoints[i] = new InkPoint(
              new Point(this.snapX.Value, newInkPoints[i].Position.Y),
              newInkPoints[i].Pressure);
          }
          else if (this.snapY.HasValue)
          {
            // replace this point with the same point but with the Y value snapped.
            newInkPoints[i] = new InkPoint(
              new Point(newInkPoints[i].Position.X, this.snapY.Value),
              newInkPoints[i].Pressure);
          }
        }
      }
    }

That same member function is invoked from the event handler that deals with the continuation of the “wet” ink stroke, so those ink points are snapped as well.

    void OnStrokeContinuing(CoreWetStrokeUpdateSource sender, CoreWetStrokeUpdateEventArgs args)
    {
      this.SnapPoints(args.NewInkPoints);
    }

Those small pieces of code are enough to change the behavior of the ink as it is being drawn, as the screenshots below illustrate.

12_InkCanvasRuler_Freeform

Wrapping Up

The full code for this article is available in the InkArticleApp GitHub repository for you to download and use as the basis for your own experiments.

The graph paper example here is a simple one, but these new capabilities open up all kinds of scenarios for different shapes and sizes of rulers and stencils, and perhaps more complex tooling for particular types of diagramming. It’s not hard to imagine tools like Visio implementing custom rulers for building plans or network diagrams.

No doubt, you have your own scenarios that could benefit from digital ink and we look forward to seeing what you build.

Mike Taulty, Developer Evangelist, Microsoft UK

mtaulty.com
Twitter: @mtaulty

Interested in more capabilities with inking?

Check out this guide to pen and stylus interactions on MSDN: “Pen and stylus interactions in UWP apps.”

And this recent episode of Channel 9’s Context show, focusing on inking with lots of code demos:

4_InkCanvasRuler_ContextScreenshot

Get started with Visual Studio!

The Ink Canvas and Ruler: combining art and technology

Inking_Header

How easy is it to integrate Inking into your app?

As easy as one line of XAML.

In this post, we’re going to walk you through the new Windows 10 Inking capabilities, which are making communication via writing and drawing easier than ever for users. You’ll learn how to implement these new capabilities in your Windows app and how they will improve your users’ experience.

The easiest way for Universal Windows Platform developers to hook into this capability is through the Ink Canvas. We’ll start out with some examples of this.

The built-in experience

There are many potential applications of Inking. As a starting point, it’s helpful to think of a typical user of Inking apps – for instance, an insurance adjuster who spends more time in the field than in the office. The adjuster is going to want to be able to take notes and make comments around her documents throughout the day.

There are three built-in Windows Ink experiences always available at the tip of your thumb – if you click your pen, the Windows Ink Workspace will appear. Windows Ink Workspace provides access to the Sticky Notes, Sketchpad and Screen sketch apps, which were all built using the XAML Ink Canvas. The Workspace also provides links to recently used and new apps that support Windows Ink.

IMAGE1

Sticky Notes, as you might expect, lets you write reminders to yourself and place them on your desktop. What’s especially cool is that Cortana is integrated with Sticky Notes so it can pull in reminders and put them on your calendar.

This will be extremely helpful for our adjuster, who will constantly be making appointments throughout the day and then – because that’s the nature of the job – changing them again. Real-world sticky notes, written in the car perhaps, are simply going to get lost. These Ink-enabled sticky notes not only keep all the notes in one place but can also digitize the information and integrate it into a workflow with other apps.

IMAGE2

Sketchpad lets you take notes, doodle and free-associate as you would with a regular notepad. It uses a customized Windows Ink toolbar that adds undo, copy and save functionality as well as sharing.

This is probably also going to be a great app for our adjuster. She can simultaneously take notes about the claim while making helpful diagrams of the house, car or business about which she is collecting information. Best of all, it’s all together in one document rather than spread out between multiple drawing and writing apps.

IMAGE3

Finally, Screen sketch allows you to take a screenshot and then add redlining, marginal notes and, of course, the occasional doodle. Because of its simplicity, it can easily be used to annotate a picture of anything taken with your device’s camera. Then you can add comments such as the date the picture was taken – for instance, a picture of a whiteboard that you need to save before someone erases it.

And because you’re using a device that you carry with you, you don’t have to get back to your desk to add notes or even to share it with the other attendees of your meeting. There’s even a custom share button on the toolbar that lets you distribute your screen sketch immediately.

IMAGE4

For our insurance adjuster, taking a picture of a whiteboard is probably not going to be that helpful. Being able to take pictures of a dented car or leaking ceiling, however, and then make comments based on what she observes around the damage is going to save both time and headache. She can take pictures and make notes all at once, while the information and the observations are still fresh.

How easy is it to implement?

<InkCanvas x:Name="MyInkCanvas"></InkCanvas>

One line of XAML lights up the capabilities of Windows Ink in your app. The InkCanvas gives you the ability to draw directly in the canvas space. It takes just one more line of XAML to add an inking toolbar.

<InkCanvas x:Name="MyInkCanvas"></InkCanvas>
<InkToolbar TargetInkCanvas="{x:Bind MyInkCanvas}" VerticalAlignment="Top" />

The InkToolbar includes buttons for drawing, erasing, highlighting, and displaying a ruler.

IMAGE5

The InkToolbar is also extensible, allowing developers to add custom tools (InkToolbarCustomPenButton and InkToolbarCustomToolButton) between the highlighter and eraser buttons as well as custom toggle buttons to the right of the ruler.
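A hypothetical sketch of that extensibility is shown below; the button name and icon are illustrative, not taken from a sample:

```xml
<InkToolbar TargetInkCanvas="{x:Bind MyInkCanvas}">
  <!-- A custom tool slotted in alongside the built-in pens -->
  <InkToolbarCustomToolButton x:Name="MyShapeTool">
    <SymbolIcon Symbol="TouchPointer" />
  </InkToolbarCustomToolButton>
</InkToolbar>
```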

Pen and touch, art and technology

The ruler really epitomizes what is cool about this technology – but it does so in a subtle way. As with a physical ruler, you can move InkToolbar’s ruler with the fingers of one hand and then start drawing with the pen in your other hand. In both cases, whether you are performing direct manipulations with a pen or your fingers, you can rest your palms on the drawing surface just as you would with pen and paper.

IMAGE6

Making technology appear natural takes a lot of work – being natural ain’t easy – and a lot of things are happening at the same time. First, Windows Ink actively distinguishes between a pen and a finger. Most drawing technologies over the years haven’t been able to do that, either ignoring one input type or treating stylus and fingers as indistinguishable.

Second, Windows Ink distinguishes a palm from pen and touch. The stray palm has been a bugbear of touch interfaces over the years. People naturally rest their palms on surfaces because 1) it is comfortable and 2) we don’t think of our palms as drawing instruments. With Windows Ink, this isn’t a problem any longer.

By removing these artifacts of the underlying technology, Windows Ink removes distractions (and frustrations) and lets the user just get on with her tasks. Because we use our real-life pens and paper to doodle as well as to communicate, this means artistic and work-related tasks are accomplished more easily and with more enjoyment.

Wrapping up

Keyboards are still going to be a part of computing for a long time to come; there are so many things, such as data entry and word processing, that are simply better done with a keyboard and mouse. In many situations, however, Windows Ink is going to be the better tool for the job because it provides more natural and intuitive interaction for people on-the-go who don’t spend most of their working hours sitting at a desk. Because of the ease of implementation, developers will be able to quickly extend their current apps with inking capability or even develop new functionality around it.

If you’d like to read and see more about Windows Ink, here are some articles and videos you will find interesting:

Get started with Visual Studio.