Tag Archives: APIs

MEF releases Sonata and Presto APIs, partners with ONAP

MEF published the Sonata and Presto APIs and said it would work together with the Open Network Automation Platform to determine how automation and orchestration can be managed in software-based environments using MEF APIs as a foundation. The groups made the announcements this week at the SDN NFV World Congress in The Hague, Netherlands.

“MEF and ONAP both see a future where we have services delivered by service providers that span multiple operators, operator domains and technology domains — such as 5G, optical, packet WAN and so on,” said Daniel Bar-Lev, MEF’s director for the office of the CTO.

To operate efficiently, providers need these services to be automated and orchestrated, he added. By aligning their approaches — ONAP from the implementation side and MEF from the conceptual definition side — he said the organizations can provide end-to-end service consistency and avoid market fragmentation.

Keeping silos to a minimum

“But we need to be aware that with so many organizations, players, projects and acronyms, we don’t simply get rid of old silos and create new silos,” Bar-Lev said. “Because if everybody’s doing things their own way, then all we do is create new islands of implementation that will need to be joined up.”

MEF, formerly known as the Metro Ethernet Forum, represents service providers worldwide. ONAP, a Linux Foundation project, focuses on projects designed to help service providers employ automation, virtual functions and other technologies within their operations.

Combined, the two groups have more than 250 members — a number that forms a good portion of the market, according to Bar-Lev. Part of the collaboration includes an agreement between MEF and ONAP on a defined set of northbound APIs — such as MEF’s previously published LSO Legato APIs — that will be used for any ONAP instantiation, he said.

“When we’ve achieved that, it means whoever uses ONAP can then also take advantage of all the east-west APIs we’re defining, because ONAP doesn’t deal with east-west — it really focuses on a single operational domain,” he said.

The two groups will focus on APIs for now, moving to federated information models and security and policy objectives in the future, Bar-Lev said.

MEF releases LSO Sonata and Presto APIs and SDKs

Earlier this week, MEF released two of its open APIs within its Lifecycle Service Orchestration (LSO) Reference Architecture and Framework. The Sonata and Presto APIs and their corresponding software development kits (SDKs) are now available for use among MEF service-provider members and other associated MEF programs.

The LSO Sonata API will be used to reduce the time it takes for a service provider to check or order connectivity from an operator. Today, that process is often performed manually, and it can take months. Sonata automates how these requests are handled, thereby significantly reducing the time frame, Bar-Lev said.

Some larger operators have already implemented machine-to-machine automation, Bar-Lev said, but there is no consensus describing how the required APIs should look. This means those operators need to “reinvent the wheel” for each new part of the process, he added.

MEF service-provider members, including AT&T, Orange and Colt Technology Services, worked with MEF and its LSO framework over a six-month period to determine the best ways to take advantage of the new APIs.

“They reached the consensus and created SDK material that is useful for service-provider IT departments throughout the world to take as starting points to see how they would adapt their back-end system to take advantage of these APIs,” Bar-Lev said.

The LSO Presto API resides within the service provider domain and enables more standardized orchestration for SDN and programmable network environments.

“It takes an abstracted, horizontal layered approach, which means when you orchestrate, it doesn’t matter which technology domain you have or which vendor you have,” Bar-Lev said. “Instead of developing multiple APIs per vendor, per technology domain — which isn’t scalable — you’re able to use these well-defined APIs.”

MEF members have already used the Presto API in implementations with OpenDaylight and Open Network Operating System, he added.

Windows Store: video trailers, improved Store listings, advanced sales, and other new capabilities

At Build 2017, the Windows Store announced the initial availability of several features. Today, I want to share with you that all accounts have access to these features:

  • More ways to promote your apps and drive user acquisition
  • More ways to manage schedules, prices and sales
  • Debug your apps more effectively by using CAB files
  • Use Dev Center through a modern and efficient dashboard experience

Important: If you have a submission in progress, publish it (or delete it) and your next submission will show these new pricing, sales and store listing options. Also, if you use the Windows Store submission API, be sure to read the info at the bottom of this post.

More ways to promote your apps and drive user acquisition

Many of you told us that video trailers are one of the best ways to attract customers. You can now upload up to 15 trailers to use in your Store listing. When using trailers, make sure to also include the 1920 x 1080 pixel image (16:9) in the promotional images section, which shows up after the video stops playing. Feedback from other developers using videos has been very positive, so try them out!

Video trailer as shown in the Store – see it in action here on a Windows 10 PC

Creating and updating Store listings used to take many steps per language and could take hours for a submission with listings in many languages. You can now update all Store listing content (description, images, keywords, etc.), by importing and exporting your listings, reducing the update time to just a few minutes.

Export and import of store listings – Submission overview page

More ways to manage prices and sales

We’ve found that when a customer makes their first purchase, they typically continue to purchase more add-ons in that initial app or game, as well as in other products in the Store. The new pricing and availability page gives you additional options to drive users to that first purchase:

Schedule when your app or game will be visible (as long as the submission happens with enough time to process—we recommend at least three days in advance). You have the option to specify when your app should become available and discoverable in the Store, as well as a date when it should no longer be available for new acquisitions.

 Schedule availability – pricing and availability page

Schedule price changes in advance. For example, change the base price a month after the app has been published.

Schedule price changes – pricing and availability page

There are many more options to configure sales, including using percentage values (such as “30% off”), viewing sales options in the currency that makes sense to you, configuring sales globally or for specific markets, offering discounts to customers that own one of your other apps (for example “50% off if you own this other game”) and the ability to target a discount to a segment of customers (such as those that have not made any Store purchases so far).

Sales drive purchases, so try them out!

Configure sales – Pricing and availability page

How sales show up in the Store

We also heard that you wanted a more efficient way to understand all prices across all markets. You can now view all possible price tiers in a single table: go to the Pricing and availability page, select view table, and you can review the full list and export it to CSV (which opens easily in Excel).

Viewing all price tiers – Pricing and availability page

Use Dev Center through a modern and efficient dashboard experience

The Dev Center dashboard has been redesigned based on your feedback to help you be more productive. It has a clean new interface, beautiful analytics, new account-level pages, integrated app picker and streamlined program switching. These are a few of the things that make the new dashboard more useful, particularly for accounts with multiple apps, games or programs.

Dev Center redesign

Debug your apps more effectively by using CAB files

We heard a lot of feedback asking for access to CAB files to help debug apps and improve the quality and performance of your apps and games. The Health report lets you pinpoint which OS and app version configurations generate the most crashes, and provides links to failure details with individual CAB files. These CAB files are only available for customers running any of the Windows Insider flights of Windows 10 (slow or fast), so not all failures will include the CAB download option.

Access to failure downloads – health analytics page

Implications of these changes if you are using the Windows Store submission API

If you use the Windows Store submission API to manage your apps and games, please be aware of the following:

  • If you manage prices using the Submission API, you’ll have to use new price tiers. To do that, manually update your app or game once, so you can view the new price tiers, accept them, and then update your Submission API code to use these new price tier values, which can be found in the price table on the pricing and availability page in Dev Center, as described above.
  • The Windows Store submission API does not support all the new Store listing capabilities. You can add the new assets using the Dev Center dashboard, and the submission API will be updated later in July to let you manage these new assets through the API. More details about the upcoming API capabilities, including trailers and game options, can be found here.
  • If you use the StoreBroker PowerShell module to simplify using the Windows Store submission API, you can keep using it to manage the same listing asset types you are managing today. However, you won’t be able to upload the new asset types using StoreBroker until the StoreBroker team publishes an update in a few more weeks, and you pick up that update.

Read this previous blog post to learn about all the recently added Store features, and try all the features that are live today. If you have any issues finding or using these features, please let us know using the feedback link in Dev Center (upper right of the dashboard).

Community Standup with Kevin Gallo

Kevin Gallo will be live on Channel 9 with Seth Juarez on July 26th, 2017 at 9:30am PST. Kevin will provide an update on the state of the Windows SDK in the Windows 10 Fall Creators Update since everyone last chatted with him at Microsoft Build 2017. As always, we’ll be answering live questions afterwards.

A few of the topics Kevin and Seth will be discussing are the Windows 10 Fall Creators Update SDK, .NET Standard 2.0, Fluent Design, Microsoft Graph with the Activity API and more.

Over time, we’ll hold more frequent community standups to provide additional transparency on what we are building and why. The community standups will include not just Kevin, but the entire Windows development team. We’ll be testing different streaming technologies and interaction models to see what works best, and we would love feedback on this as well.

Once again, we can’t wait to see you at 9:30am PST on July 26th, 2017 over at https://channel9.msdn.com.

New Lights and PropertySet Interop – XAML and Visual Layer Interop, Part Two

In the last post, we explored using XamlCompositionBrushBase and LoadedImageSurface to create custom CompositionBrushes that can be used to paint XAML elements directly in your markup. In today’s post, we’ll continue our look into the new improvements made to the XAML and Visual Layer Interop APIs available in the Windows 10 Creators Update.

In this blog series, we’ll cover some of these improvements in the Creators Update and look at the following APIs:

  • In Part 1:
    • XamlCompositionBrushBase – easily paint a XAML UIElement with a CompositionBrush
    • LoadedImageSurface – load an image easily and use with Composition APIs
  • In Part 2, today’s post:
    • XamlLights – apply lights to your XAML UI with a single line of XAML
    • PointerPositionPropertySet – create 60 FPS animations using pointer position, off the UI thread!
    • Enabling the Translation property – animate a XAML UI Element using Composition animation

If you’d like to review the previously available ElementCompositionPreview APIs, for example working with “hand-in” and “hand-out” Visuals, you can quickly catch up here.

Lighting UI with XamlLights

A powerful new feature in the Creators Update is the ability to set and use a Lighting effect directly in XAML by leveraging the abstract XamlLight class.

Creating a XamlLight starts just like a XamlCompositionBrushBase does, with OnConnected and OnDisconnected methods (see Part One here), but this time you inherit from XamlLight to create your own unique light that can be used directly in XAML. Microsoft uses this approach for the Reveal effect that comes with the Creators Update.

To see this in action, let’s build a demo that creates the animated GIF you see at the top of this post. It combines everything you learned about XamlCompositionBrushBase and LoadedImageSurface in the last post, but this example has two XamlLights: a HoverLight and an AmbientLight.

Let’s begin by creating the AmbientLight. To get started, we proceed similarly to XamlCompositionBrushBase, with OnConnected and OnDisconnected methods. However, for a XamlLight we set the CompositionLight property of the XamlLight subclass.


public class AmbLight : XamlLight
{
    protected override void OnConnected(UIElement newElement)
    {
        Compositor compositor = Window.Current.Compositor;

        // Create AmbientLight and set its properties
        AmbientLight ambientLight = compositor.CreateAmbientLight();
        ambientLight.Color = Colors.White;

        // Associate CompositionLight with XamlLight
        CompositionLight = ambientLight;

        // Add UIElement to the Light's Targets
        AmbLight.AddTargetElement(GetId(), newElement);
    }

    protected override void OnDisconnected(UIElement oldElement)
    {
        // Dispose Light when it is removed from the tree
        AmbLight.RemoveTargetElement(GetId(), oldElement);
        CompositionLight.Dispose();
    }

    protected override string GetId() => typeof(AmbLight).FullName;
}

With ambient lighting done, let’s build the SpotLight XamlLight. One of the main things we want the SpotLight to do is follow the user’s pointer. To do this, we can now use the GetPointerPositionPropertySet method of ElementCompositionPreview to get a CompositionPropertySet we can use with a Composition ExpressionAnimation (PointerPositionPropertySet is explained in more detail in the PropertySets section below).

Here is the finished XamlLight implementation that creates that animated spotlight. Read the code comments to see the main parts of the effects, particularly how the resting position and animated offset position are used to create the lighting.


public class HoverLight : XamlLight 
{
    private ExpressionAnimation _lightPositionExpression;
    private Vector3KeyFrameAnimation _offsetAnimation;

    protected override void OnConnected(UIElement targetElement)
    {
        Compositor compositor = Window.Current.Compositor;

        // Create SpotLight and set its properties
        SpotLight spotLight = compositor.CreateSpotLight();
        spotLight.InnerConeAngleInDegrees = 50f;
        spotLight.InnerConeColor = Colors.FloralWhite;
        spotLight.OuterConeAngleInDegrees = 0f;
        spotLight.ConstantAttenuation = 1f;
        spotLight.LinearAttenuation = 0.253f;
        spotLight.QuadraticAttenuation = 0.58f;

        // Associate CompositionLight with XamlLight
        this.CompositionLight = spotLight;

        // Define resting position Animation
        Vector3 restingPosition = new Vector3(200, 200, 400);
        CubicBezierEasingFunction cbEasing = compositor.CreateCubicBezierEasingFunction( new Vector2(0.3f, 0.7f), new Vector2(0.9f, 0.5f));
        _offsetAnimation = compositor.CreateVector3KeyFrameAnimation();
        _offsetAnimation.InsertKeyFrame(1, restingPosition, cbEasing);
        _offsetAnimation.Duration = TimeSpan.FromSeconds(0.5f);

        spotLight.Offset = restingPosition;

        // Define expression animation that relates light's offset to pointer position 
        CompositionPropertySet hoverPosition = ElementCompositionPreview.GetPointerPositionPropertySet(targetElement);
        _lightPositionExpression = compositor.CreateExpressionAnimation("Vector3(hover.Position.X, hover.Position.Y, height)");
        _lightPositionExpression.SetReferenceParameter("hover", hoverPosition);
        _lightPositionExpression.SetScalarParameter("height", 15.0f);

        // Configure PointerMoved / PointerExited handlers
        targetElement.PointerMoved += TargetElement_PointerMoved;
        targetElement.PointerExited += TargetElement_PointerExited;

        // Add UIElement to the Light's Targets
        HoverLight.AddTargetElement(GetId(), targetElement);
    }

    private void MoveToRestingPosition()
    {
        // Start animation on SpotLight's Offset 
        CompositionLight?.StartAnimation("Offset", _offsetAnimation);
    }

    private void TargetElement_PointerMoved(object sender, PointerRoutedEventArgs e)
    {
        if (CompositionLight == null) return;

        // touch input is still UI thread-bound as of the Creators Update
        if (e.Pointer.PointerDeviceType == Windows.Devices.Input.PointerDeviceType.Touch)
        {
            Vector2 offset = e.GetCurrentPoint((UIElement)sender).Position.ToVector2();
            (CompositionLight as SpotLight).Offset = new Vector3(offset.X, offset.Y, 15);
        }
        else
        {
            // Get the pointer's current position from the property and bind the SpotLight's X-Y Offset
            CompositionLight.StartAnimation("Offset", _lightPositionExpression);
        }
    }

    private void TargetElement_PointerExited(object sender, PointerRoutedEventArgs e)
    {
        // Move to resting state when pointer leaves targeted UIElement
        MoveToRestingPosition();
    }

    protected override void OnDisconnected(UIElement oldElement)
    {
        // Dispose Light and Composition resources when it is removed from the tree
        HoverLight.RemoveTargetElement(GetId(), oldElement);
        CompositionLight.Dispose();
        _lightPositionExpression.Dispose();
        _offsetAnimation.Dispose();
    }

    protected override string GetId() => typeof(HoverLight).FullName;
}

Now, with the HoverLight class done, we can add both the AmbLight and the HoverLight to the Grid that is painted with the ImageEffectBrush from the last post:


<Grid>
    <Grid.Background>
        <brushes:ImageEffectBrush ImageUriString="ms-appx:///Images/Background.png" />
    </Grid.Background>

    <Grid.Lights>
        <lights:HoverLight/>
        <lights:AmbLight/>
    </Grid.Lights>
</Grid>

Note: To add a XamlLight in markup, your project’s Min SDK version must be set to the Creators Update; otherwise, you can attach the lights in code-behind, as sketched below.
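Here is a minimal code-behind sketch of that alternative (the Grid name RootGrid is an assumption for illustration, not from the original post):

// Minimal sketch, assuming a Grid named "RootGrid" in the page's XAML and the
// HoverLight/AmbLight classes defined above. UIElement.Lights is available
// starting with the Creators Update (SDK 15063).
RootGrid.Lights.Add(new HoverLight());
RootGrid.Lights.Add(new AmbLight());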

For more information, go here to read more about using XamlLight and here to see the Lighting documentation.

Using CompositionPropertySets

When you want to use the values of the ScrollViewer’s Offset or the Pointer’s X and Y position (e.g. mouse cursor) to do things like animate effects, you can use ElementCompositionPreview to retrieve their PropertySets. This allows you to create amazingly smooth, 60 FPS animations that are not tied to the UI thread. These methods let you get the values from user interaction for things like animations and lighting.

Using ScrollViewerManipulationPropertySet

This PropertySet is useful for animating things like Parallax, Translation and Opacity.


// Gets the manipulation <ScrollViewer x:Name="MyScrollViewer"/>
CompositionPropertySet scrollViewerManipulationPropertySet = 
    ElementCompositionPreview.GetScrollViewerManipulationPropertySet(MyScrollViewer);

To see an example, go to the Smooth Interaction and Motion blog post in this series. There is a section devoted to using the ScrollViewerManipulationPropertySet to drive the animation.
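As a quick illustration, here is a minimal parallax sketch that builds on the property set retrieved above (the element name BackgroundImage and the half-speed factor are assumptions, not part of the original post):

// Minimal sketch: scroll a background element at half the ScrollViewer's speed.
Visual backgroundVisual = ElementCompositionPreview.GetElementVisual(BackgroundImage);
Compositor compositor = backgroundVisual.Compositor;

// The manipulation property set exposes a "Translation" Vector3 that tracks scrolling.
ExpressionAnimation parallax = compositor.CreateExpressionAnimation("scroller.Translation.Y * 0.5");
parallax.SetReferenceParameter("scroller", scrollViewerManipulationPropertySet);

// Drive the background's vertical offset from the ScrollViewer's manipulation, off the UI thread.
backgroundVisual.StartAnimation("Offset.Y", parallax);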

Using PointerPositionPropertySet (new!)

New in the Creators Update is the PointerPositionPropertySet. This PropertySet is useful for creating lighting and tilt animations. Like the ScrollViewerManipulationPropertySet, the PointerPositionPropertySet enables fast, smooth and UI thread-independent animations.

A great example of this is the animation mechanism behind Fluent Design’s RevealBrush, where you see lighting effects on the edges of UIElements. This effect is created by a CompositionLight, which has an Offset property animated by an ExpressionAnimation using the values obtained from the PointerPositionPropertySet.


// Useful for creating an ExpressionAnimation
CompositionPropertySet pointerPositionPropertySet = ElementCompositionPreview.GetPointerPositionPropertySet(targetElement);
ExpressionAnimation expressionAnimation = compositor.CreateExpressionAnimation("Vector3(param.Position.X, param.Position.Y, height)");
expressionAnimation.SetReferenceParameter("param", pointerPositionPropertySet);

To get a better understanding of how you can use this to power animations in your app, look back at the HoverLight demo earlier in this post, which uses the PointerPositionPropertySet to animate a SpotLight.

Enabling Translation Property – Animating a XAML Element’s Offset using Composition Animations

As discussed in our previous blog post, property sharing between the Framework Layer and the Visual Layer used to be tricky prior to the Creators Update. The following Visual properties are shared between UIElements and their backing Visuals:

  • Offset
  • Scale
  • Opacity
  • TransformMatrix
  • InsetClip
  • CompositeMode

Prior to the Creators Update, Scale and Offset were especially tricky because, as mentioned before, a UIElement isn’t aware of changes to the property values on the hand-out Visual, even though the hand-out Visual is aware of changes to the UIElement. Consequently, if you change the value of the hand-out Visual’s Offset or Size property and the UIElement’s position changes due to a page resize, the UIElement’s previous position values will stomp all over your hand-out Visual’s values.

Now with the Creators Update, this has become much easier to deal with as you can prevent Scale and Offset stomping by enabling the new Translation property on your element, by way of the ElementCompositionPreview object.


ElementCompositionPreview.SetIsTranslationEnabled(Rectangle1, true);

//Now initialize the value of Translation in the PropertySet to zero for first use to avoid timing issues. This ensures that the property is ready for use immediately.

var rect1VisualPropertySet = ElementCompositionPreview.GetElementVisual(Rectangle1).Properties;
rect1VisualPropertySet.InsertVector3("Translation", Vector3.Zero);

Then, animate the visual’s Translation property where previously you would have animated its Offset property.


// Old way, subject to property stomping:
visual.StartAnimation("Offset.Y", animation);
// New way, available in the Creators Update
visual.StartAnimation("Translation.Y", animation);

By animating a different property from the one affected during layout passes, you avoid any unwanted offset stomping coming from the XAML layer.

Wrapping up

In the past couple of posts, we explored some of the new features of XAML and Composition Interop and how using Composition features in your XAML markup is easier than ever. From painting your UIElements with CompositionBrushes and applying lighting, to smooth off-UIThread animations, the power of the Composition API is more accessible than ever.

In the next post, we’ll dive deeper into how you can chain Composition effects to create amazing materials and help drive the evolution of Fluent Design.

Working with Brushes and Content – XAML and Visual Layer Interop, Part One

The Composition APIs empower Universal Windows Platform (UWP) developers to do beautiful and powerful things when they access the Visual Layer. In the Windows 10 Creators Update, we made working with the Visual Layer much easier with new, powerful APIs.

In this blog series, we’ll cover some of these improvements in the Creators Update and take a look at the following APIs:

  • In Part 1, today’s post:
    • XamlCompositionBrushBase – easily paint a XAML UIElement with a CompositionBrush
    • LoadedImageSurface – load an image easily and use with Composition APIs
  • In Part 2, we’ll look at:
    • XamlLights – apply lights to your XAML UI with a single line of XAML
    • PointerPositionPropertySet – create 60 FPS animations using pointer position, off the UI thread!
    • Enabling the Translation property – animate a XAML UI Element using Composition animation

If you’d like to review the previously available ElementCompositionPreview APIs, for example working with “hand-in” and “hand-out” Visuals, you can quickly catch up here.

Using XamlCompositionBrushBase

One of the benefits of the new Composition and XAML interop APIs is the ability to use a CompositionBrush to paint a XAML UIElement directly, rather than being limited to XAML brushes only. For example, you can create a CompositionEffectBrush that applies a tinted blur to the content beneath it and use that brush to paint a XAML rectangle that can be included in your XAML markup.

This is accomplished by using the new abstract class XamlCompositionBrushBase, available in the Creators Update. To use it, you subclass XamlCompositionBrushBase to create your own XAML Brush that can be used in your markup. As seen in the example code below, XamlCompositionBrushBase exposes a CompositionBrush property that you set with your effect (or any CompositionBrush), and it will be applied to the XAML element.

This effectively replaces the need to manually create SpriteVisuals with SetElementChildVisual for most effect scenarios. In addition to needing less code to create and add an effect to the UI, using a Brush means you get the following added benefits for free:

  • Theming and Styling
  • Binding
  • Resource and Lifetime management
  • Layout aware
  • PointerEvents
  • HitTesting and other XAML-based advantages

Microsoft, as part of the Fluent Design System, has included a few Brushes in the Creators Update that leverage the features of XamlCompositionBrushBase.

Building a Custom Composition Brush

Let’s create a XamlCompositionBrush of our own to see how simple this can be.

To start, let’s create a very simple Brush that applies an InvertEffect to content under it. First, we’ll need to make a public sealed class that inherits from XamlCompositionBrushBase and override two methods:

  • OnConnected
  • OnDisconnected

Let’s dive into the code. First, create your Brush class, which inherits from XamlCompositionBrushBase:


public sealed class InvertBrush : XamlCompositionBrushBase
{
    protected override void OnConnected()
    {
        if (CompositionBrush == null)
        {
            // 1 - Get the BackdropBrush, this gets what is behind the UI element
            var backdrop = Window.Current.Compositor.CreateBackdropBrush();

            // CompositionCapabilities: Are effects supported? If not, return.
            if (!CompositionCapabilities.GetForCurrentView().AreEffectsSupported())
            { 
               return;
            }
                
            // 2 - Create your Effect
            // New-up a Win2D InvertEffect and use the BackdropBrush as its Source
            // Note – To use InvertEffect, you'll need to add the Win2D NuGet package to your project (search NuGet for "Win2D.uwp")
            var invertEffect = new InvertEffect
            {
                Source = new CompositionEffectSourceParameter("backdrop")
            };

            // 3 - Set up the EffectFactory
            var effectFactory = Window.Current.Compositor.CreateEffectFactory(invertEffect);

            // 4 - Finally, instantiate the CompositionEffectBrush
            var effectBrush = effectFactory.CreateBrush();

            // and set the backdrop as the original source 
            effectBrush.SetSourceParameter("backdrop", backdrop);

            // 5 - Finally, assign your CompositionEffectBrush to the XCBB's CompositionBrush property
            CompositionBrush = effectBrush;
         }
    }

    protected override void OnDisconnected()
    {
        // Clean up
        CompositionBrush?.Dispose();
        CompositionBrush = null;
    }
}

There are a few things to call out in the code above.

  • In the OnConnected method, we get a CompositionBackdropBrush. This allows you to easily get the pixels behind the UIElement.
  • We use fallback protection. If the user’s device doesn’t have support for the effect(s), then just return.
  • Next, we create the InvertEffect and use the backdropBrush for the Effect’s Source.
  • Then, we pass the finished InvertEffect to the CompositionEffectFactory.
  • Finally, we get an EffectBrush from the factory and set the XamlCompositionBrushBase.CompositionBrush property with our newly created effectBrush.

Now you can use it in your XAML. For example, let’s apply it to a Grid on top of another Grid with a background image:


<Grid>
    <Grid.Background>
        <ImageBrush ImageSource="ms-appx:///Images/Background.png"/>
    </Grid.Background>
    <Grid Width="300" Height="300" 
               HorizontalAlignment="Center"
               VerticalAlignment="Center">
        <Grid.Background>
            <brushes:InvertBrush />
        </Grid.Background>
    </Grid>
</Grid>

Now that you know the basics of creating a brush, let’s build an animated effect brush next.

Creating a Brush with Animating Effects

Now that you see how simple it is to create a CompositionBrush, let’s create a brush that applies a TemperatureAndTint effect to an image and animates the Temperature value:

We start the same way we did with the simple InvertBrush, but this time we’ll add a DependencyProperty, ImageUriString, so that we can load an image using LoadedImageSurface in the OnConnected method.


public sealed class ImageEffectBrush : XamlCompositionBrushBase
    {
        private LoadedImageSurface _surface;
        private CompositionSurfaceBrush _surfaceBrush;
 
        public static readonly DependencyProperty ImageUriStringProperty = DependencyProperty.Register(
            "ImageUri",
            typeof(string),
            typeof(ImageEffectBrush),
            new PropertyMetadata(string.Empty, OnImageUriStringChanged)
        );
 
        public string ImageUriString
        {
            get => (String)GetValue(ImageUriStringProperty);
            set => SetValue(ImageUriStringProperty, value);
        }
 
        private static void OnImageUriStringChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            var brush = (ImageEffectBrush)d;
            // Unbox and update surface if CompositionBrush exists     
            if (brush._surfaceBrush != null)
            {
                var newSurface = LoadedImageSurface.StartLoadFromUri(new Uri((String)e.NewValue));
                brush._surface = newSurface;
                brush._surfaceBrush.Surface = newSurface;
            }
        }
 
        protected override void OnConnected()
        {
            // return if Uri String is null or empty
            if (string.IsNullOrEmpty(ImageUriString))
                return;
 
            // Get a reference to the Compositor
            Compositor compositor = Window.Current.Compositor;
 
            // Use LoadedImageSurface API to get ICompositionSurface from image uri provided
            _surface = LoadedImageSurface.StartLoadFromUri(new Uri(ImageUriString));
 
            // Load Surface onto SurfaceBrush
            _surfaceBrush = compositor.CreateSurfaceBrush(_surface);
            _surfaceBrush.Stretch = CompositionStretch.UniformToFill;
 
            // CompositionCapabilities: Are Tint+Temperature and Saturation supported?
            bool usingFallback = !CompositionCapabilities.GetForCurrentView().AreEffectsSupported();
            if (usingFallback)
            {
                // If Effects are not supported, Fallback to image without effects
                CompositionBrush = _surfaceBrush;
                return;
            }
 
            // Define Effect graph (add the Win2D.uwp NuGet package to get this effect)
            IGraphicsEffect graphicsEffect = new SaturationEffect
            {
                Name = "Saturation",
                Saturation = 0.3f,
                Source = new TemperatureAndTintEffect
                {
                    Name = "TempAndTint",
                    Temperature = 0,
                    Source = new CompositionEffectSourceParameter("Surface"),
                }
            };
 
            // Create EffectFactory and EffectBrush 
            CompositionEffectFactory effectFactory = compositor.CreateEffectFactory(graphicsEffect, new[] { "TempAndTint.Temperature" });
            CompositionEffectBrush effectBrush = effectFactory.CreateBrush();
            effectBrush.SetSourceParameter("Surface", _surfaceBrush);
 
            // Set EffectBrush to paint Xaml UIElement
            CompositionBrush = effectBrush;
 
            // Trivial looping animation to demonstrate animated effect
            ScalarKeyFrameAnimation tempAnim = compositor.CreateScalarKeyFrameAnimation();
            tempAnim.InsertKeyFrame(0, 0);
            tempAnim.InsertKeyFrame(0.5f, 1f);
            tempAnim.InsertKeyFrame(1, 0);
            tempAnim.Duration = TimeSpan.FromSeconds(5);
            tempAnim.IterationBehavior = AnimationIterationBehavior.Count;
            tempAnim.IterationCount = 10;
            effectBrush.Properties.StartAnimation("TempAndTint.Temperature", tempAnim);
        }
 
        protected override void OnDisconnected()
        {
            // Dispose Surface and CompositionBrushes if XamlCompBrushBase is removed from tree
            _surface?.Dispose();
            _surface = null;
 
            CompositionBrush?.Dispose();
            CompositionBrush = null;
        }
    }

There are some new things here to call out that are different from the InvertBrush:

  • We use the new LoadedImageSurface API to easily load an image in the OnConnected method, but also when the ImageUriString value changes. Prior to Creators Update, this required a hand-in Visual (a SpriteVisual, painted with an EffectBrush, which was handed back into the XAML Visual Tree). See the LoadedImageSurface section later in this article for more details.
  • Notice that we gave the effects a Name value. In particular, TemperatureAndTintEffect uses the name “TempAndTint.” This is required to animate properties, as the name is used to reference the effect in the AnimatableProperties array passed to the effect factory. Otherwise, you’ll encounter a “Malformed animated property name” error.
  • After we assign the CompositionBrush property, we create a simple looping animation that oscillates the value of TempAndTint’s Temperature from 0 to 1 and back every 5 seconds.

Let’s take a look at an instance of this Brush in markup:


<Grid>
            <Grid.Background>
                <brushes:ImageEffectBrush ImageUriString="ms-appx:///Images/Background.png"/>
            </Grid.Background>
</Grid>

For more information on using XamlCompositionBrushBase, see here. Now, let’s take a closer look at how easy it is to bring images into the Visual Layer using LoadedImageSurface.

Loading images with LoadedImageSurface

With the new LoadedImageSurface class, it’s never been easier to load an image and work with it in the Visual Layer. The class has the same codec support that Windows 10 provides via the Windows Imaging Component (see the full list here), so it supports the following image file types:

  • Joint Photographic Experts Group (JPEG)
  • Portable Network Graphics (PNG)
  • Bitmap (BMP)
  • Graphics Interchange Format (GIF)
  • Tagged Image File Format (TIFF)
  • JPEG XR
  • Icons (ICO)

NOTE: When using an animated GIF, only the first frame will be used for the Visual, as animation is not supported in this scenario.

To load an image, you can use one of four factory methods:

  • LoadedImageSurface.StartLoadFromUri(Uri)
  • LoadedImageSurface.StartLoadFromUri(Uri, Size)
  • LoadedImageSurface.StartLoadFromStream(IRandomAccessStream)
  • LoadedImageSurface.StartLoadFromStream(IRandomAccessStream, Size)

As you can see, there are two ways to load an image: with a Uri or a Stream. Additionally, each has an overload that lets you set the size of the image (if you don’t pass in a Size, it will decode to the natural size).


CompositionSurfaceBrush imageBrush = compositor.CreateSurfaceBrush();
LoadedImageSurface loadedSurface = LoadedImageSurface.StartLoadFromUri(new Uri("ms-appx:///Images/Photo.jpg"), new Size(200.0, 200.0));
imageBrush.Surface = loadedSurface;

This is very helpful when you need to load an image that will be used for your CompositionBrush (e.g. a CompositionEffectBrush) or a SceneLightingEffect (e.g. a NormalMap for textures), as you no longer need to manually create a hand-in Visual (a SpriteVisual painted with an EffectBrush). In an upcoming post in this series, we will explore this further, using NormalMap images to create advanced lighting for unique and compelling materials.

Using LoadedImageSurface with a Composition Island

LoadedImageSurface is also useful when loading an image onto a SpriteVisual inserted in XAML UI using ElementCompositionPreview. For this scenario, you can use the Loaded event to adjust the visual’s properties after the image has finished loading.

Here is an example of using LoadedImageSurface for a CompositionSurfaceBrush, then updating the SpriteVisual’s size with the image’s DecodedSize when the image is loaded:


private SpriteVisual spriteVisual;
private void LoadImage(Uri imageUri)
{
    CompositionSurfaceBrush surfaceBrush = Window.Current.Compositor.CreateSurfaceBrush();

    // You can load an image directly and set a SurfaceBrush's Surface property with it
    var loadedImageSurface = LoadedImageSurface.StartLoadFromUri(imageUri);
    loadedImageSurface.LoadCompleted += Load_Completed;
    surfaceBrush.Surface = loadedImageSurface;

    // We'll use a SpriteVisual for the hand-in visual
    spriteVisual = Window.Current.Compositor.CreateSpriteVisual();
    spriteVisual.Brush = surfaceBrush;

    ElementCompositionPreview.SetElementChildVisual(MyCanvas, spriteVisual);
}

private void Load_Completed(LoadedImageSurface sender, LoadedImageSourceLoadCompletedEventArgs args)
{
    if (args.Status == LoadedImageSourceLoadStatus.Success)
    {
        Size decodedSize = sender.DecodedSize;
        spriteVisual.Size = new Vector2((float)decodedSize.Width, (float)decodedSize.Height);
    }
}

There are some things you should be aware of before getting started with LoadedImageSurface. This class makes working with images a lot easier; however, you should understand the lifecycle and when images get decoded and sized. We recommend that you take a couple of minutes to read the documentation before getting started.

Wrapping up

Using Composition features in your XAML markup is easier than ever. From painting your UIElements with CompositionBrushes and applying lighting, to smooth off-UIThread animations, the power of the Composition API is more accessible than ever.

In the next post, we’ll explore more new APIs like the new Translation property, using XamlLights in your XAML markup and how to create a custom light using the new PointerPositionPropertySet.

Announcing UWP Community Toolkit 1.5

Today marks the sixth release of the UWP Community Toolkit – all packages are updated to version 1.5. Thanks to the UWP developer community, the UWP Community Toolkit has seen great improvements to the stability of existing controls and services. The community partnership has led to several new additions in this release.

To highlight a few of the new additions, the UWP Community Toolkit now includes:

  1. Menu: A classic control used by traditional desktop applications, adapted for the Universal Windows Platform. As requested by the community on UserVoice, the Menu allows the developer to provide a hierarchical list of menus and submenus that support any input modality and can adapt to the screen size to provide a natural and fluid interaction.
  2. OrbitView: A new ItemsControl that arranges elements around a center element and provides flexibility for size and distance for each element, as well as the ability to show orbits or anchors for each item.
  3. RadialProgressBar: A XAML Control that displays a value within a range using a circular sector that grows clockwise until it becomes a full ring. A fantastic variation of the ProgressBar.
  4. RoundImageEx: Similar to the ImageEx control, the RoundImageEx control downloads and locally caches images asynchronously while showing loading indicators. In addition, the RoundImageEx allows images to be clipped as circles.
  5. ExpressionBuilder: A type-safe way to build powerful composition ExpressionAnimation.
  6. BluetoothLEHelper: The BluetoothLEHelper class provides functionality to easily enumerate, connect to and interact with Bluetooth LE Peripherals.

For a complete overview of what’s new in version 1.5, please read our release notes on GitHub.

The release of the Windows 10 Creators Update (build 10.0.15063) enabled new APIs that make it possible to improve a number of the controls used in apps targeting the latest update. As a result, several packages now target the Windows 10 Creators Update and can take advantage of these new APIs. We encourage all developers using the toolkit to update their apps to the latest version of the UWP Community Toolkit.

As a reminder, the UWP Community Toolkit can be used in any UWP app across PC, Xbox One, mobile, HoloLens and Surface Hub devices. You can get started by following this tutorial, or preview the latest features by installing the UWP Community Toolkit Sample App from the Windows Store.

If you would like to contribute, please join us on GitHub!

UWP App Diagnostics

At Build this year, we gave a sneak preview of a set of new APIs designed to provide diagnostic information about running apps. You can see the videos here and here – but note that these were based on a pre-release implementation. So, while the Build videos are still correct on broad functionality, the final API names are almost all slightly different. Plus, we added a couple of extra features after Build.

The final versions for the upcoming release are available in the Insider builds from Build 16226, along with the corresponding SDK.

At a high level, these APIs allow an app to:

  • Enumerate a list of running apps, including UWP apps, Win32 apps, system services and so on.
  • For each app, get process-specific metrics on:
    • Memory usage (private commit and working set).
    • CPU usage.
    • Disk reads and writes.
  • For each UWP app, get additional metrics on:
    • Memory usage (including shared commit) equivalent to the Windows.System.MemoryManager report previously available to an app for its own usage.
    • State info: running, suspending, suspended, not running.
    • Energy quota info: under or over.
    • Enumerate a list of any background tasks that are active for the app, including name, trigger type and entry point.
    • Enumerate all the processes for the app (using an enhancement to the existing Windows.System.Diagnostics.ProcessDiagnosticInfo class that was previously restricted to an app for its own usage).

The API has a simple hierarchical structure:

  • The AppDiagnosticInfo type represents a single app. Callers would normally request either a single AppDiagnosticInfo for the app you’re interested in or a list of AppDiagnosticInfos if you’re interested in multiple apps.

  • Once you’ve gotten hold of an AppDiagnosticInfo for an app you’re interested in, you’d call GetResourceGroups to get a list of AppResourceGroupInfo objects. Each AppResourceGroupInfo corresponds to a resource group. An app can define resource groups in its manifest as a way to organize its components (foreground app, background tasks) into groups for resource management purposes. If you don’t define any explicit resource groups, the system will provide at least one (for the foreground app) plus potentially more (if you have out-of-proc background tasks, for example).

  • From there, you’d call any of the AppResourceGroupInfo methods to get snapshot reports of memory usage, execution and energy quota state, and the app’s running background tasks (if any) via the AppResourceGroupMemoryReport, AppResourceGroupStateReport and AppResourceGroupBackgroundTaskReport classes.

  • And finally, each group exposes a list of ProcessDiagnosticInfo objects.

As you can see from the class diagrams, the AppDiagnosticInfo and ProcessDiagnosticInfo each have a link to the other. This means you can get all the rich process-specific info for any running process and get the UWP-specific info for any process related to a UWP app (including Desktop Bridge apps).

These APIs are intended to support app developers who either need more diagnostic support during their own app development and testing, or who want to build a general-purpose diagnostic app and publish it in the Windows Store. Exposing information about other apps raises potential privacy concerns, so if your app uses these APIs, you’ll need to declare the appDiagnostics capability in your manifest, along with the corresponding namespace declaration:


<Package
  xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities"
  IgnorableNamespaces="uap mp rescap">
  ...

  <Capabilities>
    <rescap:Capability Name="appDiagnostics" />
  </Capabilities>
</Package>

This is a restricted capability: If you submit an app with this capability to the Windows Store, this will trigger closer scrutiny. The app must be in the Developer Tools category, and we will examine your app to make sure that it is indeed a developer tool before approving the submission.

At run time, the capability also triggers a user-consent prompt the first time any of the diagnostic APIs are called:

The user is always in control: If permission is denied, then the APIs will only return information about the current app. The prompt is only shown on first use, but the user can change his or her mind any time via the privacy pages in Settings. All apps that use the APIs will be listed here, and the user can toggle permission either globally or on a per-app basis:

Given the richness of the APIs, it’s not too much of a stretch to envisage creating a UWP version of Task Manager. There are a few features that we can’t implement just yet (terminating apps and controlling system services, for example), but certainly most of the data reporting is perfectly possible with the new APIs:

The first thing to do is to request permission to access diagnostics for other apps using AppDiagnosticInfo.RequestAccessAsync. The result could be Denied, Limited (which means you can only get information for the current app package) or Allowed.


DiagnosticAccessStatus diagnosticAccessStatus = 
    await AppDiagnosticInfo.RequestAccessAsync();
switch (diagnosticAccessStatus)
{
    case DiagnosticAccessStatus.Allowed:
        Debug.WriteLine("We can get diagnostics for all apps.");
        break;
    case DiagnosticAccessStatus.Limited:
        Debug.WriteLine("We can only get diagnostics for this app package.");
        break;
}

Then, to emulate Task Manager, you’d start with a list of the ProcessDiagnosticInfo objects for all running processes.


IReadOnlyList<ProcessDiagnosticInfo> processes = ProcessDiagnosticInfo.GetForProcesses();

For each running process, you can extract the top-level process-specific information such as the ExecutableFileName and the ProcessId. You can also get the more detailed process information from each of the three reports for CpuUsage, MemoryUsage and DiskUsage.


if (processes != null)
{
    foreach (ProcessDiagnosticInfo process in processes)
    {
        string exeName = process.ExecutableFileName;
        string pid = process.ProcessId.ToString();

        ProcessCpuUsageReport cpuReport = process.CpuUsage.GetReport();
        TimeSpan userCpu = cpuReport.UserTime;
        TimeSpan kernelCpu = cpuReport.KernelTime;

        ProcessMemoryUsageReport memReport = process.MemoryUsage.GetReport();
        ulong npp = memReport.NonPagedPoolSizeInBytes;
        ulong pp = memReport.PagedPoolSizeInBytes;
        ulong peakNpp = memReport.PeakNonPagedPoolSizeInBytes;
        //...etc

        ProcessDiskUsageReport diskReport = process.DiskUsage.GetReport();
        long bytesRead = diskReport.BytesReadCount;
        long bytesWritten = diskReport.BytesWrittenCount;
        //...etc
    }
}

For any process associated with a UWP app, the IsPackaged property is true. So, for each of these, you can get from the ProcessDiagnosticInfo to the AppDiagnosticInfo. It might seem strange that we can get AppDiagnosticInfos (plural) from a process – but this is to allow for the possibility that a single process is associated with more than one app. That’s an extremely uncommon scenario, but it is possible in the case of VoIP apps where two or more apps in the same package can share a component running in a separate process at run time. In almost all cases, though, there will only be one AppDiagnosticInfo per process.


if (process.IsPackaged)
{
    IList<AppDiagnosticInfo> diagnosticInfos = process.GetAppDiagnosticInfos();
    if (diagnosticInfos != null && diagnosticInfos.Count > 0)
    {
        AppDiagnosticInfo diagnosticInfo = diagnosticInfos.FirstOrDefault();
        if (diagnosticInfo != null)
        {
            IList<AppResourceGroupInfo> groups = diagnosticInfo.GetResourceGroups();
            if (groups != null && groups.Count > 0)
            {
                // ... continued in the next snippet, which drills into the first group

From the AppDiagnosticInfo, you can walk down the hierarchy and get a collection of AppResourceGroupInfos. Then, for each AppResourceGroupInfo, you can get the UWP-specific state and memory information:


AppResourceGroupInfo group = groups.FirstOrDefault();
if (group != null)
{
    string name = diagnosticInfo.AppInfo.DisplayInfo.DisplayName;
    string description = diagnosticInfo.AppInfo.DisplayInfo.Description;
    BitmapImage bitmapImage = await GetLogoAsync(diagnosticInfo);

    AppResourceGroupStateReport stateReport= group.GetStateReport();
    if (stateReport != null)
    {
        string executionStatus = stateReport.ExecutionState.ToString();
        string energyStatus = stateReport.EnergyQuotaState.ToString();
    }

    AppResourceGroupMemoryReport memoryReport = group.GetMemoryReport();
    if (memoryReport != null)
    {
        AppMemoryUsageLevel level = memoryReport.CommitUsageLevel;
        ulong limit = memoryReport.CommitUsageLimit;
        ulong totalCommit = memoryReport.TotalCommitUsage;
        ulong privateCommit = memoryReport.PrivateCommitUsage;
        ulong sharedCommit = totalCommit - privateCommit;
    }
}

Note: To get the packaged logo for the app, there’s a little extra work. You call GetLogo from the AppDisplayInfo to return the data as a stream; if there are multiple logos available, this will return the largest one that is within the specified size.


private async Task<BitmapImage> GetLogoAsync(AppDiagnosticInfo app)
{
    RandomAccessStreamReference stream = 
        app.AppInfo.DisplayInfo.GetLogo(new Size(64, 64));
    IRandomAccessStreamWithContentType content = await stream.OpenReadAsync();
    BitmapImage bitmapImage = new BitmapImage();
    await bitmapImage.SetSourceAsync(content);
    return bitmapImage;
}

Once you’ve collected all the various detailed metrics you’re interested in, it’s a simple matter to populate your viewmodel for data-binding purposes, to perform data analytics or to do whatever other processing you might want.

In a later post, we’ll look at how you can integrate the diagnostic APIs with existing developer tools such as Visual Studio and Appium.

Smooth as Butter Animations in the Visual Layer with the Windows 10 Creators Update

The Windows 10 Creators Update marks the third major release of the Windows UI platform APIs. With each release, we try to simplify features introduced in prior releases, which encourages Universal Windows Platform (UWP) developers to standardize on them. The new hide and show implicit animations are an example of this.

At the same time, too much standardization can potentially lead to conformity, so with each new release more powerful visual features like the new custom animations are also added, which allow developers who are willing and able to dive into them to customize their user interfaces and stand out from the crowd. This inherent tension between ease of use and the power to customize rewards developers for their efforts while also making sure that no one gets left behind.

Hide and show animations for page transitions

Page transitions are often accompanied by state transitions, as visual elements are added to the visual tree of the new page. In fact, a lot of interactivity in UWP simply involves deciding which content to show and which content to hide as the state of the app changes. More often than not, this is tied to changing the value of the Visibility property of the elements on the screen.

In the Creators Update, two new implicit animation techniques have been added to help you make these transitions more fluid: ElementCompositionPreview.SetImplicitShowAnimation and ElementCompositionPreview.SetImplicitHideAnimation. Whenever a UIElement is loaded or when that element’s Visibility property is set to Visible, the implicit animation associated with it using SetImplicitShowAnimation will play. Similarly, whenever the user navigates away from a page or when a UIElement is hidden, an animation associated with it using the SetImplicitHideAnimation method will be invoked. These two mechanisms make it easier for you to include motion as an inherent aspect of all your visual elements, while providing a seamless experience for your users.
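For instance, here is a minimal sketch (the element name MyPanel is illustrative, not from the original post) that fades an element in whenever it is shown and out whenever it is hidden:

// Minimal sketch: fade MyPanel in when it loads or its Visibility becomes Visible,
// and fade it out when it is hidden or the page is navigated away from.
Compositor compositor = Window.Current.Compositor;

ScalarKeyFrameAnimation fadeIn = compositor.CreateScalarKeyFrameAnimation();
fadeIn.Target = "Opacity";
fadeIn.InsertKeyFrame(0f, 0f);
fadeIn.InsertKeyFrame(1f, 1f);
fadeIn.Duration = TimeSpan.FromMilliseconds(300);

ScalarKeyFrameAnimation fadeOut = compositor.CreateScalarKeyFrameAnimation();
fadeOut.Target = "Opacity";
fadeOut.InsertKeyFrame(1f, 0f);
fadeOut.Duration = TimeSpan.FromMilliseconds(300);

ElementCompositionPreview.SetImplicitShowAnimation(MyPanel, fadeIn);
ElementCompositionPreview.SetImplicitHideAnimation(MyPanel, fadeOut);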

Connected animations

Implicit animations are great for animating controls inside a page. For navigation transitions between pages, however, the Visual Layer provides a different mechanism known as connected animations to help you make your UI even sweeter. Connected animations help the user stay oriented when she is performing common tasks such as context switching from a list of items to a details page.

The Windows UI platform APIs provide a class named the ConnectedAnimationService to coordinate animations between the source page and the destination page during navigation. You access the service by calling the static GetForCurrentView method. Then in the source page, you invoke PrepareToAnimate, passing in a unique key and the image that should be used for the transition animation.


ConnectedAnimationService.GetForCurrentView().PrepareToAnimate("MyUniqueId", image);

In the destination page, you retrieve the image from your ConnectedAnimationService service and invoke TryStart on the ConnectedAnimation while passing in the destination UIElement.


var animation = ConnectedAnimationService.GetForCurrentView().GetAnimation("MyUniqueId");
if (animation != null)
{
    animation.TryStart(DestinationImage);
}

In the Anniversary Update you did not have much control over this animation technique. Everyone got pretty much the same standard one. With the Creators Update, on the other hand, you have lots of new superpowers to personalize your transitions with:

  • Coordinated animations
  • Custom animations
  • Better image animations

Just to reiterate the point made in the introduction, the goal in designing the Windows UI platform APIs is to provide an awesome experience out of the box so you can copy the standard samples and get beautiful, fast and visually appealing visuals. At the same time, this shouldn’t ever take away from your ability to personalize the user experience to create something truly unique and wonderful with powerful new tools, like coordinated animations and custom animations.

Coordinated animations

A coordinated animation is a type of animation that appears alongside your connected animation and which works in coordination with your connected animation target. A coordinated animation gives extra visual flair to your page transition.

In the coordinated animation sample above, caption text that is not present in the source page is added to the destination page. The caption text is animated in tandem with the connected animation. We are doing two things here (in designer terms): providing context between the source and the destination using our connected animation while also adding visual interest with a coordinated animation at the destination. In user experience terms, though, all we’re doing is making the app’s transition animations look really cool.

Coordinated animations are fortunately also easy to implement. The TryStart method of the ConnectedAnimation class provides an overload that allows you to pass in an array of visual elements you want to animate in a coordinated fashion. Let’s say that your caption text is in a visual element that you’ve named “DescriptionRoot.” You can add this as a coordinated animation by tweaking the previous code like so:


var animation = ConnectedAnimationService.GetForCurrentView().GetAnimation("MyUniqueId");
if (animation != null)
{
    animation.TryStart(DestinationImage, new UIElement[] { DescriptionRoot });
}

That’s a lot of power packed into a little argument.

Custom animations

By default, the connected animations in the navigation sample move in a straight line from the origin position in the source page to the target position in the destination page. If you select a box in the far-left column, it will move more or less straight up, while if you select a box in the top row, it will more or less move directly left to get to that target position. But what if you could put some English on this?

You can with custom animations, introduced in the Creators Update. The custom animations feature lets you modulate your transitions in four ways:

  • Crossfade – Lets you customize how elements crossfade as the source element reaches the destination
  • OffsetX – Lets you customize the X channel of the Offset
  • OffsetY – Lets you customize the Y channel of the Offset
  • Scale – Lets you customize the scale of the element as it animates

In order to customize a particular part of a connected animation, you will need to create a keyframe animation and add it to your page transition using the SetAnimationComponent call like so:


var animation = ConnectedAnimationService.GetForCurrentView().GetAnimation("MyUniqueId");

var customXAnimation = Window.Current.Compositor.CreateScalarKeyFrameAnimation();
customXAnimation.Duration = ConnectedAnimationService.GetForCurrentView().DefaultDuration;
customXAnimation.InsertExpressionKeyFrame(0.0f, "StartingValue");
customXAnimation.InsertExpressionKeyFrame(0.5f, "FinalValue + 25");
customXAnimation.InsertExpressionKeyFrame(1.0f, "FinalValue");

animation.SetAnimationComponent(ConnectedAnimationComponent.OffsetX, customXAnimation);

Note that you use expressions to get the starting and ending values of the connected animation.

Awesome image animations

The Creators Update also introduces improved image interpolation for connected animations in which the image size and even the aspect ratio change between the source and the destination—for instance, transitioning from a square image to a rectangular one.

This interpolation happens automagically so you have less to worry about.

Implicit animation support for property sets and shadows

Finally, the Creators Update also extends animation capabilities by allowing you to apply implicit animations to property sets and shadows.

This change provides developers with even more creative flexibility and the ability to modify shadows in interesting new ways, as shown in the code sample below.


// compositor is the page's Compositor (for example, Window.Current.Compositor).
// Create the collections that will hold the implicit animations.
var implicitAnimationShadow = compositor.CreateImplicitAnimationCollection();
var implicitAnimationVisual = compositor.CreateImplicitAnimationCollection();

// Animate the shadow's blur radius to its new value over one second.
// (shadowOpacityAnimation, shadowScaleAnimation and translationAnimation are
// keyframe animations created the same way.)
var shadowBlurAnimation = compositor.CreateScalarKeyFrameAnimation();
shadowBlurAnimation.InsertExpressionKeyFrame(1.0f, "this.FinalValue");
shadowBlurAnimation.Duration = TimeSpan.FromSeconds(1);
shadowBlurAnimation.Target = "BlurRadius";

// Associating animations with triggers: each animation plays implicitly
// whenever the property named by its key changes.
implicitAnimationShadow["BlurRadius"] = shadowBlurAnimation;
implicitAnimationShadow["Opacity"] = shadowOpacityAnimation;
implicitAnimationShadow["Scale"] = shadowScaleAnimation;

implicitAnimationVisual["Translation"] = translationAnimation;

// Applying implicit animations to objects: a property set and a drop shadow.
content.Properties.ImplicitAnimations = implicitAnimationVisual;
shadow.DropShadow.ImplicitAnimations = implicitAnimationShadow;

Wrapping up

The visual power now being made available to developers through the Windows UI platform APIs has basically always been a part of the UI framework; it just hasn’t been accessible until now. Think of this as a UI nuclear reactor being handed over to you to play with. With this awesome power, however, also comes the responsibility to create sweet UI and beautiful interactions. Go forth and be amazing.

To learn more about the topics covered in this post, you are encouraged to voraciously consume the following articles and videos:

Using color fonts for beautiful text and icons

In this post, we’ll introduce you to a text technology called color fonts. We’ll discuss what color fonts are, when they can be useful and how to use them in your Windows 10 apps.

What are color fonts?

Color fonts, also referred to as “multicolor fonts” or “chromatic fonts,” are a relatively new font technology that allows font designers to use multiple colors within each glyph of the font. Color fonts allow apps and websites to draw multicolored text with less code and more robust operating system support than ad-hoc techniques implemented above the text stack.

Most fonts for reading and writing—the fonts you are probably most familiar with—are not color fonts. These fonts define only the shape of the glyphs they contain, either with vector outlines or monochromatic bitmaps. At draw time, a text renderer fills the glyph shape using a single color (the “font color”) specified by the app or document being rendered.

Color fonts, on the other hand, contain color information in addition to shape information. Some approaches even allow fonts to include multiple color palettes, giving the font artistic flexibility. Color fonts typically include fallback information for platforms that do not support color fonts or for scenarios in which color functionality has been disabled. In those situations, color fonts are rendered as normal monochromatic fonts.

One color font you may be familiar with is Segoe UI Emoji—the default font used in Windows to display emoji. Below, you can see an example of a glyph from Segoe UI Emoji rendered in monochrome (left) and in color (right).

Why use color fonts?

Now that you know what color fonts are, let’s talk about how they can be useful.

Color fonts were originally designed to enable multicolored emoji in text communication scenarios. They excel at that task, but they are useful for other scenarios as well. Color fonts offer a way to implement rich text effects with the simplicity and functionality of regular fonts. To apps and the operating system, text rendered in a color font is the same as any other text: It can be copied and pasted, parsed by accessibility tools and so on.

Color fonts are a better alternative to raster graphics for rich text scenarios like website headers or document section titles. Although raster graphics are commonly used in these scenarios, they do not scale well to all display sizes, nor do they provide the same accessibility features as real text. If you find yourself frequently generating raster images of text from multicolored artwork, consider using a color font instead.

Color fonts can also be used for your app’s iconography. Some app developers prefer using icon fonts to standalone image files, due to the convenience and layout functionality offered by fonts. With color fonts, you can pack rich, scalable, full-color icons into a single icon font.

(Note: Starting in Windows 10 Creators Update, you can also achieve scalable vector iconography by using standalone SVG images directly in your XAML app. For more information, see Vector iconography: Using SVG images in your app.)

What kinds of color fonts does Windows support?

The OpenType specification defines several ways to embed color information in a font. Starting in Windows 10 Anniversary Update, Windows supports all of these approaches. The different approaches are summarized below.

Vector-based color fonts define glyph shapes using mathematical curves and lines. They may use the traditional font outline syntax coupled with color palettes (via OpenType’s ‘COLR’ and ‘CPAL’ tables), or they may use embedded SVG assets (via OpenType’s ‘SVG ’ table). These formats excel at representing most iconography compactly, and as vectors, they offer infinite scalability.

Bitmap-based color fonts define glyph shapes using embedded raster graphics, such as PNG images. They may use OpenType’s ‘CBDT’ and ‘CBLC’ tables, or they may use OpenType’s ‘sbix’ table. This approach makes it straightforward to control every pixel of a glyph’s shape and provide photorealistic content, but designers must provide multiple image sizes to ensure high-quality visual scaling.

Using color fonts

From both the developer’s perspective and the user’s perspective, color fonts are “just fonts.” They can be installed and uninstalled from the system in the same way as monochromatic fonts, they can be included in your app package as a local asset, or they can be used as a web font by your website.

In the XAML and Microsoft Edge frameworks, you can style just about any text with a color font in the same way as a regular font, and by default, your text will be rendered in color. However, if your app operates at a lower level and calls Direct2D APIs (or Win2D APIs) to render its text, then it must explicitly request color font rendering.

Using color fonts in XAML

The XAML platform’s text elements (like TextBlock, TextBox, RichEditBox, and FontIcon) support color fonts by default. Simply style your text with a color font, and the styled text will be rendered in color. The following code example shows how to style a TextBlock with a color font that has been packaged with your app assets. (The same technique applies to regular fonts.)


<TextBlock FontFamily="Assets/MyColorFont.otf#MyFontFamilyName">Here is some text.</TextBlock>

Applying a color font to a XAML TextBlock

The FontFamily property points to the relative location of a font file that has been added to the app package. Since a single font file may include multiple font families, you also need to specify the desired font family using the hash syntax illustrated above.

If you never want your XAML text element to render multicolor text, set its IsColorFontEnabled property to false. For example, you may choose to have your app render monochromatic text when accessibility features are enabled.
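
For instance, here is a minimal sketch (the element name BannerText is hypothetical) that disables color rendering for a single TextBlock when the system’s high contrast setting is on, using the AccessibilitySettings class from Windows.UI.ViewManagement:


// BannerText is a hypothetical TextBlock declared in the page's XAML and
// styled with a color font.
var accessibilitySettings = new Windows.UI.ViewManagement.AccessibilitySettings();

// Fall back to monochromatic rendering when a high contrast theme is active.
BannerText.IsColorFontEnabled = !accessibilitySettings.HighContrast;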

Using color fonts in Microsoft Edge

As with XAML, Edge supports rendering color fonts by default in websites and web apps, including the XAML WebView control. Simply use HTML and CSS to style your text with a color font, and the styled text will be rendered in color.

Using color fonts in Direct2D

In contrast to the UI frameworks, the lower-level graphics APIs, such as Direct2D and DirectWrite, do not render color glyphs by default. This is to avoid unexpected behavior changes in text-rendering apps that were designed prior to color font support.

If your app renders text with Direct2D’s DrawText and DrawTextLayout APIs, you must “opt in” to color glyph rendering. To do so, pass the D2D1_DRAW_TEXT_OPTIONS_ENABLE_COLOR_FONT flag to the relevant drawing method. The following code example shows how to call Direct2D’s DrawText method to render a string in a color font:


// If m_textFormat points to a font with color glyphs, then the following 
// call will render m_string using the color glyphs available in that font.
// Any monochromatic glyphs will be filled with m_defaultFillBrush.
m_deviceContext->DrawText(
    m_string->Data(),
    m_string->Length(),
    m_textFormat.Get(),
    m_layoutRect,
    m_defaultFillBrush.Get(),
    D2D1_DRAW_TEXT_OPTIONS_ENABLE_COLOR_FONT
    ); 

Drawing multicolored text with Direct2D’s DrawText method

Using color fonts in Win2D

Like Direct2D, Win2D’s text drawing APIs do not render color glyphs by default.

To opt in to color glyph rendering with Win2D, set the EnableColorFont options flag in the text format object your app passes to the text drawing method. The following code example shows how to render a string in a color font using Win2D:


// The text format that will be used to draw the text. (Declared elsewhere
// and initialized elsewhere by the app to point to a color font.)
CanvasTextFormat m_textFormat;

// Set the EnableColorFont option.
m_textFormat.Options = CanvasDrawTextOptions.EnableColorFont;

// If m_textFormat points to a font with color glyphs, then the following 
// call will render m_string using the color glyphs available in that font.
// Any monochromatic glyphs will be filled with m_color.
drawingSession.DrawText(
    m_string,
    m_point,
    m_color,
    m_textFormat
    );

Drawing multicolored text with Win2D’s DrawText method

Building OpenType SVG color fonts

Color fonts are a relatively recent development in font technology, so support among font-building tools is still in its early stages. Not all types of color font are supported by all font tools, but support continues to improve as color fonts gain popularity.

Building a font from scratch is a complex process, and it’s more than we can cover in this blog post. But color fonts aren’t just for professional type designers—if you’re an app or web designer with a monochromatic icon font, and you’d like to upgrade it to a color font, we’ve developed a small tool to help make the process easier: the OpenType SVG Font Editor.

This app lets you take an existing font and add color by embedding your own SVG artwork for each glyph using a simple drag-and-drop interface. SVG is a popular vector art format supported by tools like Adobe Illustrator and Inkscape. On platforms that support OpenType SVG fonts (like Windows apps, Edge and Firefox), the color glyphs are rendered. Other platforms will automatically fall back to the monochromatic glyphs. For more information, please see the OpenType SVG Font Editor’s GitHub page.

The tool was developed by a group of Microsoft interns on the Windows graphics team. We found the tool useful, so we’ve made it available as an open-source project for others to use and improve.

Conclusion

Color fonts are an exciting new technology that unlocks richer text scenarios than were previously possible without sacrificing platform support for accessibility, fallback, scalability, printing and complex font capabilities. For more information, please see the following resources:

Introducing XAML Standard and .NET Standard 2.0

XAML Standard

We are pleased to announce XAML Standard, a standards-based effort to unify XAML dialects across XAML-based technologies such as UWP and Xamarin.Forms.

XAML Standard is a specification that defines a standard XAML vocabulary. With that vocabulary, frameworks that support XAML Standard can share common XAML-based UI definitions. The goal is for the first version, XAML Standard 1.0, to be available later this year.

Post-specification plans include support for XAML Standard in Xamarin.Forms and UWP. You can continue developing your UWP and Xamarin.Forms apps as you do today. When XAML Standard support is enabled, you will be able to reuse and share UI definitions between the frameworks and expand into more platforms.

To visualize what this support would look like, consider how the same UI is written today in Xamarin.Forms and in UWP: a UWP page displays text with <TextBlock />, while the equivalent Xamarin.Forms page uses <Label />.

Once XAML Standard is supported by Xamarin.Forms, you will be able to use <TextBlock /> in a Xamarin.Forms app targeting iOS and Android instead of needing to know and use the Xamarin.Forms-specific <Label />. In addition to TextBlock, a number of other common elements are currently proposed for standardization.

We are at the beginning of a journey that makes it easy for you to reuse your XAML source files between some simple Xamarin.Forms and UWP views. For example, consider a Settings.xaml page, which typically contains some text, toggle switches and a few buttons. You’d only need to design and create one XAML file to describe this UI, and it could be used everywhere.

Nothing changes for existing developers – you can continue to use the same APIs you have always used in both frameworks. XAML Standard will help you reuse and share any common UI code that you wish to share between endpoints.

The XAML Standard v1 draft spec is being defined in the open; we encourage you to start a discussion or give us direct feedback in the GitHub repo.

.NET Standard 2.0

We are also happy to announce .NET Standard 2.0! .NET Standard is the set of APIs that work in all .NET implementations. A good analogy is HTML5 today: there are many different browsers that implement HTML parsing and rendering, but the HTML standard is the common glue that holds the web together and allows for interoperability.

.NET Standard was introduced in June of 2016 to bring consistency to the .NET ecosystem. With the .NET Framework, Xamarin & Mono, .NET Core and then UWP, there were many different implementations of .NET, and it was very difficult to write code or a library that could work across all of them. Using .NET Standard and the tooling in Visual Studio, you can build libraries and NuGet packages that work everywhere .NET runs.

The feedback we received after .NET Standard’s introduction last year was generally phrased as “We like the direction you’re going in, but the set of APIs isn’t large enough to be useful.” The goal with .NET Standard 2.0 is to respond to that feedback and add a large set of .NET’s “greatest hits” to .NET Standard. To do this, we looked at the intersection of APIs between the .NET Framework and Mono. This led to the 2.0 definition including over 20,000 new APIs, and those APIs enable the top 70% of existing NuGet packages to work.

With the addition of .NET Standard 2.0 support for UWP with the Fall Creators Update, .NET developers will be able to share code across all Windows 10 devices, in the cloud and through the rest of the .NET ecosystem. This will also make it easier to reuse existing WinForms and WPF code as many of the most frequently used APIs like DataSet/DataTable, and popularly requested APIs like SqlClient, are now part of .NET Standard 2.0.
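
To illustrate, here is a minimal sketch of code that could live in a .NET Standard 2.0 class library (the ReportBuilder type and its method are hypothetical). Because DataTable is part of .NET Standard 2.0, the same compiled library can be referenced from UWP apps on the Fall Creators Update, the .NET Framework, .NET Core, Xamarin and Mono:


// A hypothetical helper in a class library targeting netstandard2.0.
using System.Data;

public static class ReportBuilder
{
    public static DataTable BuildOrdersTable()
    {
        // DataTable is one of the "greatest hits" APIs added in .NET Standard 2.0.
        var table = new DataTable("Orders");
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Customer", typeof(string));
        table.Rows.Add(1, "Contoso");
        return table;
    }
}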

You can learn more about .NET Standard here.

Forming the shape of the future

We invite you, the .NET community, to join the discussion and help shape the future of XAML Standard and .NET Standard. You can provide your feedback, both suggestions and issues, at the following locations: .NET Standard on GitHub and XAML Standard on GitHub.

Resources: