
Animations with the Visual Layer

When the layout of your Universal Windows Platform (UWP) app changes, there is often a slight pause as your app rearranges content to fit the new window size or orientation. Composition APIs let you create smooth-as-butter animations between these states so your layout changes won’t jar your users. After all, layout changes are a basic fact of the app life cycle; that doesn’t mean we can’t handle them gracefully.

image1

Layout animations aren’t completely new, of course. UWP has always allowed you to hook up XAML transitions. These animations have tended to be, let’s just say, somewhat limited. By using implicit animations in the Visual Layer, however, you gain access to low-level animations that run at 60 frames per second in a separate process, along with all the freedom you need to create the layout animations you want.

Implicit versus Explicit animations

So what exactly is an implicit animation? This is best understood by contrasting it with explicit animations. Explicit animations are animations that you start running yourself in code-behind by calling a method like StartAnimation on a Visual.

For instance, here’s an example of a gear animation and the code used to get it started.

image2

_gearMotionScalarAnimation.Duration = TimeSpan.FromSeconds(secondsPerRotation); 
_gearVisuals.First().StartAnimation("RotationAngleInDegrees", _gearMotionScalarAnimation);
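
The snippet above assumes the _gearMotionScalarAnimation field has already been created elsewhere in the class. As a rough sketch (the expression key frames and the infinite iteration behavior here are assumptions for illustration, not the sample’s exact setup), it might be built like this:

_gearMotionScalarAnimation = _compositor.CreateScalarKeyFrameAnimation();

// Spin from the current angle through one full turn per iteration
var linearEasing = _compositor.CreateLinearEasingFunction();
_gearMotionScalarAnimation.InsertExpressionKeyFrame(0.0f, "this.StartingValue", linearEasing);
_gearMotionScalarAnimation.InsertExpressionKeyFrame(1.0f, "this.StartingValue + 360", linearEasing);

// Keep rotating until the animation is explicitly stopped
_gearMotionScalarAnimation.IterationBehavior = AnimationIterationBehavior.Forever;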

Implicit animations, on the other hand, are configured and then triggered by property changes. There’s a fire-and-forget quality to implicit animations. All you have to do is wire them up and they will just keep working for you, getting recreated and then cleaned up every time the trigger fires.

Here’s how you create and configure an implicit animation in four easy steps:

  1. Create an implicit animation collection.
  2. Create the animation or group of animations.
  3. Associate the animation value to a trigger key in the collection.
  4. Add the collection to your Visual’s ImplicitAnimations property.

The following code is used to create the fruit animations in the animated GIF at the top of this post:

// Create an implicit animation collection (just one member in this case) 
var implicitAnimationCollection = _compositor.CreateImplicitAnimationCollection(); 
 
// Create the animation 
var offsetKeyFrameAnimation = _compositor.CreateVector3KeyFrameAnimation(); 
offsetKeyFrameAnimation.Target = "Offset"; 
 
// Final Value signifies the target value to which the visual will animate 
// in this case it is defined by the new Offset value 
offsetKeyFrameAnimation.InsertExpressionKeyFrame(1.0f, "this.FinalValue"); 
offsetKeyFrameAnimation.Duration = TimeSpan.FromSeconds(3); 
 
// Associate the animation to the trigger 
implicitAnimationCollection["Offset"] = offsetKeyFrameAnimation; 
 
// Finally, add the implicit animation to individual Visuals 
foreach (var child in _root.Children) 
{ 
    child.ImplicitAnimations = implicitAnimationCollection; 
}

There are two things worth noting about this code:

  • First, only one trigger property is being used, Offset, so the ImplicitAnimationCollection has only one member.
  • Second, the animation uses the Offset property as a trigger, but also uses the Offset property for the animation itself. This is actually pretty common since you will usually want to animate the same property that sets off the transition – for instance, if an element’s scale or rotation changes you would want to create a transition animation between the start and end scale sizes or the start and end rotations.

For extra visual flair you could also decide to have a change in a Visual’s Offset trigger more than one animation – for instance, a position animation as well as a bit of spinning and pulsing. Go to the Windows UI Dev Labs GitHub repository for the full source code for this implicit animations sample.
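
As a rough sketch of that multi-animation idea, the same Offset trigger can point at an animation group instead of a single animation (the extra scale “pulse” below is an assumption for illustration, not part of the sample):

// Group several animations so a single Offset change triggers them all
var animationGroup = _compositor.CreateAnimationGroup();
animationGroup.Add(offsetKeyFrameAnimation);

// An illustrative second animation: a quick pulse of the Scale property
var scaleAnimation = _compositor.CreateVector3KeyFrameAnimation();
scaleAnimation.Target = "Scale";
scaleAnimation.InsertKeyFrame(0.5f, new System.Numerics.Vector3(1.1f, 1.1f, 1.0f));
scaleAnimation.InsertKeyFrame(1.0f, new System.Numerics.Vector3(1.0f, 1.0f, 1.0f));
scaleAnimation.Duration = TimeSpan.FromSeconds(3);
animationGroup.Add(scaleAnimation);

// The whole group now plays whenever a Visual's Offset changes
implicitAnimationCollection["Offset"] = animationGroup;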

Using implicit animations as layout animations

In order to apply implicit animations to your XAML layout, the only additional step you need to take, beyond what was covered in the previous section, is to grab the backing Visuals for any UIElements you intend to animate.

image3

The Layout Animations demo from the Windows UI Dev Labs Sample Gallery on GitHub is basically a GridView control with several images in its Items collection.

<GridView x:Name="gridView" ContainerContentChanging="gridView_ContainerContentChanging"> 
    <GridView.Items> 
        <Image Source="1.jpg" Width="235" Height="200" Stretch="UniformToFill"/> 
        <Image Source="2.jpg" Stretch="UniformToFill"/> 
        <Image Source="3.jpg" Stretch="UniformToFill"/> 
        <Image Source="4.jpg" Stretch="UniformToFill"/> 
        . . . 
    </GridView.Items> 
</GridView>

Whenever the XAML page is resized, images inside the GridView are automatically shifted around by the underlying framework, altering the Offset property of each Visual as it moves from one position to another.

Here’s how you create and configure an implicit animation and add it to XAML elements in five easy steps:

  1. Create an implicit animation collection.
  2. Create the animation.
  3. Associate the animation value to a trigger key in the collection.
  4. In the ContainerContentChanging event handler, use the interop method GetElementVisual to peel off the backing Visual from a UIElement.
  5. Add the collection to your backing Visual’s ImplicitAnimations property.

The first three steps can all be done in the page’s constructor.

_compositor = ElementCompositionPreview.GetElementVisual(this).Compositor; 
 
// Create ImplicitAnimations Collection.  
_elementImplicitAnimation = _compositor.CreateImplicitAnimationCollection(); 
 
// Define trigger and animation that should play when the trigger is triggered.  
_elementImplicitAnimation["Offset"] = createOffsetAnimation();
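
The createOffsetAnimation helper isn’t shown above; a minimal sketch, reusing the Offset key-frame pattern from earlier in this post (the duration is an assumption), could look like this:

private Vector3KeyFrameAnimation createOffsetAnimation()
{
    // Animate Offset to whatever new value the layout pass assigns
    var offsetAnimation = _compositor.CreateVector3KeyFrameAnimation();
    offsetAnimation.Target = "Offset";
    offsetAnimation.InsertExpressionKeyFrame(1.0f, "this.FinalValue");
    offsetAnimation.Duration = TimeSpan.FromSeconds(0.4);
    return offsetAnimation;
}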

In the Layout Animations demo, the final two steps are completed in the GridView’s ContainerContentChanging event handler.

private void gridView_ContainerContentChanging(ListViewBase sender, ContainerContentChangingEventArgs args) 
{ 
    var photo = args.ItemContainer; 
 
    var elementVisual = ElementCompositionPreview.GetElementVisual(photo); 
    elementVisual.ImplicitAnimations = _elementImplicitAnimation; 
}

It really is that simple to add fast, fluid and beautiful layout animations to your XAML page with the Visual layer.

Connected animations

For navigation transitions between pages, the Visual Layer provides a different mechanism known as connected animations to help you make your UI sweeter. Connected animations help the user stay oriented when she is performing common tasks such as context switching from a list of items to a details page.

image4

You can download the full source for this connected animations sample on GitHub.
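
Connected animations are configured through the XAML-level ConnectedAnimationService rather than directly on Visuals. A minimal sketch of the list-to-details pattern might look like the following (the element names, the "detailImage" key and the DetailsPage type are assumptions for illustration):

// On the source (list) page: register the element to animate, then navigate
private void Item_Click(object sender, RoutedEventArgs e)
{
    ConnectedAnimationService.GetForCurrentView().PrepareToAnimate("detailImage", SourceImage);
    Frame.Navigate(typeof(DetailsPage));
}

// On the destination (details) page: pick the animation back up and start it
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);
    var animation = ConnectedAnimationService.GetForCurrentView().GetAnimation("detailImage");
    animation?.TryStart(DestinationImage);
}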

Wrapping up

The visual power being made available to developers through the Composition APIs has basically always been a part of the UI frameworks. It just hasn’t been accessible until now. Think of this as a UI nuclear reactor being handed over to you to play with. With this awesome power, however, also comes the responsibility to create sweet UI and beautiful interactions. Go forth and be amazing.

To learn more about the topics covered in this post, you are encouraged to voraciously consume the following articles and videos:

Try out your animation skills – download Visual Studio and get designing!

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

Smooth Interaction and Motion with the Visual Layer

The Composition APIs come with a robust animation engine that provides quick and fluid motion running in a separate process from your Universal Windows Platform (UWP) app. This provides a consistent 60 frames per second when running your app on an IoT device as well as on a screaming gaming machine. It is, quite simply, fast.

blogpost4-1

The Composition APIs also provide something you probably have never had access to before: the ability to create high-performing, low-level manipulation-driven custom animations like the one shown above.  Preview releases of the Composition APIs have already given you the ability to control animations with the XAML ScrollViewer’s ManipulationPropertySet. The Windows 10 Anniversary Update allows you to go even further, with the goal of empowering developers to use many types of input to drive animation.

This post will briefly cover the basics of expression animations. It will move on to a demonstration of how to drive expression animations from a ScrollViewer ManipulationPropertySet and, more importantly, why you really want to, even if you don’t realize it yet. Finally, it will touch on the new InteractionTracker and give you an indication of just how much more powerful you are today as a developer than you were at the end of July.

A fast and fluid overview of expression animations

The Visual Layer supports both keyframe animations as well as expression animations. If you have worked with XAML animations before, then you are probably already familiar with how keyframes work. In a keyframe animation, you set values for some property you want to change over time and also assign the duration for the change: in the example below, a start value, a middle value, and then an ending value. The animation system will take care of tweening your animation – in other words, generating all the values between the ones you have explicitly specified.

ScalarKeyFrameAnimation blurAnimation = _compositor.CreateScalarKeyFrameAnimation();
blurAnimation.InsertKeyFrame(0.0f, 0.0f);
blurAnimation.InsertKeyFrame(0.5f, 100.0f);
blurAnimation.InsertKeyFrame(1.0f, 0.0f);
blurAnimation.Duration = TimeSpan.FromSeconds(4);
blurAnimation.IterationBehavior = AnimationIterationBehavior.Forever;
_brush.StartAnimation("Blur.BlurAmount", blurAnimation);

A keyframe animation is a fire-and-forget mechanism that is time based. There are situations, however, when you need your animations to be coordinated and actually driving each other instead of simply moving in synchronized fashion.

blogpost4-2

In the demo above (source code), each gray gear is animated based on the animation of the gear preceding it. If the preceding gear suddenly goes faster or reverses direction, it forces the following gear to do the same. Keyframe animations can’t create motion effects that work in this way, but expression animations can. They are able to do so because, while keyframe animations are time based, expression animations are reference based.

The critical code that hooks up the gears for animation in the demo is found in the following section that calls the CreateExpressionAnimation method on the Compositor instance. The expression basically says that the animation should reference and be driven by the RotationAngleInDegrees property of the Visual that is indicated by the parameter “previousGear”. In the next line, the reference parameter is assigned. In the final line, the current Visual’s RotationAngleInDegrees property is finally animated based on the value referred to in the ExpressionAnimation object.

private void ConfigureGearAnimation(Visual currentGear, Visual previousGear)
{
    // If rotation expression is null then create an expression of a gear rotating the opposite direction
    _rotationExpression = _rotationExpression ?? _compositor.CreateExpressionAnimation("-previousGear.RotationAngleInDegrees");

    // put in placeholder parameters
    _rotationExpression.SetReferenceParameter("previousGear", previousGear);

    // Start the animation based on the Rotation Angle in Degrees.
    currentGear.StartAnimation("RotationAngleInDegrees", _rotationExpression);
}

But if an animation can be driven by another animation, you may be wondering, couldn’t we also drive an animation with something more concrete like user input? Why, yes. Yes, we can.

The beauty of the ScrollViewer ManipulationPropertySet

Driving an animation from a ScrollViewer using XAML-Composition interop is fairly easy. With just a few lines of code, you can add an animation to a pre-existing ScrollViewer control by taking advantage of the GetScrollViewerManipulationPropertySet method on the ElementCompositionPreview class.

You would use this technique if you wanted to add a parallax effect to your XAML or to create a sticky header that stays in place as content scrolls beneath it. In the demo illustrated below (source code), a ScrollViewer is even used to drive a parallax effect on a ListView.

blogpost4-3

Adding parallax behavior to a XAML page can be accomplished in just a few lines, as the following sample code by James Clarke demonstrates.

CompositionPropertySet scrollerManipProps = ElementCompositionPreview.GetScrollViewerManipulationPropertySet(myScroller);

Compositor compositor = scrollerManipProps.Compositor;

// Create the expression
ExpressionAnimation expression = compositor.CreateExpressionAnimation("scroller.Translation.Y * parallaxFactor");

// wire the ParallaxMultiplier constant into the expression
expression.SetScalarParameter("parallaxFactor", 0.3f);

// set "dynamic" reference parameter that will be used to evaluate the current position of the scrollbar every frame
expression.SetReferenceParameter("scroller", scrollerManipProps);

// Get the background image and start animating its offset using the expression
Visual backgroundVisual = ElementCompositionPreview.GetElementVisual(background);
backgroundVisual.StartAnimation("Offset.Y", expression);

The even more beautiful InteractionTracker

Driving expression animations with a ScrollViewer is extremely powerful, but what if you want to drive animations using touch gestures instead? What if you want to pull items toward you with your finger, as in the demo below (source code), or animate multiple flying images across and into the screen as happens in the demo at the top of this post (source code)?

blogpost4-4

In order to achieve these effects, you would use the new InteractionTracker and VisualInteractionSource classes. InteractionTracker is a state machine that can be driven by active input. This is what you hook up to your animations. The VisualInteractionSource class, on the other hand, determines what kind of input you will use to drive your InteractionTracker.

picture5

The following sample code demonstrates a basic implementation of an InteractionTracker that is driven by touch input. The viewportVisual is simply the backing Visual for the root element on the page. You use this as the VisualInteractionSource for the tracker. In doing so, you specify that you are tracking X and Y manipulations. You also indicate that you want to track inertial movement.

_tracker = InteractionTracker.Create(_compositor);

var interactionSource = VisualInteractionSource.Create(viewportVisual);

interactionSource.PositionXSourceMode = InteractionSourceMode.EnabledWithInertia;
interactionSource.PositionYSourceMode = InteractionSourceMode.EnabledWithInertia;

_tracker.InteractionSources.Add(interactionSource);

Hooking the tracker up to an expression animation works basically the same way as hooking up a gear Visual to another gear Visual, as you did earlier. You call the CreateExpressionAnimation factory method on the current Compositor and reference the Position property of the tracker. You then use that expression to animate the Offset property of the Visual you want to put in motion.

var positionExpression = _compositor.CreateExpressionAnimation("-tracker.Position");
positionExpression.SetReferenceParameter("tracker", _tracker);

contentVisual.StartAnimation("Offset", positionExpression);
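
One piece the snippet above doesn’t show is how touch input actually reaches the tracker. The VisualInteractionSource only processes pointer contacts that you explicitly redirect to it, typically from a PointerPressed handler on the root element. A minimal sketch, assuming the source created above is kept in a field named _interactionSource and the root element is named root:

private void root_PointerPressed(object sender, PointerRoutedEventArgs e)
{
    if (e.Pointer.PointerDeviceType == Windows.Devices.Input.PointerDeviceType.Touch)
    {
        // Hand this touch contact to the Composition input system so the
        // VisualInteractionSource (and the InteractionTracker behind it) can track it
        _interactionSource.TryRedirectForManipulation(e.GetCurrentPoint(root));
    }
}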


Those are the basics of driving animation from almost any input. What you do with this amazing new power is entirely up to you.

Wrapping up

Expression animations and Visual Layer Interactions are both topics that can become very deep, very fast. To help you through these deeper waters, we highly recommend the following videos and articles:

Get started now – download Visual Studio.

The Windows team would love to hear your feedback. Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

Creating Beautiful Effects for UWP

The Composition APIs allow you to enhance the appeal of your Universal Windows Platform (UWP) app with a wide range of beautiful and interesting visual effects. Because they are applied at a low level, these effects are highly efficient. They run at 60 frames per second, providing smooth visuals whether your app is running on an IoT device, a smartphone, or a high-end gaming PC.

Many visual effects implemented through the Composition APIs, such as blur, use the CompositionEffectBrush class in order to apply effects. Additional examples of Composition effects include 2D affine transforms, arithmetic composites, blends, color source, composite, contrast, exposure, grayscale, gamma transfer, hue rotate, invert, saturate, sepia, temperature and tint.  A few very special effects go beyond the core capabilities of the effect brush and use a slightly different programming model. In this post, we’re going to look at an effect brush-based effect, as well as two instances of visual flare that go beyond the basic brush: drop shadow and scene lighting.

Blur

The blur effect is one of the subtlest and most useful visual effects in your tool chest. While many visual effects are designed to draw the user’s attention, the blur’s purpose in user experience design is to do the opposite – basically saying to the user, “Move along, there’s nothing to see here.” By making portions of your UI a little fuzzier, you can direct the user’s attention toward other areas of the screen instead. Aesthetically, blur has a secondary quality of transforming objects into abstract shapes that simply look beautiful on the screen.

flower blur effect

Until you get used to the programming model, implementing effects can seem a little daunting. For the most part though, all effects follow a few basic recipes. We’ll use the following in order to apply a blur effect:

  1. Prepare needed objects such as the compositor
  2. Describe your effect with Win2D
  3. Compile your effect
  4. Apply your effect to your image

The compositor is a factory object that creates the classes you need to play in the Visual Layer. One of the easiest ways to get an instance of the compositor is to grab it from the backing visual for the current UWP page.

_compositor = ElementCompositionPreview.GetElementVisual(this).Compositor;

To use composition effects, you also need to include the Win2D.uwp NuGet package in your Visual Studio project. To promote consistency across UWP, the Composition effects pipeline was designed to reuse the effect description classes in Win2D rather than create a parallel set of classes.

win2d UWP nuget package

Once the prep work is done, you will need to describe your (Win2D) Gaussian blur effect. The following code is a simplified version of the source for the BlurPlayground reference app found in the Windows UI Dev Labs GitHub repository, should you want to dig deeper later.

var graphicsEffect = new GaussianBlurEffect()
{
    Name = "Blur",
    Source = new CompositionEffectSourceParameter("Backdrop"),
    BlurAmount = (float)BlurAmount.Value,
    BorderMode = EffectBorderMode.Hard
};

In this code, you are basically creating a hook into the definition with the Backdrop parameter. We’ll come back to this later. Though it isn’t obvious from the code, you are also initializing the GaussianBlurEffect’s BlurAmount property – which determines how blurry the effect is – from the Value property of a Slider control named BlurAmount. This isn’t really binding, but rather simply setting a starting value.

After you’ve defined your effect, you need to compile it like this:

var blurEffectFactory = _compositor.CreateEffectFactory(graphicsEffect,
    new[] { "Blur.BlurAmount" });

_brush = blurEffectFactory.CreateBrush();


Compiling your blur effect probably seems like an odd notion here. There’s basically a lot of magic being done behind the scenes on your behalf. For one thing, the blur effect is being run out of process on a thread that has nothing to do with your app. Even if your app hangs, the effect you compile is going to keep running on that other thread at 60 frames per second.

This is also why there are a lot of apparently magic strings in your Composition effect code; things are being hooked up at the system level for you. “Blur.BlurAmount” lets the compiler know that you want to keep that property of the “Blur” object accessible in case you need to change its value later. The following sample handler for the Slider control will dynamically reset the blur amount on your compiled effect, allowing your users to change the blur by simply moving the slider back and forth.

private void BlurAmount_ValueChanged(object sender, RangeBaseValueChangedEventArgs e)
{
    // Get slider value
    var blur_amount = (float)e.NewValue;

    // Set new BlurAmount
    _brush.Properties.InsertScalar("Blur.BlurAmount", blur_amount);
}


The last step in implementing the blur is to apply it to an image. In this sample code, the Image control hosts a picture of a red flower and is named “BackgroundImage.”

<Image x:Name="BackgroundImage" 
        Grid.Column="1" 
        SizeChanged="BackgroundImage_SizeChanged"
        Margin="10"/>


To apply the blur to your image control, you need to attach your compiled effect brush to a SpriteVisual, a special Composition class that can actually be rendered to the display. To do that, in turn, you have to create a CompositionBackdropBrush instance, a class whose main purpose is to feed whatever is rendered behind the sprite visual into the effect as its source.

var destinationBrush = _compositor.CreateBackdropBrush();
_brush.SetSourceParameter("Backdrop", destinationBrush);

var blurSprite = _compositor.CreateSpriteVisual();
blurSprite.Brush = _brush;

ElementCompositionPreview.SetElementChildVisual(BackgroundImage, blurSprite);

Once everything is hooked up the way you want and the blur has been applied to a new sprite visual, you call the SetElementChildVisual method to insert the sprite visual into the BackgroundImage control’s visual tree. Because it is the last element in the tree, it gets placed, visually, on top of everything else. And voila, you have a blur effect.
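
One detail the snippet glosses over: a SpriteVisual has no intrinsic size, so it won’t cover the image until you give it one. That is what the BackgroundImage_SizeChanged handler referenced in the XAML above is for; a minimal sketch (assuming the sprite visual is kept in a field) might look like this:

private void BackgroundImage_SizeChanged(object sender, SizeChangedEventArgs e)
{
    if (_blurSprite != null)
    {
        // Keep the effect sprite the same size as the image it covers
        _blurSprite.Size = new System.Numerics.Vector2(
            (float)BackgroundImage.ActualWidth,
            (float)BackgroundImage.ActualHeight);
    }
}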

flower blur effect

Effect brushes can also be animated over time by using the Composition animation system in concert with the effects pipeline. The animation system supports keyframe and expression animations, of which keyframe is generally better known. In a keyframe animation, you typically set some property values you want to change over time and set the duration for the change: in this case a start value, a middle value and then an ending value. The animation system will take care of tweening your animation – in other words, generating all the values between the ones you have explicitly specified.

ScalarKeyFrameAnimation blurAnimation = _compositor.CreateScalarKeyFrameAnimation();
blurAnimation.InsertKeyFrame(0.0f, 0.0f);
blurAnimation.InsertKeyFrame(0.5f, 100.0f);
blurAnimation.InsertKeyFrame(1.0f, 0.0f);
blurAnimation.Duration = TimeSpan.FromSeconds(4);
blurAnimation.IterationBehavior = AnimationIterationBehavior.Forever;
_brush.StartAnimation("Blur.BlurAmount", blurAnimation);

You’ll notice that when it comes time to apply the animation to a property, you once again need to refer to the magic string “Blur.BlurAmount” in order to access that property since it is running in a different process.

By animating your effects and chaining them together with other effects, you can really start to unlock the massive power behind the Composition effects pipeline to create beautiful transitions like the one below.

sand dune blur animation

Here the blur effect is combined with a scaling animation and an opacity animation in order to effortlessly draw the user’s attention to the most significant information. In addition, effective and restrained use of effects and animations, as in this sample, creates a sense of pleasure and surprise as you use the app.

Drop Shadow

A drop shadow is a common and effective way to draw attention to a screen element by making it appear to pop off of the screen.

drop shadow effect example

The simplest (and probably most helpful) way to show how to implement a drop shadow is to apply one to a SpriteVisual you create yourself, in this case a red square with a blue shadow.

color shadow example

You create the actual drop shadow by calling the CreateDropShadow method on the compositor instance. You set the amount of offset you want, as well as the color of the shadow, then attach it to the main element. Finally, you add the SpriteVisual “myVisual” to the current page so it can be rendered.

_compositor = ElementCompositionPreview.GetElementVisual(this).Compositor;

// create a red sprite visual
var myVisual = _compositor.CreateSpriteVisual();
myVisual.Brush = _compositor.CreateColorBrush(Colors.Red);
myVisual.Size = new System.Numerics.Vector2(100, 100);

// create a blue drop shadow
var shadow = _compositor.CreateDropShadow();
shadow.Offset = new System.Numerics.Vector3(30, 30, 0);
shadow.Color = Colors.Blue;
myVisual.Shadow = shadow;

// render on page
ElementCompositionPreview.SetElementChildVisual(this, myVisual);


A shadow effect can be attached to basic shapes, as well as to images and text.
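
For text, the trick is to shape the shadow with the element’s alpha mask so it follows the glyph outlines rather than the TextBlock’s rectangular bounds. A minimal sketch, assuming a TextBlock named MyTextBlock and an empty host element named ShadowHost placed behind it in the XAML (both names are illustrative):

var compositor = ElementCompositionPreview.GetElementVisual(this).Compositor;

// Create a sprite, sized to the TextBlock, that will carry the shadow
var shadowVisual = compositor.CreateSpriteVisual();
shadowVisual.Size = new System.Numerics.Vector2(
    (float)MyTextBlock.ActualWidth, (float)MyTextBlock.ActualHeight);

// Shape the shadow with the text's alpha mask so it matches the glyphs
var shadow = compositor.CreateDropShadow();
shadow.Mask = MyTextBlock.GetAlphaMask();
shadow.Offset = new System.Numerics.Vector3(4, 4, 0);
shadow.Color = Colors.Black;
shadowVisual.Shadow = shadow;

// Host the shadow on the element sitting behind the TextBlock
ElementCompositionPreview.SetElementChildVisual(ShadowHost, shadowVisual);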

blur example

The Windows UI Dev Labs sample gallery even has an interop XAML control called CompositionShadow that does most of the work for you.

<common:CompositionShadow OffsetY="5" OffsetX="2"
        Width="200" Height="200"
        HorizontalAlignment="Left"
        VerticalAlignment="Top"
        Margin="10">
    <Image Source="ms-appx:///Assets/Ninjacat-3.png"
            HorizontalAlignment="Left"
            VerticalAlignment="Top"/>
</common:CompositionShadow>


Scene Lighting

One of the coolest effects to come with the Composition APIs is the new scene lighting effect. In the animated gif below, this effect is applied to a collection of images being displayed in a ListView.

scene lighting animation

While there isn’t space to go into the detailed implementation here, a very general recipe for creating this effect looks like this:

  1. Create various lights and place them in coordinate space.
  2. Identify objects to be lit by targeting the lights at the root visual or any other visuals in the visual tree.
  3. Use SceneLightingEffect in the EffectBrush to customize displaying the SpriteVisual.
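
To make the first two steps a bit more concrete, here is a minimal sketch using a point light (the light type, position, and target are assumptions for illustration; the ListView sample on GitHub goes considerably further):

var rootVisual = ElementCompositionPreview.GetElementVisual(this);
var compositor = rootVisual.Compositor;

// 1. Create a light and place it in the coordinate space of the root visual
var pointLight = compositor.CreatePointLight();
pointLight.CoordinateSpace = rootVisual;
pointLight.Color = Colors.White;
pointLight.Offset = new System.Numerics.Vector3(300, 200, 100);

// 2. Identify what gets lit by targeting the light at a visual (and its children)
pointLight.Targets.Add(rootVisual);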

Wrapping up

As mentioned, the best place to go in order to learn more about these and other effects is the repository for the Windows UI Dev Labs samples. You should also view these two short but excellent videos on the topic:

Get started on your own app with Visual Studio.

Designing for Intuitive Navigation

In the past several posts, we have covered user interface (UI) design concepts such as typography, visual cues, and design thinking. In today’s blog, however, we will look at a topic that has much less to do with solving UI design problems and much more to do with solving user experience (UX) design problems—that is, navigation.

Navigation involves structuring your app in a way that promotes ease and efficiency, not merely design aesthetics. Unlike the other topics covered so far, which rely in part on personal taste as well as an awareness of changing fashions, navigation—and UX in general—tends to rely heavily on usability research.

In this post, we will explore the three key principles that both research and experience have shown underpin good navigation design:

  • Consistency — meet user expectations
  • Simplicity — don’t do more than you need to
  • Clean interaction — keep out of your users’ way

Keeping these principles in mind as you design will help you to achieve the ideal navigational structure—that is, one that the user never notices.

Consistency

Navigation should be consistent with user expectations, leaning on standard conventions for icons, location and styling.

For example, in Figure 1 below, you can see the spots where users will typically expect to find functionality like the menu and back buttons. The user expectation that the menu button will be at the top-left of the screen is so strong that you can even consider using a non-traditional icon to represent it, although the traditional “hamburger” icon is often the preferred choice across most platforms. For the back button, it is better to stick with Windows convention and keep it in either the leftmost spot or, if there is a Windows button, in the second-to-the-left spot.

IMAGE1

Figure 1. Users expect to find certain buttons in certain places—e.g. the menu in the top left, and for UWP apps, the back button in the leftmost or second-to-the-leftmost spot. Sticking to these standard conventions helps users interpret the meaning of the buttons.

Placement of navigational elements should also change for different device families. For example, on tablets and laptops/PCs, the navigation pane is usually placed on the left side, whereas on mobile, it is on the top.

IMAGE2

Figure 2. Different device families have their own conventions for navigational elements. For example, the navigation pane typically appears on the left side of the screen for tablets, but up top for mobile devices.

Simplicity

Another important factor in navigation design is the Hick-Hyman Law, which states that the time it takes to make a decision increases with the number of choices. Applied to navigation, it encourages us to offer fewer options in the menu (see Figure 3 below): the more options there are, the slower user interactions with them will be, particularly when users are exploring a new app. The same law applies to media content. Rather than overwhelming the user with a vast selection of media options, consider providing brief tidbits for users to explore if they choose.

IMAGE3

Figure 3. On the left, notice there are fewer options for the user to select, whereas on the right, there are several. The Hick-Hyman Law indicates that the menu on the left will be easier for users to understand and utilize.

Clean interaction

The final key characteristic of navigation is clean interaction, which refers to the physical way that users interact with navigation across a variety of contexts.

This is one area where really putting yourself in the user’s position will inform your design. Try to understand your users and their behaviors. If you’re designing a cooking app, and you’re expecting it to be used in the kitchen, you might want to take into account that the user will probably want to avoid using food-covered fingertips to navigate to the next cooking step. Instead, the user might use a knuckle, the back of his or her hand, or even an elbow. This should influence the size of your touch targets and the appropriate spacing between navigational elements, at the very least.

You should also keep in mind which areas of the screen are considered easy to reach. These are known as interaction areas. In the mobile device illustration below (Figure 4), for example, the blue area represents the optimal touch points for users (in this case, a user with the phone held in her left hand). Here, users expend the least amount of effort to interact—remember that most users hold their phones with their left hands and interact with their thumbs. Correspondingly, the dark gray region requires somewhat greater effort for interaction than the blue, and the light gray area requires the greatest amount of effort overall.

IMAGE4

Figure 4. Mobile device interaction area.

Tablet devices introduce an additional complexity because users have multiple ways of holding their device. Typically, users grip a tablet with both hands along the sides. Figure 5 below shows the interaction area for the most common pose and grip of a tablet. Keep in mind as you design your navigation, however, that tablet users often switch between posing their devices in landscape and portrait orientations. Finally, note the alternative ways you yourself interact with tablets and consider whether your navigation is convenient for those scenarios, as well.

IMAGE5

Figure 5. Tablet device interaction area.

Rules of Thumb

Several rules of thumb help designers to encapsulate consistency, simplicity and clean interaction in their navigation design. Most of these come from the web design world and have been around for nearly a decade. These rules happen to work well for UWP apps, but as with any rules of thumb, use them as starting points and tweak as needed.

  1. Avoid deep navigational hierarchies. How many levels of navigation are best for your users? A top-level navigation and one level beneath it is usually plenty. If you go beyond three levels of navigation, then you break the principle of simplicity. Even worse, you risk stranding your user in a deep hierarchy that they will have difficulty leaving.
  2. Avoid too many navigational options. Three to six navigation elements per level seem about right. If your navigation needs more than this, especially at the top level of your hierarchy, then you might consider splitting your app into multiple apps, since you may be trying to do too much in one place. Too many navigation elements usually lead to inconsistent and unrelated objectives in your app.
  3. Avoid pogo-sticking. Pogo-sticking occurs when there is related content, but navigating to it requires the user to go up a level and then down again. Pogo-sticking violates the principle of clean interaction by requiring unnecessary clicks or interactions to achieve an obvious goal—in this case, looking at related content in a series. (The exception to this rule is in search and browse, where pogo-sticking may be the only way to provide the diversity and depth required.)
IMAGE6

Figure 6. Pogo-sticking to navigate through an app—the user has to go back (green back arrow) to the main page in order to navigate to the “Projects” tab.

Having an icon (swipe gesture in green) helps to resolve this issue, as you can see in Figure 7.

IMAGE7

Figure 7. You can resolve some pogo-sticking issues with an icon (note the swipe gesture in green).

  4. Mind your thumbs. Your users will typically use their thumbs to navigate your app. At the same time, they are also using their thumbs to grip their device, leading to a bit of clumsiness in interaction. To throw an extra wrench into this, thumbs are relatively big compared to other fingers and tend to hide the visual element with which a user is trying to interact. The technical term for this occurrence is occlusion. Keep occlusion in mind as you create your navigation structure.

Wrapping up

The goal of navigation design is to help your user move through your app without noticing your navigation structure at all. This is accomplished by making your navigation design simple and clean, typically through the reuse of the navigation idioms that everyone else uses whenever you can. By making your navigation uninteresting and consistent with standard conventions, you are actually helping your users to navigate your app intuitively.

For more information, refer to Navigation design basics for UWP apps and Navigation in UWP apps.