Apple Macbook Pro Retina Mid 2014 8GB/256GB – £690 del / 660 collected

Hi all,

Looking at switching back to Windows for a company laptop so my MBP is up for sale.

It's a Mid 2014, bought new by me in September 2014 (remainder of the 3-year warranty left).

It's the 256GB SSD / 8GB RAM model. Comes boxed with the charger (not sure if I have the power extension cable, I'll have to look around). Condition is good – no major marks or dents that I can see, and battery life is great.

Don't think there's any more to say… it's…

Apple Macbook Pro Retina Mid 2014 8GB/256GB – £690 del / 660 collected

Announcing Windows 10 Mobile Insider Preview Build 14327

Hi everyone,

Today we are releasing Windows 10 Mobile Insider Preview Build 14327 to Windows Insiders in the Fast ring.

Here’s what’s new in Build 14327 for Mobile

Try out the Messaging Everywhere Preview

You can now try out a preview of the "Messaging everywhere" feature in Windows 10, which allows you to send and receive text messages from your phone directly on your Windows 10 PC.

Messaging everywhere UI

To enable this:

  • Make sure you are signed in with your Microsoft Account (MSA).
  • On your phone – make sure “Send texts on all my Windows devices” is turned on.
  • On your PC – make sure you have chosen which phone to send messages through. You do this in the settings of the Messaging app on your PC (screenshot below).

Messaging everywhere (Beta) settings on Mobile; Messaging everywhere (Beta) settings on PC

Try it out and let us know what you think via the Feedback Hub.

NOTE: You may have noticed that Skype integration in the Messaging app in this build of Windows 10 Mobile has been removed. This is so Skype can streamline your experience, replacing the integration with the Skype UWP Preview app for mobile in an upcoming build. In the meantime, you can use the existing Skype for Windows Phone app available in the Windows Store.

Cortana in more languages

With this build, we are enabling Cortana for the Spanish (Mexico), Portuguese (Brazil), and French (Canada) languages on Mobile (previously available in PC builds). If you're running the Windows 10 Mobile Insider Preview in one of these languages, give Cortana a try and let us know what you think. For each new market and language, the Cortana team works to develop a custom experience relevant to that individual market and language. These are early versions that we will continue to improve based on your feedback, and we look forward to hearing more from you.

Voice input is also now available for these languages. Set focus to a text field and tap the microphone above the keyboard to give it a try.

Here’s what’s fixed for Mobile

  • We fixed the issue causing Language and Speech packs to fail to download.
  • We fixed the issue where, if you used the power button to lock and unlock your phone quickly, in some cases the screen would not turn on, the phone became unresponsive, and Windows Hello stopped working.
  • We fixed the issue where, in some cases, users could get into a state where neither the Space nor the Enter key worked on the keyboard.
  • We fixed the issue preventing Facebook Messenger and other apps such as WeChat, Transfer My Data, and UC Browser from launching from Start or All apps.
  • We fixed an issue where diverse emoji couldn’t be deleted from the text box in an interactive notification.
  • We have updated the autocorrection logic so that if you have a word that’s about to be autocorrected, tapping the word you just typed will now stop the autocorrection from happening.
  • We have updated the Glance screen so it will now reflect Ease of Access text scaling.
  • We fixed an issue where tethering over Bluetooth wouldn’t work if Bluetooth had never been turned on before.
  • We fixed an issue where you couldn’t set a sample image as a lock screen background.

Known issues for Mobile

  • UPDATE: We are investigating issues reported by Windows Insiders with downloading Language and Speech packs.
  • Feedback Hub is not localized and the UI will be in English (U.S.) only even with language packs installed.
  • We’re investigating a crash with the Camera app when going into your camera roll.
  • There is an issue in which you may see duplicate apps under All apps showing as pending despite being installed and usable on your phone. You may also see some apps stuck in the Store. To get out of this state, just start and pause a download in the Store and then choose to “resume all” downloads.
  • You may see square boxes in certain apps when using some of the new emoji. We're still adding support for the new emoji throughout the system; this will be resolved in a future build.
  • There is a bug we’re investigating that prevents some apps such as Tweetium from launching.
  • We’re investigating issues which cause mobile data to stop working but show as connected.
  • Glance on/off setting is not respected after updating to a new build. After updating, you can reset this setting to what you had before.

As a reminder – we release builds for Mobile from our Development Branch to the list of devices that will be capable of receiving updates as part of the Windows Insider Program. As we stated previously, only devices which are eligible to receive the Windows 10 Mobile upgrade will be able to get preview builds from the Development Branch going forward.

As always – thank you for being Windows Insiders and make sure to send us feedback on any issues you run into with these builds in the Feedback Hub.

Thanks,
g

Building a more accessible web platform

In February we shared our roadmap for empowering all Microsoft Edge customers through accessibility and inclusive design. Today, we’re excited to share more about Microsoft Edge’s native support for the modern UI Automation accessibility framework, coming with the Windows 10 Anniversary Update. UI Automation enables Windows applications to provide programmatic information about their user interface to assistive technology products such as screen readers, and enables a comprehensive ecosystem.

New Microsoft Edge accessibility architecture

Providing a great experience for users of all abilities requires the right architecture. Over the last year we’ve rethought our approach, and designed an accessibility system which inherently supports modern web standards including HTML5 and CSS3. Let’s take a look at the simplified browser pipeline that follows the webpage content into an accessible presentation layer:

Flowchart showing the simplified browser pipeline. Figure 1. Content transformed to the engine model is projected into visual and accessibility views, which are then rendered as the visual or accessible presentation.


In our previous architecture, accessibility objects were generated from and tightly coupled to the DOM tree; in fact, the accessibility tree was a subset of the DOM tree itself. This presented many engineering challenges when maintaining and evolving the system.

With Microsoft Edge, an accessibility view is now generated from the same engine model used for visual presentation. Our accessible view creates native accessible objects which can be mutated and invalidated by script changes, just like elements in the visual view.

Our new accessibility architecture is available in the latest public flight behind the “Enable experimental accessibility features” flag under about:flags, and assistive technologies can take advantage of this using the “MicrosoftEdge” UIA_FrameworkIdPropertyId.

Try it today and let us know what you think!

Navigating web content faster with document structure and landmark roles

To transform content, styles, and attributes into accessible objects, browsers use the HTML and Core Accessibility API mapping specifications. These define how semantic HTML elements gain ARIA roles and are transformed into accessibility entities, such as UI Automation objects.

Using semantic elements, developers can greatly improve the content navigation experience. Microsoft Edge and Narrator now support document structure and landmark role elements in heading, paragraph and landmark navigation modes.

For example, using the <main> element, developers can provide a hint to assistive technology indicating where the primary content is, making it easier for their users to get to that content quickly from anywhere on the page. This experience lights up in Microsoft Edge with Narrator, and in other browsers with assistive technologies that support those roles.
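
As a minimal sketch (element choices follow the HTML spec; the page content here is invented for illustration), landmark navigation benefits from markup like this:

```html
<nav aria-label="Primary">
  <a href="/news">News</a>
</nav>
<main>
  <h1>Today's headlines</h1>
  <p>Primary content that heading and landmark navigation can jump to directly.</p>
</main>
<footer>About this site</footer>
```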

Getting more information with accessible names and descriptions

Site authors commonly use ARIA attributes and well-known patterns and techniques to enhance the presentation of elements on the page. Browsers then take the information available on an element, including contextual elements nearby, and compute an accessible name and description. Microsoft Edge now supports the computation algorithm, giving users access to the full information available in the markup.
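
For instance (a hypothetical form field; the ids are invented), the computed accessible name comes from aria-labelledby while aria-describedby supplies the description:

```html
<label id="search-label">Search the catalog</label>
<input type="search"
       aria-labelledby="search-label"
       aria-describedby="search-hint">
<span id="search-hint">Use quotes to match an exact phrase.</span>
```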

Better user experience with forms, controls and new semantic elements

Form data entry is a core scenario on the web, and based on user feedback we've greatly improved the form entry experience, including:

  • Improved accessibility of the error validity states
  • Made the <datalist> element accessible
  • Improved accessibility of all the controls that support lists, including <select>
  • Improved the keyboard experience for input types, including the Up and Down keys to change the input type=number value
  • Implemented input type=color, which stores its value in a machine-readable form and now presents the information as human-readable percentages of Red, Green, and Blue.

Code: <input type="color" value="#ff0000" />

Result: UIA Value: “100% Red 0% Green 0% Blue”
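
The percentage conversion is straightforward; here is a rough sketch in Python (the exact rounding Microsoft Edge applies isn't specified here, so simple rounding is assumed):

```python
def uia_color_value(hex_value):
    """Format a #RRGGBB color as the human-readable channel
    percentages described above."""
    r, g, b = (int(hex_value.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    to_pct = lambda channel: round(channel / 255 * 100)
    return "{}% Red {}% Green {}% Blue".format(to_pct(r), to_pct(g), to_pct(b))

print(uia_color_value("#ff0000"))  # 100% Red 0% Green 0% Blue
```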

Improved web legibility in high contrast

Microsoft Edge provides full support for the Windows high contrast themes, including custom user themes. Many modern web sites use background images as foreground content, which doesn’t provide a great experience for users who need increased contrast.

Screen capture showing xbox.com in IE's high contrast mode. The main foreground content is entirely blacked out with only a heading and secondary content elements visible.

Xbox.com in the legacy high contrast mode.

After working closely with visually impaired users and prototyping different solutions, we decided to try something different for high contrast mode in this release. Microsoft Edge will no longer remove background images, and instead will render an opaque layer behind text to improve legibility while maintaining the site design.

These improvements speak for themselves:

Screen capture showing Xbox.com in the updated high contrast mode. Only the area immediately surrounding each text block is blacked out, allowing the page structure and imagery to show through without impairing readability.

Xbox.com in the redesigned high contrast mode.

Alongside these changes, we have retained full developer control over the browser's implicit enhancements via the -ms-high-contrast media feature and the -ms-high-contrast-adjust property.
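
For example, a site can adapt its own styling while a high contrast theme is active, or opt specific elements out of the implicit adjustments (the selectors here are illustrative):

```css
/* Apply alternate styling only while a high contrast theme is active. */
@media (-ms-high-contrast: active) {
  .hero {
    background-image: none;
    border: 1px solid windowText;
  }
}

/* Opt this element out of the browser's implicit high contrast adjustments. */
.brand-logo {
  -ms-high-contrast-adjust: none;
}
```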

These high contrast improvements are available in current Windows Insider releases, enabled by default. Users can disable the “Render backgrounds in high contrast” feature under about:flags to get the previous behavior.

More improvements and platform features

Alongside these platform and legibility improvements, we've also had a chance to add a few user experience features and fixes:

  • Implemented the Web Speech Synthesis API
  • Improved caret browsing on Windows Phone with external keyboards
  • Improved Microsoft Edge frame-to-content focus transitions

Journey towards a more accessible web platform

As we finalize these features on the road to the Windows 10 Anniversary Update, we’ve also begun work on a few additional areas targeting a future release. Below are some of the features we’re currently considering:

  • Improving the UIA TextPattern reading experience based on user feedback
  • Enabling the web platform to take into account the “Text size” slider, available in Ease of access > More options on Windows 10 Mobile
  • Improvements to input controls and scrollbars in high contrast mode
  • Improvements to the text contrast ratios of controls
  • Improvements to touch and keyboard experience with Narrator on the web for desktop and mobile
  • Tweaks to the focus outlines to make finding and tracking elements in the browser easier

In upcoming posts we’ll have more to share about our user experience improvements as measured by HTML5Accessibility.com, as well as our approach to automate accessibility testing for the web.

We’re excited to share the current improvements and look forward to building an even more accessible web with you! As always, we welcome your feedback.

– Bogdan Brinza, Program Management Lead, Microsoft Edge
– Rossen Atanassov, Software Development Lead, Microsoft Edge

Adding Cortana and Bluetooth to a UWP app for Philips Hue

Smart devices are becoming increasingly popular. One great device is the Philips Hue lightbulb – a colorful, internet-connected light you can control with your phone from the comfort of your couch. No longer do you have to get up to toggle a light switch. There are already a number of Windows apps you can download to interact with Hue lights, but we wanted to create a sample that demonstrates how a Universal Windows Platform (UWP) app can interact with them in unique ways.

The full source for our sample is available on GitHub here. Feel free to take a look and, if you have some Hue lights, play around.

The basics of the app are simple: find the bridge on your network and talk to it using Hue’s RESTful interface. We created a small library to simplify this part, which lets you do cool things like turn on all your lights with just a few lines of code:



Bridge bridge = await Bridge.FindAsync();
IEnumerable<Light> lights = await bridge.GetLightsAsync();
foreach (Light light in lights)
{
	light.State.On = true; 
}
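
Under the covers, the library wraps Hue's REST interface: changing a light is just an HTTP PUT of a small JSON body to the bridge. Here is a language-agnostic sketch in Python (the bridge IP and username below are placeholders; a real username is obtained by registering with the bridge first):

```python
import json
import urllib.request

def light_state_request(bridge_ip, username, light_id, on):
    """Build the PUT request the Hue bridge expects for a light state change."""
    url = "http://{}/api/{}/lights/{}/state".format(bridge_ip, username, light_id)
    body = json.dumps({"on": on}).encode("utf-8")
    return urllib.request.Request(url, data=body, method="PUT")

req = light_state_request("192.168.1.2", "example-user", 1, True)
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req)  # actually send it (requires a bridge on the network)
```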


That’s the easy part. With that foundation, we built up the app to explore a few other key scenarios. For this release, we focused on:

  • Cortana
  • Bluetooth LE
  • Extended splash screen

In this post, we’ll talk about the basic steps we took to integrate these three features into our app sample, and point you towards relevant code in the sample that shows how to integrate them into your own app. To see Cortana and the Hue sample in action, check out this video on Channel 9:

Cortana

Cortana provides a way for users to naturally interact with their PCs and phones. They accomplish key tasks such as launching apps, searching the web, and getting a joke of the day. Yet that’s only part of the story. The true value of Cortana is seen when you extend the basic functionality to integrate with your UWP apps. By adding Cortana support to an app, you can create really cool experiences for your users – such as turning on Hue lights or changing their color just with voice.

The first step to integrate Cortana with your app is to create a voice command definition (VCD) file. This file defines the structure of voice commands that Cortana should recognize as related to your app. For the Hue sample, these are commands like changing the color of a specific light or turning a light on or off.



    <Command Name="changeLightsState">
      <Example>Turn the lights on</Example>
      <ListenFor>[Turn] [the] lights {state}</ListenFor>
      <Feedback>Trying to turn the lights {state}</Feedback>
      <VoiceCommandService Target="LightControllerVoiceCommandService" />
    </Command>

    <Command Name="changeLightsColor">
      <Example>Change the lights color</Example>
      <ListenFor>Change [the] [lights] color</ListenFor>
      <VoiceCommandService Target="LightControllerVoiceCommandService" />
    </Command>


You’ll notice that a command specifies a name as well as the background task that should be used when it’s detected. That allows Cortana to route the command to your app and for you to handle it appropriately. You can learn more about the XML schema on MSDN in the article Voice Command Definition (VCD) elements and attributes and see our full VCD file here on GitHub,  but at a high level the schema is made up of VoiceCommands and PhraseLists.
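
The {state} placeholder in the ListenFor element above refers to a PhraseList defined elsewhere in the CommandSet. A minimal sketch of what one looks like (the items shown are illustrative):

```xml
<PhraseList Label="state">
  <Item>on</Item>
  <Item>off</Item>
</PhraseList>
```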

If you look at our full VCD file for the Hue sample, you’ll notice that while all the commands you’d expect are listed, the PhraseLists aren’t quite complete. The “name” list, for example, is empty and no colors are specified. That’s because we wanted to dynamically create those lists, ensuring that the names displayed in our UI matched the names of the lights enumerated on the network and that the full set of system colors was available for specification. To accomplish that, while our extended splash screen was showing, we dynamically modified the VCD file before registering it with the OS as seen in the InitalizeCortanaAsync() method below.



/// <summary>
/// Prepares Cortana for background use.
/// </summary>
private async Task InitalizeCortanaAsync()
{
	// You can't write to application files by default, so we need to create a
	// secondary VCD file to dynamically write Cortana commands to.
	StorageFile dynamicFile = await ApplicationData.Current.RoamingFolder.CreateFileAsync(
		"VoiceCommands.xml", CreationCollisionOption.ReplaceExisting);

	// Load the base file and parse the PhraseList we want from it.
	StorageFile baseFile = await StorageFile.GetFileFromApplicationUriAsync(
		new Uri("ms-appx:///VoiceCommands.xml"));
	XDocument xml = XDocument.Load(baseFile.Path);
	XElement state = xml.Descendants().First(x => x.Name.LocalName == "PhraseList" && null != x.Attribute("Label") && x.Attribute("Label").Value == "state");

	// An HsbColor is an RGB- and HSV-compliant representation of a system color.
	// HsbColor.CreateAll() returns an HsbColor for every system color available to UWP apps.
	// Add each color to the list of phrases Cortana knows.
	foreach (HsbColor color in HsbColor.CreateAll())
	{
		state.Add(new XElement("Item", color.Name));
	}

	// Add the light names.
	XElement names = xml.Descendants().First(x => x.Name.LocalName == "PhraseList" && null != x.Attribute("Label") && x.Attribute("Label").Value == "name");
	foreach (Light light in _lights)
	{
		names.Add(new XElement("Item", light.Name));
	}

	// Save the file, and then load it so Cortana recognizes it.
	using (Stream stream = await dynamicFile.OpenStreamForWriteAsync())
	{
		xml.Save(stream);
	}
	await VoiceCommandDefinitionManager.InstallCommandDefinitionsFromStorageFileAsync(dynamicFile);
}


Now that the commands and phrases are defined and registered, all you need to do to finish integrating Cortana support is to create a background task to handle the commands. There are some complexities here if you want to handle the disambiguation of responses or query the user for additional information, but it’s all generally straightforward – parse the user’s speech, then send the appropriate command to the Hue lights.

The following code shows the Run method of the background task and how it determines the Cortana command that was spoken by the user.



        /// <summary>
        /// Entry point for the background task.
        /// </summary>
        public async void Run(IBackgroundTaskInstance taskInstance)
        {
            var triggerDetails = taskInstance.TriggerDetails as AppServiceTriggerDetails;
            if (null != triggerDetails && triggerDetails.Name == "LightControllerVoiceCommandService")
            {
                BackgroundTaskDeferral _deferral = taskInstance.GetDeferral();
                taskInstance.Canceled += (s, e) => _deferral.Complete();
                if (true != await InitalizeAsync(triggerDetails))
                {
                    return;
                }
                // These command phrases are coded in the VoiceCommands.xml file.
                switch (_voiceCommand.CommandName)
                {
                    case "changeLightsState": await ChangeLightStateAsync(); break;
                    case "changeLightsColor": await SelectColorAsync(); break;
                    case "changeLightStateByName": await ChangeSpecificLightStateAsync(); break;
                    default: await _voiceServiceConnection.RequestAppLaunchAsync(
                        CreateCortanaResponse("Launching HueLightController")); break;
                }
                // Keep alive for 1 second to ensure all HTTP requests are sent.
                await Task.Delay(1000);
                _deferral.Complete();
            }
        }


Notice that a BackgroundTaskDeferral is created, since our background task for handling Cortana commands runs asynchronous methods. Without the deferral, it is possible for the Run method to return before the background task has completed its work, which might result in the suspension or termination of the background task host process and prevent the completion of any asynchronous operations started by the background task. For more details about deferrals, see the IBackgroundTaskInstance.GetDeferral method documentation on MSDN.

The majority of the commands we implemented were rather simple, but we did choose to have one command that requires Cortana to prompt the user for additional information. When a user asks Cortana to change the lights' color but doesn't specify one, we wanted Cortana to suggest some colors. That was accomplished through a request for disambiguation, as shown in the following code.



        /// <summary>
        /// Handles an interaction with Cortana where the user selects
        /// from randomly chosen colors to change the lights to.
        /// </summary>
        private async Task SelectColorAsync()
        {
            var userPrompt = new VoiceCommandUserMessage();
            userPrompt.DisplayMessage = userPrompt.SpokenMessage =
                "Here's some colors you can choose from.";

            var userReprompt = new VoiceCommandUserMessage();
            userReprompt.DisplayMessage = userReprompt.SpokenMessage =
                "Sorry, didn't catch that. What color would you like to use?";

            // Randomly select 6 colors for Cortana to show
            var random = new Random();
            var colorContentTiles = _colors.Select(x => new VoiceCommandContentTile
            {
                ContentTileType = VoiceCommandContentTileType.TitleOnly,
                Title = x.Value.Name
            }).OrderBy(x => random.Next()).Take(6);

            var colorResponse = VoiceCommandResponse.CreateResponseForPrompt(
                userPrompt, userReprompt, colorContentTiles);
            var disambiguationResult = await
                _voiceServiceConnection.RequestDisambiguationAsync(colorResponse);
            if (null != disambiguationResult)
            {
                var selectedColor = disambiguationResult.SelectedItem.Title;
                foreach (Light light in _lights)
                {
                    await ExecutePhrase(light, selectedColor);
                    await Task.Delay(500);
                }
                var response = CreateCortanaResponse($"Turned your lights {selectedColor}.");
                await _voiceServiceConnection.ReportSuccessAsync(response);
            }
        }


And really that’s all there is to it. Those are the basics of implementing Cortana support in an app.

Bluetooth LE

Manipulating the lights with UI controls is a vast improvement over physical switches, but we also wanted to explore ways to control the lights using proximity: when the user comes within reasonable range of the lights with their phone, the lights should automatically turn on; when they leave, the lights turn off. To achieve this effect, we decided to use Bluetooth Low Energy (LE) because it's power-friendly and can easily run in the background.

Bluetooth LE is based around publisher and watcher objects. A publisher constantly broadcasts signals for nearby listening watchers to receive. A watcher, on the other hand, listens for nearby publishers and fires events when it receives a Bluetooth LE advertisement so that the app can react accordingly. In our case, we presume there’s a device acting as a publisher in close proximity to the Hue bridge. The app (running on a phone in the user’s pocket) assumes the role of the watcher.

Sound interesting? To get a Bluetooth watcher set up, first you'll need to register a background task that listens for a BluetoothLEAdvertisementWatcherTrigger.



private IBackgroundTaskRegistration _taskRegistration;
private BluetoothLEAdvertisementWatcherTrigger _trigger;
private const string _taskName = "HueBackgroundTask";
private async Task EnableWatcherAsync()
{
	_trigger = new BluetoothLEAdvertisementWatcherTrigger();

	BackgroundAccessStatus backgroundAccessStatus =
		await BackgroundExecutionManager.RequestAccessAsync();
	var builder = new BackgroundTaskBuilder()
	{
		Name = _taskName,
		TaskEntryPoint = "BackgroundTasks.AdvertisementWatcherTask"
	};
	builder.SetTrigger(_trigger);
	builder.AddCondition(new SystemCondition(SystemConditionType.InternetAvailable));
	_taskRegistration = builder.Register();
}


You can configure this trigger to fire only when an advertisement with specific publisher and signal strength information is received; that way, you don’t have to worry about handling signals intended for other apps.




// Add manufacturer data.
var manufacturerData = new BluetoothLEManufacturerData();
manufacturerData.CompanyId = 0xFFFE;
DataWriter writer = new DataWriter();
writer.WriteUInt16(0x1234);
manufacturerData.Data = writer.DetachBuffer();
_trigger.AdvertisementFilter.Advertisement.ManufacturerData.Add(manufacturerData);

// Add signal strength filters and sampling interval.
_trigger.SignalStrengthFilter.InRangeThresholdInDBm = -65;
_trigger.SignalStrengthFilter.OutOfRangeThresholdInDBm = -70;
_trigger.SignalStrengthFilter.OutOfRangeTimeout = TimeSpan.FromSeconds(2);
_trigger.SignalStrengthFilter.SamplingInterval = TimeSpan.FromSeconds(1);


Next, you’ll need to add the background task to the app’s manifest file. As with other background processes, this step is required for UWP apps to execute code when the user doesn’t have the app on screen. For more information on registering background tasks in the manifest, see Create and register a background task.

The final step is to create the actual background task entry point. This code, which lives in a separate Windows Runtime Component within the solution, contains the method that fires when a Bluetooth LE beacon is received. In the Hue sample’s case, that entails checking whether the user is moving closer to the publisher (signal strength increasing, user is coming home – turn on the lights!) or leaving range (signal strength dropping off – turn off the lights).



private IBackgroundTaskInstance backgroundTaskInstance;
private BackgroundTaskDeferral _deferral;
private Bridge _bridge;
private IEnumerable<Light> _lights;

public async void Run(IBackgroundTaskInstance taskInstance)
{
	backgroundTaskInstance = taskInstance;
	var details = taskInstance.TriggerDetails as BluetoothLEAdvertisementWatcherTriggerDetails;
	if (details != null)
	{
		_deferral = backgroundTaskInstance.GetDeferral();
		taskInstance.Canceled += (s, e) => _deferral.Complete();

		var localStorage = ApplicationData.Current.LocalSettings.Values;
		_bridge = new Bridge(localStorage["bridgeIp"].ToString(), localStorage["userId"].ToString());
		try
		{
			_lights = await _bridge.GetLightsAsync();
		}
		catch (Exception)
		{
			_deferral.Complete();
			return;
		}
		foreach (var item in details.Advertisements)
		{
			Debug.WriteLine(item.RawSignalStrengthInDBm);
		}

		// -127 is a BTLE magic number that indicates out of range. If we hit this,
		// turn off the lights. Send the command regardless of whether they are on or off,
		// just to be safe, since it will only be sent once.
		if (details.Advertisements.Any(x => x.RawSignalStrengthInDBm == -127))
		{
			foreach (Light light in _lights)
			{
				light.State.On = false;
				await Task.Delay(250);
			}
		}
		// If there is no magic number, we are in range. Toggle any lights reporting
		// as off to on. Do not spam the command to lights already on.
		else
		{
			foreach (Light light in _lights.Where(x => !x.State.On))
			{
				light.State.On = true;
				await Task.Delay(250);
			}
		}
		// Wait 1 second before exiting to ensure all HTTP requests have been sent.
		await Task.Delay(1000);
		_deferral.Complete();
	}
}


Now, the next time the user fires up the app and enables the background watcher, it’s ready to start listening and can control the Hue lights accordingly.

Extended splash screen

Windows provides a default splash screen that works for most scenarios. However, if your app needs to perform long-running initialization tasks before it shows its main page (like searching for Hue lightbulbs on the network), users might get annoyed staring at a static image for too long (or worse, think that your app is frozen). To counteract this problem, developers can choose to show a customizable extended splash screen after the general app splash screen, where you can display anything from a basic progress wheel to intricate loading animations.

The Hue sample doesn't need much extra time, so we chose something simple, but the framework is the same either way. When the app starts, it first displays the regular splash screen. After a few seconds, instead of taking the user to a page with controls or text, it navigates to an intermediate empty page containing a copy of the default splash screen image overlaid with a spinning progress ring. This extended splash screen continues to display until all the initialization code has finished running, at which point the user is taken to the main light controls. While this behavior is fairly simple, it makes the app feel more responsive.

Getting started with an extended splash screen in your app begins with the creation of a blank page containing only a XAML canvas to hold your splash image. For a smooth transition, we kept this image the same as the app’s standard splash image, but it doesn’t have to be.



    <Canvas Grid.Row="0" Grid.RowSpan="2" x:Name="SplashCanvas" Background="White">
        <Image x:Name="extendedSplashImage" Source="Assets/splash.png"/>
    </Canvas>


Once you have your page, you need to modify the code-behind to prepare the splash screen and kick off your long-running initialization tasks (in our case, that meant finding the Hue bridge on the network, connecting to it, and then searching for lightbulbs). We also chose to include a bit of extra code to make sure the splash screen looks good on both phone and PC, and that it responds to device orientation.

To see our implementation, take a look on GitHub here.

The last step to hook things up is to modify the app’s App.xaml.cs file so that it navigates to the splash screen when the app starts instead of the MainPage.



        protected override void OnLaunched(LaunchActivatedEventArgs e)
        {
            var initializer = new Initializer(e.SplashScreen);
            Window.Current.Content = initializer;
            Window.Current.Activate();
        }


You can get quite fancy with extended splash screens – we only covered the basics – but even a simple extended splash screen provides a massively improved experience and helps keep impatient users happy.

Additional resources

If you found this app interesting, we’ve got similar app samples out there you might want to check out:

  • TrafficApp – A traffic monitor sample app demonstrating maps and location
  • RSSReader – An RSS aggregator demonstrating basic MVVM usage
  • QuizGame – A peer-to-peer trivia game demonstrating networking and discovery APIs
  • BluetoothAdvertisement – API sample demonstrating sending and receiving Bluetooth Low Energy advertisements
  • CortanaVoiceCommand – API sample showing how to integrate Cortana into an app

Written by Joshua Partlow, Alexander Koren, and Lauren Hughes from the Windows Developer Docs team