Tag Archives: XBOX

Announcing Project Rome iOS SDK

Project Rome is a platform for enabling seamless cross-device and cross-platform experiences. The philosophy behind Project Rome is simple. App experiences shouldn’t be tied to a single device any more than data should be tied to a single device. Your apps, like your data, should travel with you.

Previously, this meant you could switch between Windows devices while maintaining a single user experience. A few months ago, Project Rome features were extended to the Android platform, allowing you to start an app session on your Android phone and continue it on a Windows PC, an Xbox One or even a Surface Hub.

Now, Project Rome support is also being extended to the iOS platform. You can download the Project Rome SDK for iOS here.

Revisiting the Contoso music app

If you have been following the evolution of Project Rome, you’ll be familiar with our developer friend Paul and his example Contoso Music app. Paul was originally introduced in a blog post on Cross-device experiences to help us understand a typical Project Rome scenario.

He expanded his UWP music streaming app to run across multiple Windows devices tied to the same Microsoft Account (MSA). Using Project Rome, Paul changed how his app worked so that a user streaming a song on a Windows PC could transfer that song to an Xbox One and then, when heading out for a run, transfer the current playlist to a Windows Phone.

In the subsequent post, Paul developed an Android version of the Contoso Music app and used the Project Rome Android SDK to allow a user to start playing a song on an Android phone and continue playing it on a Windows device when they got home. The Contoso Music app was now cross-platform, transferring smoothly from one platform to the next.

Extending to iOS

Let’s imagine that based on the success of his Windows and Android versions, Paul develops an iOS version of Contoso Music. When examining his telemetry after a few months, Paul sees that the iOS app is doing well, just like his Windows and Android versions. However, there is a common theme in the user feedback: users find switching between devices cumbersome. So Paul wants to enable a scenario in which a user can listen to music on an iPhone over headphones, then walk into the living room and immediately switch to playing the same music over an Xbox One connected to quality speakers.

With the Project Rome iOS SDK, Paul can create a bridge between iOS devices and Windows devices in two stages:

  • The RemoteSystems API allows the Contoso Music app to discover Windows devices the user owns, either on the same network or through the cloud.
  • Once a device is discovered, the RemoteLauncher API can launch the Contoso Music app on that Windows device.

How Paul gets it done

In order for Paul’s user to switch from playing music on an iOS device to a Windows device, his app must first find out about the user’s other devices. This requires using MSA OAuth to get permission to query for devices and then initiating discovery, as shown in the code below.


// Asynchronously initialize the Rome Platform.
// Pass in self, as this class implements the CDOAuthCodeProviderDelegate protocol.
[CDPlatform startWithOAuthCodeProviderDelegate:self completion:^(NSError* clientError) {
    if (clientError)
    {
        // Handle error
        return;
    }

    // Handle success, show discovery screen
}];

// Implementation of CDOAuthCodeProviderDelegate.
// The Rome SDK calls this delegate method when it needs an OAuth access code from the application.
- (NSError*)getAccessCode:(NSString*)signinUrl completion:(void (^)(NSError* error, NSString* accessCode))completion {
    // Stash away the callback the SDK gives us
    _getTokenCallback = completion;

    // Show the interactive OAuth web view flow.
    // Once the OAuth flow completes or fails, invoke this callback.
    ...

    // Return nil as there was no error
    return nil;
}

Once initialized, Paul’s app can discover all devices in the user’s MSA device graph by initiating discovery using CDRemoteSystemDiscoveryManager. Information about discovered devices is surfaced through the CDRemoteSystemDiscoveryManagerDelegate protocol. In our example, we store each discovered device in an NSMutableArray property called discoveredSystems.


// Create an instance and pass 'self' as the delegate, as it implements CDRemoteSystemDiscoveryManagerDelegate.
CDRemoteSystemDiscoveryManager* remoteSystemDiscoveryManager = [[CDRemoteSystemDiscoveryManager alloc] initWithDelegate:self];

// Start discovery.
[remoteSystemDiscoveryManager startDiscovery];

// CDRemoteSystemDiscoveryManagerDelegate implementation
- (void)remoteSystemDiscoveryManager:
            (CDRemoteSystemDiscoveryManager*)discoveryManager
                              didFind:(CDRemoteSystem*)remoteSystem {
  @synchronized(self) {
    [self.discoveredSystems addObject:remoteSystem];
    // Refresh UI based upon updated state in discoveredSystems, e.g. populate the table
  }
}

- (void)remoteSystemDiscoveryManager:
            (CDRemoteSystemDiscoveryManager*)discoveryManager
                           didUpdate:(CDRemoteSystem*)remoteSystem {
  NSString* id = remoteSystem.id;

// Loop through and update the Remote System instance if previously seen.
  @synchronized(self) {
    for (unsigned i = 0; i < self.discoveredSystems.count; i++) {
      CDRemoteSystem* currentRemoteSystem =
          [self.discoveredSystems objectAtIndex:i];
      NSString* currentId = currentRemoteSystem.id;

      if ([currentId isEqualToString:id]) {
        [self.discoveredSystems replaceObjectAtIndex:i withObject:remoteSystem];
        break;
      }
    }

    // Refresh UI based upon updated state in discoveredSystems, e.g. populate the table
  }
}

The user can now select the device he wants to transfer music to from the list of devices that have been discovered. From the selected CDRemoteSystem, an instance of CDRemoteSystemConnectionRequest is created. Using CDRemoteLauncher, Paul can then remotely launch the app on the selected device, including any additional contextual information it needs, such as the song currently playing.

Here’s how to remote-launch http://www.bing.com on the selected device:


// Create a connection request using the CDRemoteSystem instance selected by the user
// (one of the systems added to discoveredSystems during discovery above).
CDRemoteSystemConnectionRequest* request =
    [[CDRemoteSystemConnectionRequest alloc] initWithRemoteSystem:system];

NSString* url = @"http://www.bing.com";

[CDRemoteLauncher launchUri:url
                withRequest:request
             withCompletion:^(CDRemoteLauncherUriStatus status) {
                 // Update UI based on the launch status
             }];

Voila! Paul has easily augmented his app with cross-device support for iOS.

Wrapping up

Project Rome breaks down barriers by changing notions about what an “app” is and focusing on the user no matter where they are working or what device they are using. An app no longer necessarily means something tied to a given device; instead, it can be something that exists across your devices and is optimized for the right device at the right time. Today, Project Rome works on Windows 10, Android and iOS. Stay tuned to see what comes next.

To learn more about Project Rome, check out the links below.


Windows Store: more options to manage, monetize and promote apps

At Build 2017 last week, we announced new Windows Store capabilities that help you reach more customers, improve your productivity, and promote and monetize your apps and games, including:

  • Offering your games to Xbox One users
  • Updating your Store listings faster via import/export
  • Releasing new games or apps using private beta, targeting a limited audience
  • Navigating Dev Center faster through an updated dashboard experience
  • Enabling more users to acquire apps via one-click download with no Microsoft account login
  • Offering more engaging Store listings with video trailers
  • Monetizing via recurring billing using in-app purchase subscriptions
  • Offering discounts only to some user segments, or only to users of your other apps or games
  • Analyzing your app performance more effectively, through funnel analysis and crash analytics
  • Earning more revenue from ads through more advertising formats

To learn more, I recommend viewing the Build session Windows Store: manage and promote apps your way (B8098), and reading this blog post.

More opportunity for your apps and games

Your UWP apps and games can run on any Windows 10 device, so you can reach hundreds of millions of users with a single app. The Store helps you grow that opportunity with several new capabilities to reach more customers, acquire new users and increase revenue from those users. View the Build 2017 session Tips and Tricks for Successful Store Apps (B8102) to learn how to best use these new capabilities.

Increase your revenue through in-app advertising (New). Advertising is one of the primary monetization models for many publishers, and the Store now offers several new ad experiences that bring better yield and higher fill rates for ads in UWP apps: interstitial banner ads, playable ads and native ads (beta), in addition to the existing banner and video ads. To learn more, view Build session A quick lap around Microsoft Monetization Platform (P4112).

Example of a playable ad running in a UWP game

Promote your apps, and drive re-engagement using ad campaigns (New). Dev Center offers the ability to acquire new users in several ways: promotional codes, targeted offers and ad campaigns. Creating an ad campaign takes only a few clicks, and campaigns now support interstitial banner, native and playable ads (beta). These ad campaigns are shown to users in other apps, as well as on Microsoft properties such as MSN.com, Skype and Outlook. To learn more, view the Build session User acquisition through Ads (session P4154).

Acquire more customers through one-click download, and buy Xbox games on PC (New). The Store has enabled faster and simpler app acquisition by letting customers acquire free apps or games (with an age rating of 13 years or lower) with one click, without requiring the user to sign in with their Microsoft account. In addition, customers can now purchase Xbox games directly from the PC Store. These new options help grow the number of users that download your app or game.

Distribute UWP games on Xbox One, engaging with hundreds of millions of Xbox One users and more than 50 million Xbox Live accounts (Coming soon). Dev Center already allows any developer to publish apps not categorized as games to Xbox One. Developers can now join the new Xbox Live Creators Program to submit games to Xbox One, with fast certification, no cost and no friction. You can start developing and testing your Xbox Live enabled games today, and you’ll be able to publish games for Xbox One this summer. View the Xbox Live Creators Program Build session (P4159) to learn more.

Offer in-app purchase subscriptions (Coming soon). Apps can be configured to include in-app subscriptions, that is, services sold in-app that bill on a recurring basis (1/3/6/12/24-month renewal periods), with or without a free trial. In-app subscription capability is currently in preview with a few publishers and will be available to all developers this summer. Follow the Building apps for Windows blog for more announcements.
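
If you want to get a head start, subscription add-ons are expected to flow through the same Windows.Services.Store purchase path used for other add-on types. The snippet below is a minimal sketch rather than official sample code: the Store ID is a placeholder, and the exact behavior for subscription products may change while the feature is in preview.

using System.Threading.Tasks;
using Windows.Services.Store;

public static class SubscriptionPurchase
{
    // Hypothetical sketch: purchasing a subscription add-on via Windows.Services.Store.
    // "9NBLGGH4XXXX" is a placeholder Store ID, not a real product.
    public static async Task<bool> PurchaseAsync()
    {
        StoreContext context = StoreContext.GetDefault();

        // Must be called from the UI thread, since it can show purchase UI.
        StorePurchaseResult result = await context.RequestPurchaseAsync("9NBLGGH4XXXX");

        return result.Status == StorePurchaseStatus.Succeeded ||
               result.Status == StorePurchaseStatus.AlreadyPurchased;
    }
}

The same RequestPurchaseAsync call already handles durable and consumable add-ons today, so adopting subscriptions should mostly be a matter of configuring the new add-on type in Dev Center once it becomes available.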

In addition to these features, remember that you can offer your app or game to businesses or education institutions through the Microsoft Store for Business that offers a redesigned private Store experience for companies. You can also take your existing Win32 app or game and offer it through the Windows Store using the Desktop Bridge.

Dev Center experience redesign

More modern and efficient dashboard experience (Dev Center Insiders). The Dev Center dashboard has been redesigned based on your feedback to help you be more productive. The new dashboard integrates with the Office, Cortana and Groove programs. It has a clean new interface, beautiful analytics, new account-level pages, an integrated app picker and streamlined program switching. These are a few of the things that make the new dashboard more useful, particularly for accounts with multiple apps, games or programs. Try it out today by joining the Dev Center Insider Program.

Startup guide for the new dashboard experience

Invite users outside of your organization to collaborate on your apps and games (New). We now support inviting users outside your company to contribute to the projects in your account. This makes collaboration and partnerships across companies and users easier than ever. Your account users are governed by the same roles and permissions that you apply to users in your AAD tenant, ensuring that you remain in full control.


Reaching more customers

To help customers find your app or game, and then increase the probability they will download it, the Store has added new search filters, ways to make your Store listing more engaging, and ways to update your Store listings in bulk, streamlining updates across multiple languages.

Help customers find your apps or games with new search capabilities (Rollout starting). Starting today, we’re rolling out the option to indicate if your app uses Cortana, Ink or Continuum, or if your games offer mixed reality, 4K, HDR, multi-player, co-op, or shared split screen. Indicate on the Properties page of your submission whether your app or game supports these capabilities, and this summer customers will be able to filter their searches to show only apps or games that support the capabilities they are looking for.

Search filters that will show up in the Store later in the summer

Create more engaging Store listings with video trailers (Rollout starting). Many of you have told us that video trailers are one of the best ways to attract customers. After piloting the feature earlier this year, today we’re beginning to roll out the ability to upload trailers to use in your Store listing, and all accounts should have access within a few months. We’ve made a few other updates to the types of images you can provide for a great Store listing, including 4K high-resolution assets.

Create and update Store listings faster with Import/Export (Coming soon). Creating and updating Store listings takes many clicks per language and can take hours for a submission with listings in many languages. Dev Center allows you to import and export your listings, so you can make changes offline in bulk and then upload all your listing details at once, rather than having to manually input each string and image for each language. We’ll be rolling this feature out to all accounts over the next few months.

Submission page showing progress using import/export Store listings

Planning your release

Once you have created your app submission and designed an engaging Store listing, you’ll have to plan the release. The Store supports several visibility options, including a release only accessible through promotional codes, a hidden release (public but not searchable), a full public release, or flighting different packages of your published app to specific groups of people. Flighting is widely used, with more than 30,000 package flights created so far. We’re adding options to let you release private betas and to schedule a release very precisely.

Release a submission at a precise date and time (Rollout starting). Dev Center previously let you define when a submission would start to be published, but didn’t let you control exactly when the submission would go live. You can now specify a precise date and time, in UTC or local time, during your submission. We are beginning to roll out the new Schedule options today, and all accounts should have access within a few months.

Offer a private beta (Coming soon). Soon you’ll be able to publish a new app or game that is only visible to people in a specific group that you define; anyone who’s not in your beta group won’t be able to see the Store listing page or download the product. This feature is being used by selected preview apps, and we will be releasing this feature to all developers within the next few months.

Remember that you can also use the Store service APIs to streamline your app management. There are APIs to submit and release apps, games and add-ons, manage package flights, access analytics information, read and respond to reviews and run ad campaigns.
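
As a rough illustration of what calling these REST APIs looks like, the sketch below lists the apps registered to a Dev Center account. The host and route reflect the documented Store API at the time of writing, and the Azure AD token acquisition is omitted, so treat the details as assumptions to verify against the current documentation.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class DevCenterApi
{
    // Rough sketch: calling the Store submission/analytics REST API with an Azure AD token.
    // The endpoint below is an assumption based on the documented API; verify before use.
    public static async Task<string> ListApplicationsAsync(string azureAdAccessToken)
    {
        using (var client = new HttpClient())
        {
            client.BaseAddress = new Uri("https://manage.devcenter.microsoft.com/");
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", azureAdAccessToken);

            // Returns the account's apps as JSON; other routes cover submissions, flights and analytics.
            HttpResponseMessage response = await client.GetAsync("v1.0/my/applications");
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}

The access token is obtained through an Azure AD client credentials flow against the tenant associated with your Dev Center account.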

Optimizing pricing and configuring deals and sales

Once your apps and games are published to the Store, you may want to adjust your app or add-on price, grow your customer engagement or offer sales to attract more customers.

Create new types of customer segments (New). Several features in Dev Center support segments, including sales, targeted offers, notifications and analytics. You can now create segments based on market, purchase amount and whether or not a user has rated the app. Coming soon, you’ll be able to use a predictive churn model to create segments of users who are predicted to stop using your app (churn), so you can take a preventive approach.

Show content or pricing targeted to a specific segment (New). The new targeted offers feature lets you target specific segments of your customers with attractive, personalized content inside the app to increase engagement, retention and monetization. An example is discounting certain in-app products for customers who are first-time payers. For more info and a demo of how to use this feature, view the Build session Maximizing user engagement with customized app content (P4116), or read more.

Control your price more precisely and schedule changes (Rollout starting). You can precisely schedule price changes for customers on Windows 10 over time, using the currency that makes sense to you, to occur globally or for specific markets. Rollout of this feature starts today and will finish within a few months.

Increase customer acquisition and engagement with more flexible sale pricing (Rollout starting). We’ve added more options to sale pricing to let you configure discounts by percentage (e.g. 40% off), offer discounts to customers that own one of your other apps (such as “50% off if you own this other game”), target a discount to a segment of users (e.g. offer discount to users that have not rated the game) and even use a new segment of customers that have never purchased anything in the Store. When a customer makes their first purchase, we’ve found that they typically continue to purchase more items in that initial app or game, as well as in other products in the Store. Rollout of advanced sale pricing starts today, and all accounts should have access to these features by summer. Note that when you offer a discount to a segment of your customers, you can also use our targeted notifications feature to alert those customers about the discount. Watch the Build 2017 session Maximizing revenue through advanced pricing, sales and scheduling (P4116) to learn more.

Dev Center configuration 

How sale pricing appears in the Store

View all possible price tiers in Excel (Rollout starting). While adjusting prices, many of you have asked for an easier way to view the price tiers in all currencies. The Pricing and availability page now offers the option to download the price table in .csv (editable in Excel). Rollout starts today, and all accounts should have access to download the price table in a few months.

Improving analysis

Once your app is published and live, you’ll want to analyze its performance, to help adjust your listing or app to improve acquisitions, satisfaction or engagement. Our new analytics capabilities let you analyze multiple apps more effectively, identify areas of improvement in the conversion funnel, improve debugging and in general find patterns and trends to improve your app.

View analytics for multiple apps, using a modern design (Dev Center Insiders). Along with the release of the new dashboard experience, we have refreshed and enhanced our analytics features to bring you better insights. The new Analytics Overview quickly summarizes key reports like Acquisitions, Usage, Installs and Health, and you can select up to 5 apps to view at one time. You can get an early look at this new design by joining the Dev Center Insider Program.

Analyze your customer conversion funnel (Dev Center Insiders). The acquisition funnel shows the total number of customers that complete each stage of the funnel, from viewing the Store page to using the app, along with the conversion rate. You can filter by customer demographics and custom campaigns to compare campaign effectiveness. The report covers Windows 10 customers over the last 90 days, and the Page views stage also includes views from people who are not signed in with a Microsoft account. Try it out now by joining the Dev Center Insider Program.

Automatically receive alerts when there are anomalies in acquisition trends (Dev Center Insiders). It’s often easy to miss significant changes. To help you monitor data changes, you’ll get an email alerting you when we detect a significant trend change in your acquisitions. We’ll also include your app’s average rating for the last 10 days so you can see if it has been impacted. You can then use the Health and other reports to identify urgent fixes to address, or respond to reviews to help drive your ratings back up. To receive these emails now, join the Dev Center Insider Program.

Debug your apps more effectively. Analyzing crashes and debugging apps is critical for improving the performance and quality of your apps and games. Today, the Health report lets you pinpoint which OS and app version configurations generate the most crashes, and links to failure details with individual stack traces. This summer we’ll roll out the ability to download CAB files for crashes that occur on devices that participate in the Windows Insider Program.

Understand usage and analyze by cohorts (Coming soon). The Usage report helps you understand how often and how long users are using an app, and measures interactive engagement time across active users and active devices, using industry-standard DAU/MAU metrics and retention. The report will soon include cohort analytics to help you understand usage drop-off over time. Join the Dev Center Insider program to be ready to use this analysis when it rolls out to all accounts within the next few months.

What comes next?

We hope you’ll take advantage of these resources and learn more about the capabilities described in this blog post.

Keep giving us feedback to help us prioritize features and updates. Use the feedback link in Dev Center, which you’ll now find in the upper right of the dashboard (if you’re using the new dashboard experience as part of the Dev Center Insider Program).


Cortana Skills Kit empowers developers to build intelligent experiences for millions of users

Today, we are pleased to announce the public preview of the Cortana Skills Kit which allows developers to easily create intelligent, personalized experiences for Cortana.

Our vision for Cortana has always been to create a digital personal assistant that’s available to users across all their devices, whenever and wherever they may need an extra hand to be more productive and get things done. With the new Cortana Skills Kit, developers can join in delivering that vision and reach millions of Cortana users across platforms including Windows 10, Android, iOS and soon on even more devices and form factors — like Xbox, the Harman Kardon Invoke smart speaker and inside cars and mixed reality devices.

To build a Cortana skill, developers can create their bot’s conversational logic using the Microsoft Bot Framework, and publish it to the new Cortana Channel within the Bot Framework, bringing speech capabilities to skills. Developers can understand users’ natural input and build custom machine-learned language models through LUIS.ai, and add intelligence with the power of Cognitive Services.
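
To make that concrete, the sketch below shows a minimal Bot Framework (v3) C# dialog of the kind that could back a Cortana skill. It is illustrative only: the LUIS wiring and the skill registration in the Cortana channel are omitted, and the Speak and InputHint properties are the pieces the Cortana channel uses to produce spoken output.

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

// Minimal sketch of a dialog that could back a Cortana skill.
// A real skill would typically route the incoming text through a LUIS model first.
[Serializable]
public class GreetingDialog : IDialog<object>
{
    public Task StartAsync(IDialogContext context)
    {
        context.Wait(MessageReceivedAsync);
        return Task.CompletedTask;
    }

    private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> result)
    {
        IMessageActivity message = await result;

        // Build a reply that Cortana both displays and speaks.
        IMessageActivity reply = context.MakeMessage();
        reply.Text = $"You said: {message.Text}";
        reply.Speak = $"You said {message.Text}";
        reply.InputHint = InputHints.AcceptingInput;

        await context.PostAsync(reply);
        context.Wait(MessageReceivedAsync);
    }
}

Connecting the bot to the Cortana channel in the Bot Framework portal is what turns it into a skill; the conversational code itself does not change.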

Cortana has rich knowledge of and context about the user. With the Skills Kit, developers can now access that knowledge and build highly relevant, personalized experiences based on the user’s preferences and context. Cortana only shares this information with the user’s consent.

We realize that we are at the dawn of building conversational experiences for end users. Developers want to reach a large and diverse set of users to understand user needs and behaviors. There are over 145M monthly active users of Cortana worldwide. With the Cortana Skills Kit, developers can immediately reach the 60M users in the US and grow their international reach in the future*. To start building skills today, please visit https://developer.microsoft.com/en-us/Cortana.

We are also excited to announce a wide range of partners who have joined us on this journey and are building Cortana skills. Cortana users will be able to access skills from OpenTable, Expedia, Capital One, StubHub, Food Network, HP, iHeartRadio, Dominos, TuneIn, Uber, Knowmail, MovieTickets.com, Tact, Skyscanner, Fresh Digital, Gigskr, Gupshup, The Motley Fool, Mybuddy, Patron, Porch, Razorfish, StarFish Mint, Talklocal, UPS, WebMD, Pylon, BigOven, CityFalcon, DarkSky, Elokence, BLT Robotics, Wed Guild, AI Games, XAPP Media, GameOn, MegaSuperWeb, Verge and Vokkal.co.

To learn more and discover the currently available skills visit: https://www.microsoft.com/en-us/windows/cortana/cortana-skills/

*Available in US only. Other markets will be added over time.


ICYMI – Your weekly TL;DR

Busy weekend of coding ahead? Get the latest from this week in Windows Developer before you go heads-down.

Standard C++ and the Windows Runtime (C++/WinRT)

The Windows Runtime (WinRT) is the technology that powers the Universal Windows Platform, letting developers write applications that are common to all Windows devices, from Xbox to PCs to HoloLens to phones. Check out how most of UWP can also be used by developers targeting traditional desktop applications.

New Year, New Dev – Windows IoT Core

Learn how easy it is to start developing applications to deploy on IoT devices such as the Raspberry Pi 3.

Project Rome for Android Update: Now with App Services Support

Project Rome developers have had a month to play with Project Rome for Android SDK, and we hope you are as excited about its capabilities as we are! In this month’s release, see what support we bring for app services.

How the UWP Community Toolkit helps Windows developers easily create well-designed and user-friendly apps

In August 2016, we introduced the open-source UWP Community Toolkit and we recently caught up with two developers who have used the toolkit to help create their apps. Check out what they had to say.

Download Visual Studio to get started.

The Windows team would love to hear your feedback. Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.


Project Rome for Android Update: Now with App Services Support

Project Rome developers have had a month to play with the Project Rome SDK for Android (Android SDK), and we hope you are as excited about its capabilities as we are! In this month’s release, we are thrilled to bring you support for app services. Previously, the SDK let you launch a URI from an Android device on a Windows device, but it was limited to sending that URI. With the introduction of app services, you can now easily exchange messages between Android and Windows devices.

What are App Services?

In short, app services allow your app to provide services that other applications can interact with. This enables an Android application to invoke an app service in a Windows application to perform tasks behind the scenes. This blog post is focused on how to use app services between Android and Windows devices. For a deeper look at app services on Windows, go here.
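
For context, here is a minimal sketch of what the Windows side of such an app service might look like in a UWP app. It is not taken from the SDK sample: it assumes the service is declared as an app service extension in Package.appxmanifest, and it shows only the background-activation and request-handling pieces added to App.xaml.cs.

using Windows.ApplicationModel.Activation;
using Windows.ApplicationModel.AppService;
using Windows.ApplicationModel.Background;
using Windows.Foundation.Collections;
using Windows.UI.Xaml;

sealed partial class App : Application
{
    private AppServiceConnection _connection;
    private BackgroundTaskDeferral _deferral;

    // Called when the app service declared in the manifest is invoked.
    protected override void OnBackgroundActivated(BackgroundActivatedEventArgs args)
    {
        var details = args.TaskInstance.TriggerDetails as AppServiceTriggerDetails;
        if (details == null)
        {
            return;
        }

        // Keep the background task alive while the connection is open.
        _deferral = args.TaskInstance.GetDeferral();
        _connection = details.AppServiceConnection;
        _connection.RequestReceived += OnRequestReceived;
    }

    private async void OnRequestReceived(AppServiceConnection sender, AppServiceRequestReceivedEventArgs args)
    {
        var messageDeferral = args.GetDeferral();

        // Echo the ping back; a real music app would act on the incoming command instead.
        var response = new ValueSet
        {
            ["Type"] = "pong",
            ["CreationDate"] = args.Request.Message["CreationDate"]
        };
        await args.Request.SendResponseAsync(response);

        messageDeferral.Complete();
    }
}

The app service name and package family name that the Android client passes (the APP_SERVICE and APP_IDENTIFIER values in the code later in this post) must match what the Windows app declares.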

Messaging Between Connected Devices

Let’s circle back to the example in the original blog post. Paul is an app developer who has integrated the Android SDK into his Contoso Music app, giving users the ability to launch the app across devices without skipping a beat. That experience was enabled using the RemoteLaunch APIs, and it has been a great feature for his app. Paul has an Android phone and listens to music while he goes out for a run. When he gets home, he can easily launch the app on his Xbox—with surround sound speakers—to continue playing with higher quality sound.

As Paul moves about the home, he often finds it frustrating that he has to go back to the Xbox to control the music. On a typical day he loads a playlist but finds himself jumping around from song to song, depending on his mood. This is where app services come in.

Now, Paul can add the ability to control the music app running on his Xbox from his Android phone. This works well for Paul because he always has his phone with him, so it’s much more convenient than going to the Xbox every time he wants to change the song. Once the Android app establishes an AppServiceClientConnection, messages can flow between the devices.

Here’s a look at the Android SDK app services in code.

First, discover devices using RemoteSystemDiscovery; a discovered RemoteSystem is then used to build the connection request:


// Create a RemoteSystemDiscovery object with a Builder
RemoteSystemDiscovery.Builder discoveryBuilder;

// Implement the IRemoteSystemDiscoveryListener to be used for the callback
discoveryBuilder = new RemoteSystemDiscovery.Builder().setListener(new IRemoteSystemDiscoveryListener() {
    @Override
    public void onRemoteSystemAdded(RemoteSystem remoteSystem) {
        Log.d(TAG, "RemoteSystemAdded = " + remoteSystem.getDisplayName());
        devices.add(new Device(remoteSystem));
    }
});

// Start discovering devices
startDiscovery();

Second, establish an AppServiceClientConnection. The IAppServiceClientConnectionListener handles the status of the connection, while the IAppServiceResponseListener handles the response to the message.

AppServiceClientConnection


// Create an AppServiceClientConnection
private void connectAppService(Device device) {
    _appServiceClientConnection = new AppServiceClientConnection(APP_SERVICE,
        APP_IDENTIFIER,
        new RemoteSystemConnectionRequest(device.getSystem()),
        new AppServiceClientConnectionListener(),
        new AppServiceResponseListener());
}

AppServiceClientConnection callback


// Implement the IAppServiceClientConnectionListener used to callback
// the AppServiceClientConnection
private class AppServiceClientConnectionListener implements IAppServiceClientConnectionListener {

    // Handle the cases for success, error, and closed connections
    @Override
    public void onSuccess() {
        Log.i(TAG, "AppService connection opened successfully");
    }

    @Override
    public void onError(AppServiceClientConnectionStatus status) {
        Log.e(TAG, "AppService connection error status = " + status.toString());
    }

    @Override
    public void onClosed() {
        Log.i(TAG, "AppService connection closed");
    }
}

AppServiceClientResponse callback


// Implement the IAppServiceResponseListener used to callback
// the AppServiceClientResponse
private class AppServiceResponseListener implements IAppServiceResponseListener {
    @Override
    public void responseReceived(AppServiceClientResponse response) {
        AppServiceResponseStatus status = response.getStatus();

        if (status == AppServiceResponseStatus.SUCCESS)
        {
            Bundle bundle = response.getMessage();
            Log.i(TAG, "Received successful AppService response");

            String dateStr = bundle.getString("CreationDate");

            DateFormat df = new SimpleDateFormat(DATE_FORMAT);
            try {
                Date startDate = df.parse(dateStr);
                Date nowDate = new Date();
                long diff = nowDate.getTime() - startDate.getTime();
                runOnUiThread(new SetPingText(Long.toString(diff)));
            } catch (ParseException e) {
                e.printStackTrace();
            }
        }
        else
        {
            Log.e(TAG, "Did not receive successful AppService response");
        }
    }
}

Xamarin

That’s not all: we have updated the Xamarin for Android sample with app services, too.

From the sample, these two functions are used in the RemoteSystemActivity class to connect, and then ping, via app services.

AppServiceClientConnection


private async void ConnectAppService(string appService, string appIdentifier, RemoteSystemConnectionRequest connectionRequest)
{
    // Create AppServiceClientConnection
    this.appServiceClientConnection = new AppServiceClientConnection(appService, appIdentifier, connectionRequest);
    this.id = connectionRequest.RemoteSystem.Id;

    try
    {
        // OpenRemoteAsync returns a Task<AppServiceClientConnectionStatus>
        var status = await this.appServiceClientConnection.OpenRemoteAsync();
        Console.WriteLine("App Service connection returned with status " + status.ToString());
    }
    catch (ConnectedDevicesException e)
    {
        Console.WriteLine("Failed during attempt to create AppServices connection");
        e.PrintStackTrace();
    }
}
	 

SendMessageAsync


private async void SendPingMessage()
{
    // Create the message to send
    Bundle message = new Bundle();
    message.PutString("Type", "ping");
    message.PutString("CreationDate", DateTime.Now.ToString(CultureInfo.InvariantCulture));
    message.PutString("TargetId", this.id);

    try
    {
        var response = await this.appServiceClientConnection.SendMessageAsync(message);
        AppServiceResponseStatus status = response.Status;

        if (status == AppServiceResponseStatus.Success)
        {
            // Create the response to the message
            Bundle bundle = response.Message;
            string type = bundle.GetString("Type");
            DateTime creationDate = DateTime.Parse(bundle.GetString("CreationDate"));
            string targetId = bundle.GetString("TargetId");

            DateTime nowDate = DateTime.Now;
            int diff = nowDate.Subtract(creationDate).Milliseconds;

            this.RunOnUiThread(() =>
            {
                SetPingText(this as Activity, diff.ToString());
            });
        }
    }
    catch (ConnectedDevicesException e)
    {
        Console.WriteLine("Failed to send message using AppServices");
        e.PrintStackTrace();
    }
}
	 

All documentation and code for both Java and Xamarin can be found on our GitHub here.

Staying Connected with Project Rome

The power of the Project Rome platform is centered around connecting devices (both Windows and Android). With the introduction of app services functionality into the Android SDK, we continue to provide the tools developers need to create highly compelling experiences.

To learn more about the capabilities of the Android SDK, browse sample code and get additional resources related to the platform, check out the information below:

The Windows team would love to hear your feedback. Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.


ICYMI – Your weekly TL;DR

Busy coding weekend ahead? Before you go heads-down, get the latest from this week in Windows Developer below.

Getting Started with a Mixed Reality Platformer Using Microsoft HoloLens

The platform game genre has undergone many revolutions – and with mixed reality and HoloLens, we all have the opportunity to expand the platform game yet again. What will you build?

Windows 10 SDK Preview Build 15042 Released!

A new Windows 10 Creators Update SDK Preview was released this week! Read about what’s new in 15042.

Announcing the Xbox Live Creators Program

The Xbox Live Creators Program was announced at GDC on Wednesday, starting with an Insider Preview that gives any developer the opportunity to publish Xbox Live-enabled games on Windows 10 PCs along with Xbox One consoles. Get the details here.

Just Released – Windows Developer Evaluation Virtual Machines – February 2017 Build

And last but not least – the February 2017 edition of evaluation Windows developer virtual machines on Windows Dev Center was just released. The VMs come in Hyper-V, Parallels, VirtualBox and VMWare flavors. Get ‘em all!

Download Visual Studio to get started.

The Windows team would love to hear your feedback. Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.


Announcing the Xbox Live Creators Program

Today at GDC we announced the launch of the Xbox Live Creators Program, starting with an Insider Preview that gives any developer the opportunity to publish Xbox Live-enabled games on Windows 10 PCs and Xbox One consoles.

The Creators Program provides game developers access to Xbox Live sign-in, presence and select social features that can all be integrated with their UWP games, and then they can publish their game to Xbox One and Windows 10. This means your title can be seen by every Xbox One owner across the Xbox One family of devices, including Project Scorpio this holiday, as well as hundreds of millions of Windows 10 PCs.

What do you get with the Xbox Live Creators Program?

First, we are opening publishing to the Xbox One console. With the Xbox Live Creators Program, you can ship your UWP game on Xbox One, Windows 10 PC, or simultaneously on both platforms. And because Xbox One offers players a curated store experience, games from the Creators Program will appear in a new, distinct Creators game section in the Store.

Second, we’re making it easy to integrate with Xbox Live using the Xbox Live Creators SDK. Take advantage of the following capabilities:

  • Xbox Live sign-in and profile, including gamertag.
  • Xbox Live presence, recently played and activity feed.
  • Xbox Live social, including friends, Game Hubs, clubs, party chat, gameDVR and Beam broadcast.
  • Xbox Live leaderboards and featured stats.
  • Title Storage and Connected Storage.

Any developer who wants to take advantage of more Xbox Live capabilities and development and marketing support for their game should apply and enroll into the ID@Xbox program.

What tools can I develop with?

The Creators Program enables you to easily integrate Xbox Live into your existing UWP projects. Supported game engines include Construct 2, MonoGame, Unity and Xenko, all of which can create beautiful games; others may also work. And you can develop games for the console without a dev kit.
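
To give a feel for the integration, here is a minimal sketch of signing a user in from a UWP game. The Microsoft.Xbox.Services.System names are assumed from the Xbox Live API that ships with the Creators SDK, and error handling is trimmed for brevity.

using System;
using System.Threading.Tasks;
using Microsoft.Xbox.Services.System;
using Windows.UI.Core;

public static class XboxLiveSignIn
{
    // Minimal sketch: signing in to Xbox Live from a UWP game with the Creators SDK.
    public static async Task<XboxLiveUser> SignInAsync(CoreDispatcher dispatcher)
    {
        var user = new XboxLiveUser();

        // Try silent sign-in first, then fall back to the interactive sign-in UI.
        SignInResult result = await user.SignInSilentlyAsync(dispatcher);
        if (result.Status != SignInStatus.Success)
        {
            result = await user.SignInAsync(dispatcher);
        }

        return result.Status == SignInStatus.Success ? user : null;
    }
}

A successful sign-in gives you the XboxLiveUser (and its gamertag) that the presence, social and leaderboard capabilities listed above build on.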

How do I get started?

  1. Join the Developer Preview at https://developer.microsoft.com/games/xbox/xboxlive/creator. This will give you access to the Creators Program configuration pages in Dev Center.
  2. Download and start using the Xbox Live Creators SDK.

While the Xbox Live Creators Program is in limited release to insiders, you can integrate and configure services using the SDK and the Dev Center. However, you will not be able to publish to the Store. We’ll be enabling publishing in the near future, so stay tuned!

To learn more, browse sample code and ask questions, check out the following documentation and communities:

We’d love to hear your feedback! Use the Xbox Live Creators Program UserVoice site to voice your suggestions.


Getting Started with a Mixed Reality Platformer Using Microsoft HoloLens

The platform game genre has undergone constant evolution, from its earliest incarnations in Donkey Kong and Pitfall to recent variations like Flappy Bird. Shigeru Miyamoto’s Super Mario Bros. is recognized as the best platform game of all time, setting a high bar for everyone who came after. The Lara Croft series built on Shigeru’s innovations by taking the standard side-scrolling platformer and expanding it into a 3D world. With mixed reality and HoloLens, we all have the opportunity to expand the world of the platform game yet again.

Standard video game conventions undergo a profound change when you put a platformer in a mixed reality environment. First of all, instead of sitting in a chair and moving your character inside your display screen, you physically follow your character as he moves around the real world. Second, the obstacles your protagonist encounters aren’t just digital ones but also physical objects in the real world, like tables and chairs and stacks of books. Third, because every room you play in effectively becomes a new level, the mixed reality platform game never runs out of levels and every level presents unique challenges. Instead of comparing scores for a certain game stage, you will need to compare how well you did in the living room—or in Jane’s kitchen or in Shigeru’s basement.

In this post, you will learn how to get started building a platform game for HoloLens using all free assets. In doing so, you will learn the basics of using Spatial Mapping to scan a room so your player character can interact with it. You will also use the slightly more advanced features of Spatial Understanding to determine characteristics of the game environment. Finally, all of this will be done in the Unity IDE (currently 5.5.0f3) with the open source HoloToolkit.

Creating your game world with Spatial Mapping

How does HoloLens make it possible for virtual objects and physical objects to interact?  The HoloLens is equipped with a depth camera, similar to the Kinect v2’s depth camera, that progressively scans a room in order to create a spatial map through a technique known as spatial mapping. It uses this data about the real world to create 3D surfaces in the virtual world. Then, using its four environment-aware cameras, it positions and orients the 3D reconstruction of the room in correct relation to the player. This map is often visualized at the start of HoloLens applications as a web of lines blanketing the room the player is in. You can also sometimes trigger this visualization by simply tapping in the air in front of you while wearing the HoloLens.

To play with spatial mapping, create a new 3D project in Unity. You can call the project “3D Platform Game.” Create a new scene for this game called “main.”

Next, add the HoloToolkit unity package to your app. You can download the package from the HoloToolkit project’s GitHub repository. This guide uses HoloToolkit-Unity-v1.5.5.0.unitypackage. In the Unity IDE, select the Assets tab. Then click on Import Package -> Custom Package and find the download location of the HoloToolkit to import it into the scene.

The HoloToolkit provides lots of useful helpers and shortcuts for developing a HoloLens app. Under the HoloToolkit menu, there is a Configure option that lets you correctly rig your game for HoloLens. After being sure to save your scene and project, click on each of these options to configure your scene, your project and your capability settings. Under capabilities, you must make sure to check off SpatialPerception—otherwise spatial mapping will not work. Also, be sure to save your project after each change. If for some reason you would prefer to do this step manually, there is documentation available to walk you through it.

To add spatial mapping functionality to your game, all you need to do is drag the SpatialMapping prefab into your scene from HoloToolkit -> SpatialMapping -> Prefabs. If you build and deploy the game to your HoloLens or HoloLens Emulator now, you will be able to see the web mesh of surface reconstruction occurring.

Congratulations! You’ve created your first level.

Adding a protagonist and an Xbox Controller

The next step is to create your protagonist. If you are lucky enough to have a Mario or a Luigi rigged model, you should definitely use that. In keeping with the earlier promise to use only free assets, however, this guide will use the complimentary Ethan asset.

Go to the Unity menu and select Assets -> Import Package -> Characters. Copy the whole package into your game by clicking Import. Finally, drag the ThirdPersonController prefab from Assets -> Standard Assets -> Characters -> ThirdPersonCharacter -> Prefabs into your scene.

Next, you’ll want a Bluetooth controller to steer your character. Newer Xbox One controllers support Bluetooth. To get one to work with HoloLens, you’ll need to closely follow these directions in order to update the firmware on your controller. Then pair the controller to your HoloLens through the Settings -> Devices menu.

To support the Xbox One controller in your game, you should add another free asset. Open the Asset Store by clicking on Window -> Asset Store and search for Xbox Controller Input for HoloLens. Import this package into your project.

You can hook this up to your character with a bit of custom script. In your scene, select the ThirdPersonController prefab. Find the Third Person User Control script in the Inspector window and delete it. You’re going to write your own custom control that depends on the Xbox Controller package you just imported.

In the Inspector window again, go to the bottom and click on Add Component -> New Script. Name your script ThirdPersonHoloLensControl and copy/paste the following code into it:


using UnityEngine;
using HoloLensXboxController;
using UnityStandardAssets.Characters.ThirdPerson;

public class ThirdPersonHoloLensControl : MonoBehaviour
{

    private ControllerInput controllerInput;
    private ThirdPersonCharacter m_Character;
    private Transform m_Cam;                
    private Vector3 m_CamForward;            
    private Vector3 m_Move;
    private bool m_Jump;                      

    public float RotateAroundYSpeed = 2.0f;
    public float RotateAroundXSpeed = 2.0f;
    public float RotateAroundZSpeed = 2.0f;

    public float MoveHorizontalSpeed = 1f;
    public float MoveVerticalSpeed = 1f;

    public float ScaleSpeed = 1f;


    void Start()
    {
        controllerInput = new ControllerInput(0, 0.19f);
        // get the transform of the main camera
        if (Camera.main != null)
        {
            m_Cam = Camera.main.transform;
        }

        m_Character = GetComponent<ThirdPersonCharacter>();
    }

    // Update is called once per frame
    void Update()
    {
        controllerInput.Update();
        if (!m_Jump)
        {
            m_Jump = controllerInput.GetButton(ControllerButton.A);
        }
    }


    private void FixedUpdate()
    {
        // read inputs
        float h = MoveHorizontalSpeed * controllerInput.GetAxisLeftThumbstickX();
        float v = MoveVerticalSpeed * controllerInput.GetAxisLeftThumbstickY();
        bool crouch = controllerInput.GetButton(ControllerButton.B);

        // calculate move direction to pass to character
        if (m_Cam != null)
        {
            // calculate camera relative direction to move:
            m_CamForward = Vector3.Scale(m_Cam.forward, new Vector3(1, 0, 1)).normalized;
            m_Move = v * m_CamForward + h * m_Cam.right;
        }


        // pass all parameters to the character control script
        m_Character.Move(m_Move, crouch, m_Jump);
        m_Jump = false;
    }
}

This code is a variation on the standard controller code. Now that it is attached, it will let you use a Bluetooth-enabled Xbox One controller to move your character. Use the A button to jump. Use the B button to crouch.

You now have a first level and a player character you can move with a controller: pretty much all the necessary components for a platform game. If you deploy the project as is, however, you will find that there is a small problem. Your character falls through the floor.

This happens because, while the character appears as soon as the scene starts, it actually takes a bit of time to scan the room and create meshes for the floor. If the character shows up before those meshes are placed in the scene, he will simply fall through the floor and keep falling indefinitely because there are no meshes to catch him.

How ‘bout some spatial understanding

In order to avoid this, the app needs a bit of spatial smarts. It needs to wait until the spatial meshes are mostly completed before adding the character to the scene. It should also scan the room and find the floor so the character can be added gently rather than dropped into the room. The SpatialUnderstanding prefab will help you accomplish both of these requirements.

Add the Spatial Understanding prefab to your scene. It can be found in Assets -> HoloToolkit -> SpatialUnderstanding -> Prefabs.

Because the SpatialUnderstanding game object also draws a wireframe during scanning, you should disable the visual mesh used by the SpatialMapping game object by deselecting Draw Visual Mesh in its Spatial Mapping Manager script. To do this, select the SpatialMapping game object, find the Spatial Mapping Manager in the Inspector window and uncheck Draw Visual Mesh.
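
If you would rather toggle this from code than from the Inspector (for example, to show the mesh while scanning and hide it afterwards), the Spatial Mapping Manager exposes the same switch. This is a small sketch assuming the HoloToolkit version referenced above; the namespace may differ slightly between toolkit releases.

using HoloToolkit.Unity;   // may be HoloToolkit.Unity.SpatialMapping in other toolkit versions
using UnityEngine;

public class MeshVisibilityToggle : MonoBehaviour
{
    void Start()
    {
        // Equivalent to unchecking Draw Visual Mesh on the Spatial Mapping Manager component.
        SpatialMappingManager.Instance.DrawVisualMeshes = false;
    }
}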

You now need to add some orchestration to the game to prevent the third person character from being added too soon. Select ThirdPersonController in your scene. Then go to the Inspector panel and click on Add Component -> New Script. Call your script OrchestrateGame. While this script could really be placed anywhere, attaching it to the ThirdPersonController will make it easier to manipulate your character’s properties.

Start by adding HideCharacter and ShowCharacter methods to the OrchestrateGame class. This allows you to make the character invisible until you are ready to add him to the game level (the room).


    private void ShowCharacter(Vector3 placement)
    {
        var ethanBody = GameObject.Find("EthanBody");
        ethanBody.GetComponent<SkinnedMeshRenderer>().enabled = true;
        m_Character.transform.position = placement;
        var rigidBody = GetComponent<Rigidbody>();
        rigidBody.angularVelocity = Vector3.zero;
        rigidBody.velocity = Vector3.zero;        
    }

    private void HideCharacter()
    {
        var ethanBody = GameObject.Find("EthanBody");
        ethanBody.GetComponent<SkinnedMeshRenderer>().enabled = false;
    }

When the game starts, you will initially hide the character from view. More importantly, you will hook into the SpatialUnderstanding singleton and handle its ScanStateChanged event. Once the scan is done, you will use spatial understanding to correctly place the character.


    private ThirdPersonCharacter m_Character;

    void Start()
    {
        m_Character = GetComponent<ThirdPersonCharacter>();
        SpatialUnderstanding.Instance.ScanStateChanged += Instance_ScanStateChanged;
        HideCharacter();
    }

    private void Instance_ScanStateChanged()
    {
        if ((SpatialUnderstanding.Instance.ScanState == SpatialUnderstanding.ScanStates.Done) &&
            SpatialUnderstanding.Instance.AllowSpatialUnderstanding)
        {
            PlaceCharacterInGame();
        }
    }

How do you decide when the scan is completed? You could set up a timer and wait for a predetermined length of time to pass. But this might provide inconsistent results. A better way is to take advantage of the spatial understanding functionality in the HoloToolkit.

Spatial understanding is constantly evaluating surfaces picked up by the spatial mapping component. You will set a threshold to decide when you have retrieved enough spatial information. Every time the Update method is called, you will evaluate whether the threshold has been met, as determined by the spatial understanding module. If it has, you call the RequestFinishScan method on SpatialUnderstanding to get it to finish scanning and set its ScanState to Done.


    private bool m_isInitialized;

    public float kMinAreaForComplete = 50.0f;
    public float kMinHorizAreaForComplete = 25.0f;
    public float kMinWallAreaForComplete = 10.0f;
    // Update is called once per frame
    void Update()
    {
        // check if enough of the room is scanned
        if (!m_isInitialized && DoesScanMeetMinBarForCompletion)
        {
            // let service know we're done scanning
            SpatialUnderstanding.Instance.RequestFinishScan();
            m_isInitialized = true;
        }
    }

    public bool DoesScanMeetMinBarForCompletion
    {
        get
        {
            // Only allow this when we are actually scanning
            if ((SpatialUnderstanding.Instance.ScanState != SpatialUnderstanding.ScanStates.Scanning) ||
                (!SpatialUnderstanding.Instance.AllowSpatialUnderstanding))
            {
                return false;
            }

            // Query the current playspace stats
            IntPtr statsPtr = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStatsPtr();
            if (SpatialUnderstandingDll.Imports.QueryPlayspaceStats(statsPtr) == 0)
            {
                return false;
            }
            SpatialUnderstandingDll.Imports.PlayspaceStats stats = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStats();

            // Check our preset requirements
            if ((stats.TotalSurfaceArea > kMinAreaForComplete) ||
                (stats.HorizSurfaceArea > kMinHorizAreaForComplete) ||
                (stats.WallSurfaceArea > kMinWallAreaForComplete))
            {
                return true;
            }
            return false;
        }
    }

Once spatial understanding has determined that enough of the room has been scanned to start the level, you can use spatial understanding one more time to determine where to place your protagonist. First, the PlaceCharacterInGame method, shown below, tries to determine the Y coordinate of the room floor. Next, the main camera object is used to determine the direction the HoloLens is facing in order to find a coordinate position two meters in front of the HoloLens. This position is combined with the Y coordinate of the floor in order to place the character gently on the ground in front of the player.


private void PlaceCharacterInGame()
{
    // use spatial understanding to find the floor
    SpatialUnderstandingDll.Imports.QueryPlayspaceAlignment(SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceAlignmentPtr());
    SpatialUnderstandingDll.Imports.PlayspaceAlignment alignment = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceAlignment();

    // find the position 2 meters in front of the camera
    var inFrontOfCamera = Camera.main.transform.position + Camera.main.transform.forward * 2.0f;

    // place the character on the floor, 2 meters ahead of the player
    ShowCharacter(new Vector3(inFrontOfCamera.x, alignment.FloorYValue, inFrontOfCamera.z));

    // hide the spatial understanding mesh
    var customMesh = SpatialUnderstanding.Instance.GetComponent<SpatialUnderstandingCustomMesh>();
    customMesh.DrawProcessedMesh = false;
}

You complete the PlaceCharacterInGame method by making the meshes invisible to the player. This reinforces the illusion that your protagonist is running into and jumping over objects in the real world. The last thing needed to finish this game, level design, is unfortunately too complex to cover in this post.

Because this platform game is built for mixed reality, however, you have an interesting choice to make as you design your level. You can do level design the traditional way, using 3D models. Alternatively, you can do it using real-world objects that the character must run between and jump over. The best approach may even involve mixing the two.

Conclusion

To paraphrase Shakespeare, all the world’s a stage and every room in it is a level. Mixed reality has the power to create new worlds for us—but it also has the power to make us look at the cultural artifacts and conventions we already have, like the traditional platform game, in entirely new ways. Where virtual reality is largely about escapism, the secret of mixed reality may simply be that it makes us appreciate what we already have by giving us fresh eyes with which to look at it.

Read More

Real-Time Communications on the Universal Windows Platform with WebRTC and ORTC

Readers of this blog interested in Real-Time Communications are probably familiar with Google’s WebRTC project. From the WebRTC site:

“WebRTC is a free, open project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs. The WebRTC components have been optimized to best serve this purpose.”

At Microsoft, we’ve seen tremendous support grow for WebRTC over the past five years. One of the most pivotal uses of WebRTC is building native video chat apps, which now reach more than one billion users.

Google’s native supported platforms for WebRTC include iOS, Android and traditional Win32 desktop apps. On Windows, Microsoft Edge already supports ORTC APIs and now supports WebRTC 1.0 APIs in Insider Preview builds on Desktop devices. For example, if you need to build a WebRTC app in HTML/JS targeted at desktop browsers or desktop web apps using the Web App Template, then Microsoft Edge and the Windows web platform are a great choice.

But what if you want to write in C# or C++ and run WebRTC on Xbox, HoloLens, Surface Hub or Windows Phone, or write in HTML/JS and run on Raspberry Pi? What if you are using Google’s iOS and Android libraries and need bit-for-bit compatibility for your UWP application? What if you modify WebRTC source in your application and need to use those modifications in your Universal Windows Platform (UWP) application?

To fulfill these additional scenarios, we have ported and optimized WebRTC 1.0 for UWP. This is now available as an Open Source project on GitHub as well as in binary form as a NuGet package. The project is 100 percent compatible with Google’s source, enabling scenarios such as a WebRTC video call from Xbox running UWP to a Chrome browser on the Desktop.


WebRTC ChatterBox sample running as a native Windows 10 application.

Microsoft has also long been a supporter of the ORTC APIs and we work closely with the Open Peer Foundation to ensure optimal support of ORTC  for UWP apps. ORTC is an evolution of the WebRTC API, which gives developers fine-grained control over the media and data transport channels, and uses a standard JSON format to describe peer capabilities rather than SDP, which is unique to WebRTC.

ORTC was designed with WebRTC interoperability in mind and all media is wire-compatible with WebRTC. ORTC also includes an adapter that converts SDP to JSON and exposes APIs that match WebRTC. Those two considerations make it possible for developers to migrate from WebRTC to ORTC at their own pace and enable video calls between WebRTC and ORTC clients. ORTC for UWP is available both as an Open Source project on GitHub as well as a NuGet package.

The net result of combined UWP and Edge support for WebRTC 1.0 and ORTC is that all Windows 10 platforms support RTC and developers can choose the solution they prefer.

Let’s take a look at an example from our samples repository on GitHub.

DataChannel via ORTC

The DataChannel, part of both the WebRTC and ORTC specs, is a method for two peers to exchange arbitrary data. This can be very useful in IoT applications – for example, a Raspberry Pi may collect sensor data and relay it to a mobile or HoloLens peer in real time. Keep in mind that while the sample code below uses ORTC APIs, the same scenario is possible via WebRTC.

To exchange messages between peers in ORTC, a few things must happen first (see MainPage.OpenDataChannel() in the sample code):

  1. The peers must exchange ICE candidates, a successful pair of which will be used to establish a peer-to-peer connection.
  2. The peers must exchange ICE parameters and start an ICE transport session – the underlying data path used for the peers to exchange data.
  3. The peers must exchange DTLS parameters, which include the encryption certificate and fingerprint data used to establish a secure peer-to-peer connection, and start a DTLS transport session.
  4. The peers must exchange SCTP capabilities and start an SCTP transport session. At this stage, a secure connection between the peers has been established and a DataChannel can be opened.

It’s important to understand two things about the above sequence. First, the data exchanges are in simple JSON, and as long as two peers can exchange strings, they can exchange all necessary data. Second, the identification of the peers and the exchange of these parameters, called signaling, is outside the specification of ORTC and WebRTC by design. There are plenty of mechanisms available for signaling and we won’t go into them here, but NFC, Bluetooth RFCOMM or a simple TCP socket server like the one included in the sample code would suffice.
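
To make that sequence concrete, here is a minimal sketch of steps 1 through 3 on the initiating peer. It assumes the UWP projection of ORTC mirrors the W3C ORTC object model (RTCIceGatherer, RTCIceTransport, RTCDtlsTransport and their Start methods); the exact constructors, event delegate shapes and method names may differ slightly in the library, and SendToPeerAsJson, remoteIceParams, remoteDtlsParams and certificates are placeholders for your own signaling code and certificate setup. Step 4, the SCTP transport and DataChannel, is shown in the excerpts that follow.


// Sketch only: assumes the UWP API follows the W3C ORTC object model. Verify names against the library.

// 1. Gather ICE candidates and send each one to the remote peer via your signaling channel.
var gatherOptions = new RTCIceGatherOptions();          // add STUN/TURN servers here if needed
var iceGatherer = new RTCIceGatherer(gatherOptions);
iceGatherer.OnLocalCandidate += evt =>
    SendToPeerAsJson(evt.Candidate);                    // SendToPeerAsJson: placeholder signaling helper

// 2. Exchange ICE parameters and start the ICE transport (the peer-to-peer data path).
var iceTransport = new RTCIceTransport(iceGatherer);
SendToPeerAsJson(iceGatherer.GetLocalParameters());
iceTransport.Start(iceGatherer, remoteIceParams, RTCIceRole.Controlling);  // remoteIceParams arrives via signaling

// 3. Exchange DTLS parameters and start the DTLS transport to secure the connection.
var dtlsTransport = new RTCDtlsTransport(iceTransport, certificates);      // certificates: your RTCCertificate list
SendToPeerAsJson(dtlsTransport.GetLocalParameters());
dtlsTransport.Start(remoteDtlsParams);                                     // remoteDtlsParams arrives via signaling

// 4. With DTLS up, create the RTCSctpTransport over dtlsTransport -- see the excerpt below.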

With the SCTP transport session established, the peers can open a Data Channel. The peer initiating the call creates an instance of RTCDataChannel, passing in the SCTP transport instance, and the remote peer receives the RTCSctpTransport.OnDataChannel event. When the remote peer receives this event, the Data Channel has been established and the peers can send messages to each other.

The code below is an excerpt from MainPage.Signaler_MessageFromPeer() in the sample code. The string message contains data received from the peer via the signaling method (in this case, the TCP socket server):


var sctpCaps = RTCSctpCapabilities.FromJsonString(message);

if (!_isInitiator)
{
    // The remote side will receive notification when the data channel is opened.
    // Send SCTP capabilities back to the initiator and wait.
    _sctp.OnDataChannel += Sctp_OnDataChannel;
    _sctp.Start(sctpCaps);

    var caps = RTCSctpTransport.GetCapabilities();
    _signaler.SendToPeer(peer.Id, caps.ToJsonString());
}
else
{
    // The initiator has received SCTP caps back from the remote peer, which means the remote
    // peer has already called _sctp.Start(). It's now safe to open a data channel, which will
    // fire the Sctp.OnDataChannel event on the remote peer.
    _sctp.Start(sctpCaps);
    _dataChannel = new RTCDataChannel(_sctp, _dataChannelParams);
    _dataChannel.OnMessage += DataChannel_OnMessage;
    _dataChannel.OnError += DataChannel_OnError;
}

When the DataChannel has been established, the remote peer receives the OnDataChannel event. The parameter data for that event includes a secure DataChannel which is open and ready to send messages:


private void Sctp_OnDataChannel(RTCDataChannelEvent evt)
{
    _dataChannel = evt.DataChannel;
    _dataChannel.OnMessage += DataChannel_OnMessage;
    _dataChannel.OnError += DataChannel_OnError;

    _dataChannel.SendMessage("Hello ORTC peer!");
}

You can now freely exchange encrypted messages between the peers over the DataChannel. The signaling server is no longer required and that connection can be closed.
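
For completeness, here is a minimal sketch of the two handlers wired up in the excerpts above. The event argument types (RTCMessageEvent, RTCErrorEvent) and their properties are assumed names used for illustration; check the sample code and library source for the exact signatures.


// Sketch only: RTCMessageEvent/RTCErrorEvent and their members are assumed names -- verify against the library.
private void DataChannel_OnMessage(RTCMessageEvent evt)
{
    // Process the text received from the remote peer, e.g. update the UI or log it.
    System.Diagnostics.Debug.WriteLine("Peer says: " + evt.Text);
}

private void DataChannel_OnError(RTCErrorEvent evt)
{
    // Log the failure; depending on your app you may want to tear down and re-establish the channel.
    System.Diagnostics.Debug.WriteLine("DataChannel error: " + evt.Error);
}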

Real-time peer connectivity in Universal Windows applications enables many exciting scenarios. We’ve seen developers use this technology to let a remote peer see what a HoloLens user sees in real time and interact with their 3D environment. Xbox developers have used the DataChannel to enable low-latency, FPS-style gaming. And one of our close collaborators, Blackboard, relies on the technology to stream classroom video feeds and enable collaboration in their Windows app. Check out our Universal Windows samples and the library source on GitHub – we look forward to your PRs!

Read More

Announcing Windows 10 Insider Preview Build 15019 for PC

Hello Windows Insiders!

Today we are excited to be releasing Windows 10 Insider Preview Build 15019 for PC to Windows Insiders in the Fast ring. FYI: This build does have some platform-related bugs that will impact the ability to play popular games on your PC. These platform bugs are unrelated to the new gaming features such as Game Mode. We recognize that this is painful for those wanting to try out the new gaming features announced this week. We deliberated a lot on whether to release this build to Insiders with these issues; however, we decided to go ahead and release it because we need feedback from Insiders on other areas of the OS. The team is working hard to get these platform bugs fixed, and we plan to push the new gaming features again once we release a build that includes those fixes.

What’s new in Build 15019

Xbox Gaming Features: As part of Windows and Xbox Insider Programs, updates have rolled out this week bringing some awesome new features for gamers. For more details – check out this blog post on Xbox Wire.

  • Built-in Beam streaming: Beam is the easiest and quickest way to stream gameplay, and it’s part of Insider Preview builds starting today for both your Windows 10 PC and Xbox One. After updating to this build, you can start Beam broadcasts by pulling up the Game bar — Windows + G.

Broadcasting to Beam in Windows 10

  • New Games section in Settings: A new settings page has been added to Windows 10’s Settings app specifically for gaming: Gaming. This new section will be identifiable with the Xbox logo. We’re also beginning to consolidate some system and user settings for gaming in this unified location, where PC users are accustomed to accessing their settings. Starting today, you’ll find settings for Game bar, GameDVR, Game Mode, and broadcasting and streaming in this new Gaming area. Not all elements of this section will be visible in today’s Windows Insider build, but we’ll continue to develop and deploy Gaming settings over time.

Games page in Settings

  • Game Mode: With Game Mode, it’s our goal to make Windows 10 the best Windows ever for gaming. Our vision is that Game Mode optimizes your Windows 10 PC for an improvement in game performance. To enable Game Mode, go to Settings > Gaming > Game Mode and toggle the feature on. Doing this will give you the ability to enable the feature for each UWP and Win32 title you play by pulling up the Game bar (Windows + G) and then clicking the Settings button. There you’ll be able to opt individual games into using Game Mode. (See below for a known issue regarding Game Mode in this build.)

Game Mode in Windows 10

We look forward to your feedback! We’ll test and refine these features between now and release and continue to evolve the Windows 10 gaming experience.

Windows Game bar improved full-screen support: We are continually adding full-screen Game bar support for more titles. In this build, we’ve added support for 17 additional games in full-screen mode with Windows Game bar. As always, just hit WIN + G to invoke Game bar to capture a recording or screenshot.

  • Battlefield 3
  • Call of Duty: Black Ops 2
  • Call of Duty: Black Ops 2 – Zombies
  • FIFA 14
  • FIFA 17
  • FIFA Manager 14
  • Grim Dawn
  • Guild Wars 2
  • Left 4 Dead 2
  • MapleStory
  • Paragon
  • Payday 2
  • Rocket League
  • The Elder Scrolls Online
  • The Sims 4
  • Tom Clancy’s Rainbow Six Siege
  • Warface

Tip: You can control this feature through the Windows Game bar settings. In the settings dialog, look for the “Show Game bar when I play full-screen games” checkbox. See Major Nelson’s post on Game bar for more info on how to adjust settings for best game performance.

Microsoft Edge can now read aloud: Last week many of you asked about this, and we are proud to announce that Microsoft Edge will now read aloud* your e-books! Just press the “read aloud” button at the top-right corner after opening one of your e-books and listen to Microsoft Edge read you the book, highlighting the line and word being read as it goes. This feature also extends to all non-Store EPUB files opened in Microsoft Edge.

Read aloud in Microsoft Edge.

*Supported languages: ar-EG, ca-ES, da-DK, de-DE, en-AU, en-CA, en-GB, en-IN, en-US, es-ES, es-MX, fi-FI, fr-CA, fr-FR, it-IT, ja-JP, nb-NO, nl-BE, nl-NL, pt-BR, pt-PT, sv-SE, tr-TR, zh-CN.

Microsoft Edge and emoji: Microsoft Edge will now display full-color, updated emoji by default on websites that use emoji.

New emoji support in Microsoft Edge.

Continuing our progress towards a more inclusive OOBE: You all gave us a TON of feedback on the new first experience over the past few weeks. The Windows Out-Of-Box-Experience (OOBE) Team offers their thanks and continues their work to reimagine how people set up their PCs for the first time. Building off what was first introduced in Build 15002, this build brings even more improvements to make setting up a PC more inclusive!

  • Privacy: The new privacy settings in the set up experience (OOBE) that Terry talked about in this blog post are now included in this build.

Privacy in OOBE.

  • Wi-Fi Captive Portal: The Wi-Fi connectivity experience in OOBE has been updated to support “captive portal” Wi-Fi networks.  When connecting to such a Wi-Fi hotspot, OOBE will navigate to a lightweight browser experience allowing you to complete the connection and reach the internet.  We’ve also included some updates allowing you to configure some basic properties for the Wi-Fi network during OOBE.
  • MSA Sign-in/Sign-up: The Microsoft Account (MSA) Sign-in and Sign-up flows in OOBE are now updated to the new design for the Windows 10 Creators Update. These experiences are now paginated and simplified, which will help reduce the overall cognitive load and improve accessibility in the MSA sign-in/sign-up experiences.

MSA in OOBE.

  • Windows Hello enrollment: Users can now enroll into Windows Hello using the new design implementation for the Windows 10 Creators Update with Cortana voiceover and support for speech input.
  • Updated voice: The audio track in this build is recorded by voice actors, so it is much friendlier and the intonations are better too (compared to the synthetic voice track we had in Build 15002).
  • Subtitles: The primary purpose behind this redesign of OOBE is to be inclusive and improve accessibility. This build supports subtitles to ensure that our deaf/hard of hearing users are also included in the new Cortana voiced OOBE.
  • Bug fixes and visual polish: This build also has improvements for visual polish and a good number of bug fixes which help in overall stability.

Insiders will still see that some pages (e.g., the Enterprise flows and the Ownership Disambiguation page) use the design seen in the Windows 10 Anniversary Update. This is a temporary state while we work to convert these pages to the new format as well.

Blue light is now night light: To more accurately reflect what the feature does, we’ve renamed it “night light”. We also made improvements to the feature, including the ability to preview the setting before applying it.

Blue light is now night light.

We also made some improvements in the range of color temperatures in the night light feature and fixed a few issues including:

  • We fixed an issue where right-clicking the night light quick action from the Action Center and selecting Settings brought up the Settings home page and not the night light specific setting.
  • We fixed an issue where waking your device from sleep or connecting a new monitor would not have the night light setting applied correctly.
  • We fixed an issue where explorer might hang after waking a device if night light was enabled.

Night light is still in the same place, so if you haven’t tried it out yet, you can do so via Settings > System > Display.

Resize your Virtual Machine Connection in Hyper-V: You can now quickly resize VMConnect by dragging the corners of the window and the guest operating system will automatically adjust to the new resolution. This requires that you are logged into the guest operating system and running in Enhanced session mode.

Store app and game download progress in Action Center: Building off of the work we shared with Build 15007, newly downloaded apps and games from the Store will now show download progress inside of the Action Center! Perfect for checking the status of a large game download while doing something else.

Store download progress in Action Center.

Improved discovery for Troubleshooters: Troubleshooters can find and fix many common problems for you. With Build 15019, we bring the latest piece of our ongoing effort to converge Control Panel into Settings and are happy to let you know that the Troubleshooters section of Control Panel has been migrated into Settings. We also flattened the hierarchy to make them easier to find, and added more solutions, too!  Head to Settings > Update & security > Troubleshoot to see the complete list. 

Troubleshoot settings page.

Improved high-DPI support for IT Pros: With Build 15002, we shared our new option to override a GDI-based app’s high-DPI scaling with our own System (Enhanced) scaling. With Build 15019, we’re happy to let you know that this System (Enhanced) application compatibility setting can now also be enabled or disabled via the Windows ADK for IT professionals, so you can make adjustments across a broad set of PCs.

Other changes, improvements, and fixes for PC

  • We fixed an issue where connecting an Xbox 360 or Xbox One Controller to your PC would cause the DWM to crash, resulting in your display flickering and/or appearing blank or black.
  • We fixed an issue where, with certain games, using Alt + Tab to change focus to a different window could cause both the newly focused window and the game to flicker.
  • We fixed an issue where when using Microsoft Edge with Narrator, you might hear “no item in view” or silence while tabbing or using other navigation commands.
  • We fixed an issue where pasting on top of selected text in a Web Note would result in Microsoft Edge crashing.
  • We fixed an issue that prevented some users from viewing Twitch.tv streams in Microsoft Edge.
  • We fixed an issue from recent builds where Microsoft Edge would crash when sharing a PDF.
  • We’ve updated the e-book viewer in Microsoft Edge so that if you’ve clicked on an image, you can now Ctrl + Mouse wheel to zoom.
  • We fixed an issue where typing [ into the F12 Developer Tools window wouldn’t work when using the Hungarian keyboard.
  • Custom scaling has been migrated from Control Panel to now be a subpage in Display Settings.
  • We fixed an issue where Taskbar preview icons were unexpectedly small on high-DPI devices.
  • To help save characters when typing in fields with a character limit, we’ve added a new ellipsis child key for Latin-based languages (such as English, German, and French) when you press and hold the period key on the touch keyboard.
  • We fixed an issue where in certain UWP apps, tapping outside of a text box currently with focus while in tablet mode wouldn’t dismiss the touch keyboard.
  • We fixed a typo in the new compatibility option to override high DPI scaling behavior for GDI-based apps.
  • We fixed an issue where newly pinned secondary tiles (for example, a pinned page from Settings) would unexpectedly appear in Start’s Recently Added list.
  • We’ve polished the animation when moving tiles in and out of folders on Start, and fixed an issue where it wasn’t possible to drag the final tile out of a folder onto the same row as the folder tile.
  • We fixed an issue from Build 15014 where using Hey Cortana might result in SpeechRuntime.exe using an unexpected amount of CPU.
  • We fixed an issue where, with a maximized Notepad window and enough text to require a scrollbar, the right-most edge of the scrollbar wouldn’t do anything when dragged in an attempt to scroll.
  • We fixed an issue where, after pressing Alt to set focus to the menu bar, certain apps could become unresponsive if then pressing Ctrl or clicking inside the app’s child window.
  • We fixed an issue where Cortana might crash when slowly typing out a UNC path that has already been typed out and opened through Cortana once before.
  • We fixed an issue where Default apps Settings would crash if you clicked an app under “Choose default app” and selected the option to look for an app in the Store.
  • We’ve updated Themes settings page to now contain a link to the Store to find more themes to download.
  • We fixed an issue where certain apps might crash after using the Open dialog to rename and open a folder.
  • We fixed an issue where Win + Shift + S wouldn’t work if the mode in Snipping Tool was set to something other than Rectangle.
  • We fixed an issue where you could end up with multiple Snipping Tool processes open after using Win + Shift + S and hitting Esc to stop the snip.
  • We fixed an issue where certain file attributes, such as +s, would be lost when copying or moving a folder to a different partition.
  • We fixed an issue where using Command Prompt with certain fonts could result in conhost.exe unexpectedly using a lot of CPU.
  • We updated Dial Settings to now list customized apps alphabetically.
  • We fixed an issue with Windows Ink where undoing and redoing a point erase could result in the ink reappearing in an unexpected order.
  • We improved Screen Sketch copy reliability.
  • We fixed an issue some insiders may have experienced recently with the mouse and keyboard sometimes going unresponsive for a few seconds at a time.
  • We fixed an issue resulting in certain apps crashing when you switched to Tablet Mode.
  • We fixed an issue where calendar appointments marked as Tentative or Out of Office were showing up as Free in the Taskbar clock and calendar flyout.
  • We fixed an issue where, if multiple folders were selected in Background Settings under Slideshow mode, slideshow would not work.
  • We fixed an issue where you could see the page flash when navigating from and back to Themes Settings.
  • We fixed an issue where the Bluetooth & other devices Settings page unexpectedly said “Systemsettings.Viewmodel.settingentry” at the bottom.
  • We’ve improved Settings reliability.
  • We fixed an issue where, when using Phonetic as the sorting method with the zh-tw display language, the clock on the lock screen wouldn’t appear.
  • Based on feedback, we’ve adjusted the look of the Virtual Touchpad to make the left/right buttons more visible.
  • We’ve fixed an issue that could result in the Netflix app crashing on launch. Try again and it should work. We also fixed an issue where on certain hardware types, the Netflix app would crash when starting a movie.
  • The game DOTA2 should now launch normally.

Known issues for PC

  • IMPORTANT: The download progress indicator shown when downloading this build is currently broken under Settings > Update & security > Windows Update. It may look like you’re getting stuck at 0% or at other percentages. Ignore the indicator and be patient. The build should download fine and the installation should kick off. See this forum post for more details.
  • After updating to this build, nonstop exceptions in the Spectrum.exe service may occur causing PCs to lose audio, disk I/O usage to become very high, and apps like Microsoft Edge to become unresponsive when doing certain actions such as opening Settings. As a workaround to get out of this state, you can delete C:\ProgramData\Microsoft\Spectrum\PersistedSpatialAnchors and reboot. For more details, see this forum post.
  • Some Windows Insiders may have had trouble connecting to certain Google sites due to an implementation of a new security model being rolled out to further enhance user security. The team is working on a resolution. In the meantime, users can access these sites from an InPrivate tab.
  • Extensions in Microsoft Edge do not work in this build. Extensions may appear to load but will not function as expected. This issue should be fixed in the next Insider release build we release. If you depend on extensions in Microsoft Edge, we recommend skipping this build. You can pause Insider Preview builds by going to Settings > Updates & security > Windows Insider Program, clicking on “Stop Insider Preview builds”, and choosing “Pause updates for a bit”.
  • Microsoft Edge F12 tools may intermittently crash, hang, and fail to accept inputs.
  • Microsoft Edge’s “Inspect Element” and “View Source” options don’t correctly launch to the DOM Explorer and Debugger, respectively.
  • Windows Insiders will unexpectedly see a “Mixed Reality” entry on the main page of Settings.
  • Some captive portal Wi-Fi networks may fail to connect during OOBE. If a captive portal network is using DNS hijacking to redirect to a secure site, the captive portal app will crash and the user cannot clear the portal.
  • Yes/No voice commands in the Wi-Fi portion of OOBE are currently failing.
  • Quicken 2016 will fail to run with an error stating .NET 4.6.1 is not installed. For Insiders familiar with Registry Editor, there is an optional workaround. Take ownership of the following registry keys and edit the “version” value to be 4.6.XXXXX instead of 4.7.XXXXX:

HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\NET Framework Setup\NDP\v4\Client

HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\NET Framework Setup\NDP\v4\Full

Note: Please take caution when editing the registry. Changing the wrong value can have unexpected and undesirable results.

  • Dragging apps from the all apps list to pin on Start’s tile grid won’t work. For now, please right-click on the desired app in order to pin it.
  • Some Tencent apps and games may crash or work incorrectly on this build.
  • Under Settings > Update & security > Windows Update you might see the text “Some Settings are managed by your organization” even though your PC isn’t being managed by an organization. This is a bug caused by an updated flight configuration setting for Insider Preview builds and does not mean your PC is being managed by anyone.
  • On some PCs, audio sporadically stops working with a ‘device in use’ error. We are investigating. Restarting the audio service may fix things for a bit.
  • The Action Center may sometimes appear blank and transparent without color. If you encounter this, try moving the taskbar to a different location on screen.
  • ADDED: For the update to Build 15019, two issues surrounding Windows Update have arisen: 1) You may see an error such as 0xC1900401, or a note that the build is not yet available for your device. 2) Your PC scans and finds Build 15019, but it appears to hang on “Initializing…” and doesn’t begin downloading the build. See this forum post.

Gaming known issues

  • Popular games may experience crashes or black screens when trying to load due to a platform issue.
  • When clicking on certain elements in desktop (Win32) games, the game minimizes and cannot be restored.
  • Game Mode is enabled system-wide by default; however, the ON/OFF toggle in Settings will incorrectly show it as OFF until you manually toggle the setting to ON, which causes it to update and accurately display the system-wide status of Game Mode.
  • Broadcasting to Beam via the Game bar currently requires a number of Privacy settings to be changed. Please visit this forum post.
  • Certain hardware configurations may cause the broadcast live review window in the Game bar to flash green while you are broadcasting. This does not affect the quality of your broadcast and is only visible to the broadcaster.

Community Updates

Next week, we are off to the NexTech Africa Conference in Nairobi, Kenya, where our team will have a presence in the keynote to kick off an East African fellowship, along with participating in panels related to Digital Transformation and Engineering at Scale. We are very excited to continue learning about places with less reliable connectivity and how we can build the best possible products for these markets.

We would also like to share a touching article written by longtime Windows Insider Adam McLellan on how the Insider program gave him a sense of community and belonging. Thank you for sharing!

Keep hustling team,
Dona <3

Read More