
Announcing Project Rome iOS SDK

Project Rome is a platform for enabling seamless cross-device and cross-platform experiences. The philosophy behind Project Rome is simple. App experiences shouldn’t be tied to a single device any more than data should be tied to a single device. Your apps, like your data, should travel with you.

Previously, this meant being able to switch between Windows devices while maintaining a single user experience. A few months ago, Project Rome features were extended to the Android platform, allowing you to start an app session on your Android phone and continue it on a Windows PC, an Xbox One or even a Surface Hub.

Now, Project Rome support is also being extended to the iOS platform. You can download the Project Rome SDK for iOS here.

Revisiting the Contoso music app

If you have been following the evolution of Project Rome, you’ll be familiar with our developer friend Paul and his example Contoso Music app. Paul was originally introduced in a blog post on Cross-device experiences to help us understand a typical Project Rome scenario.

He expanded his UWP music streaming app to run across multiple Windows devices tied to the same Microsoft Account (MSA). Using Project Rome, Paul changed how his app worked so a user streaming a song on a Windows PC could then transfer that song to his Xbox. Then, as he got ready to go out for a run, he could transfer the current playlist to his Windows Phone.

In the subsequent post, Paul developed an Android version of the Contoso Music app and used the Project Rome Android SDK to allow a user to start playing a song on an Android phone and continue playing it on a Windows device after arriving home. The Contoso Music app was now cross-platform, transferring smoothly from one platform to the next.

Extending to iOS

Let’s imagine that, based on the success of his Windows and Android versions, Paul develops an iOS version of Contoso Music. Examining his telemetry a few months later, Paul sees that the iOS app is doing well, just like the Windows and Android versions. However, there is a common theme in the user feedback: users find switching between devices difficult. So Paul wants to enable a scenario in which a user listening to music over headphones on an iPhone can walk into the living room and immediately switch to playing the same music on an Xbox connected to quality speakers.

With the Project Rome iOS SDK, Paul can create a bridge between iOS devices and Windows devices in two stages:

  • The RemoteSystems API allows the Contoso Music app to discover the Windows devices the user owns, whether they are on the same network or reachable through the cloud.
  • Once discovered, the RemoteLauncher API will launch the Contoso Music app on another Windows device.

How Paul gets it done

In order for Paul’s user to switch from playing music on an iOS device to a Windows device, his app must first find out about the other device. This requires using MSA OAuth to get permission to query for the user’s devices and then attempting to discover them, as shown in the code below.

// Asynchronously initialize the Rome Platform.
// Pass in self, as this class implements the CDOAuthCodeProviderDelegate protocol.
[CDPlatform startWithOAuthCodeProviderDelegate:self
                                     completion:^(NSError* clientError) {
  if (clientError) {
    // Handle error.
    return;
  }

  // Handle success, e.g. show the discovery screen.
}];

// Implementation of CDOAuthCodeProviderDelegate.
// The Rome SDK calls this delegate method when it needs an OAuth access code from the application.
- (NSError*)getAccessCode:(NSString*)signinUrl
               completion:(void (^)(NSError* error, NSString* accessCode))completion {
  // Stash away the callback the SDK gives us.
  _getTokenCallback = completion;

  // Show the interactive OAuth web view flow.
  // Once the OAuth flow completes or fails, invoke this callback.
  ...

  // Return nil as there was no error.
  return nil;
}
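
When the web view finishes, the app simply hands the result back through the stashed completion block. The snippet below is a minimal, hypothetical sketch of that hand-off; the oauthWebViewDidCompleteWithCode:error: method and the authCode variable are invented for illustration and are not part of the Rome SDK.

// Hypothetical continuation, not part of the SDK: called by the app's own
// OAuth web view controller once the interactive sign-in finishes or fails.
- (void)oauthWebViewDidCompleteWithCode:(NSString*)authCode error:(NSError*)error {
  if (_getTokenCallback) {
    // Hand the result back to the Rome SDK via the stashed completion block.
    _getTokenCallback(error, authCode);
    _getTokenCallback = nil;
  }
}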

Once initialized, Paul’s app can discover all devices in the user’s MSA device graph by initiating discovery with CDRemoteSystemDiscoveryManager. Information about discovered devices is raised through the CDRemoteSystemDiscoveryManagerDelegate protocol. In our example, we store each discovered device in an NSMutableArray property called discoveredSystems.

// Create an instance and pass 'self' as the delegate, as it implements CDRemoteSystemDiscoveryManagerDelegate.
CDRemoteSystemDiscoveryManager* remoteSystemDiscoveryManager = [[CDRemoteSystemDiscoveryManager alloc] initWithDelegate:self];

// Start discovery.
[remoteSystemDiscoveryManager startDiscovery];

// CDRemoteSystemDiscoveryManagerDelegate implementation
- (void)remoteSystemDiscoveryManager:
            (CDRemoteSystemDiscoveryManager*)discoveryManager
                              didFind:(CDRemoteSystem*)remoteSystem {
  @synchronized(self) {
    [self.discoveredSystems addObject:remoteSystem];
    // Refresh UI based upon updated state in discoveredSystems, e.g. repopulate the table.
  }
}

- (void)remoteSystemDiscoveryManager:
            (CDRemoteSystemDiscoveryManager*)discoveryManager
                           didUpdate:(CDRemoteSystem*)remoteSystem {
  NSString* id = remoteSystem.id;

  // Loop through and update the remote system instance if previously seen.
  @synchronized(self) {
    for (unsigned i = 0; i < self.discoveredSystems.count; i++) {
      CDRemoteSystem* currentRemoteSystem =
          [self.discoveredSystems objectAtIndex:i];
      NSString* currentId = currentRemoteSystem.id;

      if ([currentId isEqualToString:id]) {
        [self.discoveredSystems replaceObjectAtIndex:i withObject:remoteSystem];
        break;
      }
    }

    // Refresh UI based upon updated state in discoveredSystems, e.g. repopulate the table.
  }
}

The user can now select the device they want to transfer music to from the list of discovered devices. From the selected CDRemoteSystem, an instance of CDRemoteSystemConnectionRequest is created, as shown in the code below. Using CDRemoteLauncher, Paul can then remotely launch the app on the selected device, passing along any additional contextual information it needs, such as the song currently playing.

Here’s how to remote-launch http://www.bing.com on the selected device:

// Create a connection request using the CDRemoteSystem instance selected by the user
// (here, selectedSystem is the entry the user picked from discoveredSystems).
CDRemoteSystemConnectionRequest* request =
    [[CDRemoteSystemConnectionRequest alloc] initWithRemoteSystem:selectedSystem];

NSString* url = @"http://www.bing.com";

[CDRemoteLauncher launchUri:url
                withRequest:request
             withCompletion:^(CDRemoteLauncherUriStatus status) {
                 // Update UI based on the launch status.
             }];
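
For Paul’s scenario, the URI would not be a web page but a deep link that carries the playback context. The snippet below is a hypothetical sketch: the contoso-music:// scheme and its query parameters are invented for illustration, and the Windows version of the app would need to register that protocol to handle the launch.

// Hypothetical deep link; the scheme and its parameters are illustrative only.
NSString* songUri = @"contoso-music://play?trackId=42&positionSeconds=137";

// Reuse the connection request created above for the selected remote system.
[CDRemoteLauncher launchUri:songUri
                withRequest:request
             withCompletion:^(CDRemoteLauncherUriStatus status) {
                 // On success, pause local playback; otherwise surface an error.
             }];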

Voila! Paul has easily augmented his app with cross-device support for iOS.

Wrapping up

Project Rome breaks down barriers by changing notions about what an “app” is and focusing on the user no matter where they are working or what device they are using. An app no longer has to be something tied to a given device; instead, it can be something that exists between your devices and is optimized for the right device at the right time. Today, Project Rome works on Windows 10, Android and iOS. Stay tuned to see what comes next.

To learn more about Project Rome, check out the links below.

Project Rome: Driving user engagement across devices, apps and platforms

Overview

Recently there has been a dramatic shift in the way users use their devices. Rather than replacing their PCs, users are adding form factors like phones and tablets alongside them, and many consumers now live in a heterogeneous environment, interacting with multiple platforms. In the past, form factor drove distinct behavioral differences among consumers. That is no longer the case: multi-device consumers conduct all their activities across all their devices, and they want to be able to use whatever screen is available, regardless of where the input comes from.

As users purchase more apps and devices, they naturally expect their lives to become better: simpler, more enjoyable, more productive. Instead, they are faced with some harsh realities: each device has a clear boundary, and moving anything between devices requires unnatural actions such as emailing files to oneself or using USB sticks.

In addition, as users move between their devices, important tasks sometimes get lost in the context switching. Developers suffer from this context switching as well, since they lose user engagement whenever their users move between devices and apps.

For users, working seamlessly and productively across this heterogeneous ecosystem of devices is complex, and they run into friction when moving between their devices. Project Rome aims to remove this complexity and friction by furthering the Microsoft vision of mobility of experiences across the user’s devices.

Microsoft’s vision of mobility of experiences is to create fluidity: experiences that move wherever the user goes, enhancing what they are doing without getting in the way. Mobility of experiences spans a broad range of areas, from new hardware form factors to intelligent assistance and many more. Within that broad range, Project Rome is intended to deliver fundamental advances in a way that accrues value across all the other areas.

What is Project Rome?

Project Rome aims to deliver a more personal OS for the next generation of computing.

Project Rome consists of:

  • A programming model delivered as APIs for Windows, Android, iOS, and Microsoft Graph, enabling client and cloud apps to build experiences using the Project Rome capabilities.
  • A set of infrastructure services in the Microsoft cloud for Windows-based, and cross-platform devices.
  • A device runtime for connecting and integrating Windows-based and cross-platform devices to the Project Rome infrastructure services.

Our vision with Project Rome is to deliver a personal operating system that is not tied to a device or a platform. Imagine a world where it does not matter what device, platform or form factor you or your users are on: the task or project you are working on simply travels with you.

Windows connects to Microsoft Graph!

Microsoft Graph exposes multiple APIs from Microsoft cloud services through a single endpoint: https://graph.microsoft.com. Microsoft Graph simplifies queries that would otherwise be more complex, and it is a great benefit to developers, who can use a single endpoint to access Microsoft data rather than calling different endpoints and dealing with multiple authentication schemes and data formats.

You can use Microsoft Graph to:

  • Access data from multiple Microsoft cloud services, including Azure Active Directory, Exchange Online as part of Office 365, SharePoint, OneDrive, OneNote and Planner.
  • Navigate between entities and relationships.
  • Access intelligence and insights from the Microsoft cloud (for commercial users).

With the Windows Fall Creators Update, through Project Rome, Windows connects to Microsoft Graph by adding new entities to the Microsoft Graph API set: devices and activities.

Using Microsoft Graph REST endpoints, developers can now access the devices that belong to their users. In addition to Microsoft devices (PCs, Windows Phones, Xbox, IoT devices, HoloLens, etc.), the device graph also exposes Android and iOS devices, enabling developers to truly break down the boundaries between devices.
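
For example, listing a user’s devices is a single authenticated GET. The snippet below is a hedged sketch from an iOS app: the /beta/me/devices path follows the Microsoft Graph preview documentation of the time, and the accessToken variable (an OAuth token with the appropriate Graph scope) is assumed to exist; treat both as assumptions to verify against the current documentation.

// Hypothetical sketch: querying the user's device graph over Microsoft Graph REST.
// The endpoint path is based on the beta/preview documentation and may change;
// accessToken is assumed to be an OAuth token with the appropriate Graph scope.
NSURL* devicesUrl = [NSURL URLWithString:@"https://graph.microsoft.com/beta/me/devices"];
NSMutableURLRequest* graphRequest = [NSMutableURLRequest requestWithURL:devicesUrl];
[graphRequest setValue:[NSString stringWithFormat:@"Bearer %@", accessToken]
    forHTTPHeaderField:@"Authorization"];

[[[NSURLSession sharedSession] dataTaskWithRequest:graphRequest
    completionHandler:^(NSData* data, NSURLResponse* response, NSError* error) {
        if (error != nil || data == nil) {
            // Handle network failure.
            return;
        }
        NSDictionary* json = [NSJSONSerialization JSONObjectWithData:data
                                                             options:0
                                                               error:nil];
        // Each entry in "value" describes one device in the user's device graph.
        NSLog(@"Devices: %@", json[@"value"]);
    }] resume];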

Device graph

Project Rome exposes two APIs that developers can use to drive engagement between two or more active devices: RemoteSystems and RemoteSessions.

RemoteSystems

We have talked about RemoteSystems APIs in previous blog posts. RemoteSystems enables developers to:

  1. Discover and connect to the user’s devices in proximity, or through the cloud
  2. Remotely launch apps on these devices
  3. Send messages to their apps on these devices

Using these capabilities, developers can build apps that can leverage the user’s environment and create rich experiences that transcend a single device. Below are some use cases of how developers could use these APIs:

  • Extend the experience: A developer could extend their app to launch on a bigger screen that may be more suited for the task at hand

  • Augment the experience: A developer could create a companion experience for their app on another of the user’s devices. This can aid in providing another view of functionality in their app

  • Enrich the experience: A developer could add additional controlling abilities to their app. An example of this could be where a developer provides remote control abilities for their main app from a companion device

The RemoteSystems APIs are now available for Windows, Android, iOS and MS Graph!

RemoteSessions

RemoteSystems enables developers to create single-user experiences where developers can tap into the user’s devices and provide experiences that transcend a single device. However, there are use cases that require developers to create experiences that are multi-user.

Starting with the Windows Fall Creators update, we are excited to announce the availability of the RemoteSessions APIs. The RemoteSessions APIs enable developers to create collaborative experiences for multiple users in proximity.

Here is a use case where developers can use these APIs:

  • Multi-user collaboration: Developers could create experiences where multiple users in proximity start a session together and collaborate. Examples include multiple users editing a photo, a video or a piece of music together, or playing a game together

The RemoteSessions APIs are available in the Windows Fall Creators update. The Android and iOS implementations of these APIs are coming soon.

Below is a reference table of the capabilities enabled through the Project Rome device graph.

                             RemoteSystems    RemoteSessions
Windows                      X                X
Android                      X
iOS                          X
Microsoft Graph REST APIs    X

Activity graph

Starting with the Windows Fall Creators Update, we are releasing the UserActivity APIs to enable developers to drive engagement in their apps across devices and platforms. A UserActivity is the unit of user engagement in Windows and consists of three components: a deep link, visuals and content metadata. When an application creates a UserActivity session, the activity begins to accrue engagement records as users interact with the application.

When an application publishes UserActivity objects, the UserActivity object will show up in some of the new UI surfaces in Windows, for example, Cortana Notifications and Timeline. Developers can specify both rich metadata (to allow activities to be presented in just the right context) and rich visuals (using Adaptive Card markup) in their UserActivity objects.

Applications can publish UserActivity objects via the Windows.ApplicationModel.UserActivity UWP classes, or integrate directly with the Activity Graph via REST APIs as part of MS Graph. Using the MS Graph API allows applications to publish UserActivity objects even from other platforms.
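
As a concrete illustration of that cross-platform path, the sketch below publishes an activity from an iOS app through Microsoft Graph. The PUT-by-appActivityId pattern, the /beta/me/activities path, the payload field names and the graphAccessToken variable are all assumptions modeled on the Graph activities API of the time; verify them against the current documentation before relying on them.

// Hypothetical sketch: publishing a UserActivity via Microsoft Graph REST from iOS.
// Endpoint path, payload fields and graphAccessToken are assumptions to verify.
NSDictionary* activity = @{
  @"appActivityId"      : @"/playback/track/42",
  @"activitySourceHost" : @"https://www.contoso.com",
  @"appDisplayName"     : @"Contoso Music",
  @"activationUrl"      : @"contoso-music://play?trackId=42",
  @"userTimezone"       : @"America/Los_Angeles",
  @"visualElements"     : @{ @"displayText" : @"Listening on Contoso Music" }
};

// The appActivityId is URL-encoded into the resource path for an upsert-style PUT.
NSURL* activityUrl = [NSURL URLWithString:
    @"https://graph.microsoft.com/beta/me/activities/%2Fplayback%2Ftrack%2F42"];
NSMutableURLRequest* putRequest = [NSMutableURLRequest requestWithURL:activityUrl];
putRequest.HTTPMethod = @"PUT";
[putRequest setValue:[NSString stringWithFormat:@"Bearer %@", graphAccessToken]
    forHTTPHeaderField:@"Authorization"];
[putRequest setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
putRequest.HTTPBody = [NSJSONSerialization dataWithJSONObject:activity options:0 error:nil];

[[[NSURLSession sharedSession] dataTaskWithRequest:putRequest
    completionHandler:^(NSData* data, NSURLResponse* response, NSError* error) {
        // A 2xx response means the activity now appears in the user's activity graph
        // and can light up surfaces like Timeline on the user's Windows devices.
    }] resume];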

New in Windows Fall Creators update

We are excited to announce new capabilities enabled through Project Rome in the Fall Creators update. Some of these will have a dedicated blog post soon that will show you how to use these APIs and capabilities.

Remote Sessions

While the RemoteSystems APIs enable developers to launch apps on, and exchange messages with, devices belonging to the user, the RemoteSessions APIs enable developers to create multi-user experiences in proximity. With the RemoteSessions APIs, developers can discover other devices in proximity and start a collaborative session.

Microsoft Graph REST APIs

We have released the Project Rome APIs for Windows and Android as client SDKs for those platforms. With the Fall Creators update, we are now enabling these cross-device experiences through Microsoft Graph-based REST APIs, so developers can use a common endpoint to access the device graph and send commands to those devices remotely. The Microsoft Graph-based REST APIs are particularly useful when you want to access Project Rome capabilities from, say, a web page, a service, a headless device or even a browser extension.

With the Fall Creators update, we are releasing the following capabilities:

  • Device Discovery
  • Remote Launch
  • Remote App Services (messaging)
  • User Activity

User Activity APIs

During Build 2017, we showcased a few Windows Shell experiences that drive reengagement across apps, devices and platforms. These experiences were Windows Timeline and Cortana Notifications. These experiences enable users to continue the task that they were working on across devices and apps. Developers can plug into these experiences by using the UserActivity APIs. We are releasing the UserActivity APIs in the Windows SDK for the Fall Creators update. We are also releasing these APIs through Microsoft Graph REST APIs.

iOS SDK

We are excited to announce the availability of the Project Rome iOS SDK. With this SDK, apps can now remotely launch experiences on other Windows devices. In this update, we have provided Objective-C-based projections of the Project Rome device runtime. Other capabilities, such as app service-based messaging and remote sessions, are coming in a future update.

Android SDK

We announced the Project Rome Android SDK in February of this year, and we updated it a few weeks ago. Today we are excited to announce another update to the Project Rome Android SDK. In this update, we have added Bluetooth client and RFCOMM-based transport support, which means your apps can now discover Windows devices in proximity using Bluetooth, in addition to discovery over Wi-Fi or LAN.

Summary

Project Rome breaks down barriers across all devices and creates experiences that are no longer constrained to a single device.

To learn more and browse sample code, including the snippets shown above, please check out the following articles and blog posts: