The word is out, and the industry is taking notice. Azure Cosmos DB is the world’s first globally distributed, multi-model database service with native NoSQL support. Designed for the cloud, Azure Cosmos DB enables you to build planet-scale applications that bring data to where your users are, with SLA-backed guarantees for low latency, throughput, and 99.99% availability.
The experts at IDG’s InfoWorld recently recognized Azure Cosmos DB in the InfoWorld Technology of the Year Awards, zeroing in on its “innovative approach to the complexities of building and managing distributed systems,” which includes recognition for leveraging the work of Turing Award winner Leslie Lamport to deliver multiple consistency models. Azure Cosmos DB was also recognized for delivering a globally distributed system where users anywhere in the world can see the same version of data, no matter their location.
In addition, InfoWorld complimented Azure Cosmos DB’s flexibility and the variety of use cases it supports, from JSON-based document stores to support for MongoDB APIs and a SQL query option for Azure’s Table Storage.
“Do you need a distributed NoSQL database with a choice of APIs and consistency models? That would be Microsoft’s Azure Cosmos DB.”—InfoWorld, Technology of the Year 2018: The best hardware, software, and cloud services
InfoWorld noted that 2017 was “the year when you could pick a database without making huge compromises,” exactly the advantage of the multiple consistency models available in Azure Cosmos DB. With five distinct options, you no longer have to choose between slow-but-accurate and fast-but-inaccurate data.
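Those five levels form a spectrum from the strongest guarantees (at higher latency) to the lowest latency (at weaker guarantees). A minimal sketch of that spectrum; the one-line descriptions are paraphrased summaries for illustration, not official definitions:

```python
# Cosmos DB's five consistency levels, ordered from strongest
# guarantees to lowest latency. Descriptions are paraphrased.
CONSISTENCY_LEVELS = [
    ("Strong", "linearizable reads; always the latest committed write"),
    ("Bounded Staleness", "reads lag writes by at most K versions or T seconds"),
    ("Session", "read-your-own-writes within a client session"),
    ("Consistent Prefix", "reads never observe out-of-order writes"),
    ("Eventual", "no ordering guarantee; lowest latency"),
]

def strictness_rank(level: str) -> int:
    """Lower rank = stronger consistency guarantee."""
    names = [name for name, _ in CONSISTENCY_LEVELS]
    return names.index(level)
```

The point of the spectrum is that the middle options (session consistency in particular) let an application get predictable behavior without paying the full latency cost of strong consistency.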
Learn more in our free e-book, go hands-on with real-time personalization scenarios, get $200 in credit to try Azure Cosmos DB with a free Azure account, or simply try Azure Cosmos DB right now.
Along with Azure Cosmos DB, InfoWorld also honored Microsoft’s Project Olympus in their 2018 awards, calling out the open hardware design from Microsoft for helping the Open Compute Project push forward the development of cloud-scale hardware. Complex workloads are driving datacenters to diversify hardware, and Project Olympus designs are flexible with multiple compute configurations and a new open-source standard available to any manufacturer.
Learn more about Project Olympus deployment on Azure.
Azure ExpressRoute allows enterprise customers to privately and directly connect to Microsoft’s cloud services, providing a more predictable networking experience than traditional internet connections. ExpressRoute is available in 42 peering locations globally and is supported by a large ecosystem of more than 100 connectivity providers. Leading customers use ExpressRoute to connect their on-premises networks to Azure, as a vital part of managing and running their mission critical applications and services.
Cisco to build Azure ExpressRoute practice
As we continue to grow the ExpressRoute experience in Azure, we’ve found our enterprise customers benefit from understanding networking issues that occur in their internal networks with hybrid architectures. These issues can impact their mission-critical workloads running in the cloud.
To help address on-premises issues, which often require deep technical networking expertise, we continue to partner closely with Cisco to provide a better customer networking experience. Working together, we can solve the most challenging networking issues encountered by enterprise customers using Azure ExpressRoute.
Today, Cisco announced an extended partnership with Microsoft to build a new network practice providing Cisco Solution Support for Azure ExpressRoute. We are fully committed to working with Cisco and other partners with deep networking experience to build and expand on their networking practices and help accelerate our customers’ journey to Azure.
Cisco Solution Support provides customers with additional centralized options for support and guidance for Azure ExpressRoute, targeting the customer’s on-premises end of the network.
New monitoring options for ExpressRoute
To provide more visibility into ExpressRoute network traffic, Network Performance Monitor (NPM) for ExpressRoute will be generally available in six regions in mid-February, following a successful preview announced at Microsoft Ignite 2017. NPM enables customers to continuously monitor their ExpressRoute circuits and alert on several key networking metrics, including availability, latency, and throughput, in addition to providing a graphical view of the network topology.
NPM for ExpressRoute can easily be configured through the Azure portal to quickly start monitoring your connections.
We will continue to enhance the footprint, features, and functionality of NPM for ExpressRoute to provide richer monitoring capabilities.
Endpoint monitoring for ExpressRoute enables customers to monitor connectivity not only to PaaS services such as Azure Storage but also SaaS services such as Office 365 over ExpressRoute. Customers can continuously measure and alert on the latency, jitter, packet loss and topology of their circuits from any site to PaaS and SaaS services. A new preview of Endpoint Monitoring for ExpressRoute will be available in mid-February.
Simplifying ExpressRoute peering
To further simplify management and configuration of ExpressRoute, we have merged public and Microsoft peerings. Now available on Microsoft peering are Azure PaaS services such as Azure Storage and Azure SQL, along with Microsoft SaaS services (Dynamics 365 and Office 365). Access to your Azure virtual networks remains on private peering.
Figure 2: ExpressRoute with Microsoft peering and private peering
ExpressRoute, using BGP, provides Microsoft prefixes to your internal network. Route filters allow you to select the specific Office 365 or Dynamics 365 services (prefixes) accessed via ExpressRoute. You can also select Azure services by region (e.g. Azure US West, Azure Europe North, Azure East Asia). Previously this capability was only available on ExpressRoute Premium. We will be enabling Microsoft peering configuration for standard ExpressRoute circuits in mid-February.
New ExpressRoute locations
ExpressRoute is always configured as a redundant pair of virtual connections across two physical routers. This highly available connection enables us to offer an enterprise-grade SLA. We recommend that customers connect to Microsoft in multiple ExpressRoute locations to meet their Business Continuity and Disaster Recovery (BCDR) requirements. Previously this required customers to have ExpressRoute circuits in two different cities. In select locations we will provide a second ExpressRoute site in a city that already has an ExpressRoute site. A second peering location is now available in Singapore. We will add more ExpressRoute locations within existing cities based on customer demand. We’ll announce more sites in the coming months.
It’s no surprise that top cloud providers, Amazon Web Services, Microsoft Azure and Google, continued to dominate technology headlines in 2017. This year, we saw these cloud giants perform the same one-upmanship around tools, services and prices that we have in the past — but this time, with a sharper focus on technologies such as containers and hybrid cloud.
Before you head into 2018, refresh your memory of SearchCloudComputing’s top news from the past year:
Amazon, Microsoft crave more machine learning in the cloud
All the top cloud providers see the importance of machine learning, and Amazon Web Services and Microsoft Azure put their differences aside in October to jointly create Gluon, an open source deep learning interface based on Apache MXNet. This new library is intended to make AI technologies more accessible to developers and help them more easily create machine learning models. In the future, Gluon will work with Microsoft Cognitive Toolkit.
Meanwhile, Google Cloud Platform offers TensorFlow, another open source library for machine learning. While TensorFlow is a formidable opponent, some developers shy away from it due to its complexities.
The main problem that all providers face in this space is that the public cloud isn’t always the best environment for complex machine learning workloads, due to cost, data gravity, or a lack of skills. Some data scientists continue to use the public cloud to test, but then run the workloads on premises.
Google hybrid cloud strategy crystallizes with Nutanix deal
While cloud is popular, many workloads are still kept on premises — either due to their design or compliance issues. Top cloud providers continue to seek partnerships to target the hybrid market and ease the gap between data centers and the cloud.
The Amazon and VMware deal tends to be the most common example of this. But in June 2017, Google partnered with Nutanix to fuel its own hybrid efforts. Next year, customers will be able to manage and deploy workloads between the Google public cloud and their own hyper-converged infrastructure from a single interface. This partnership will also extend Google cloud services, such as BigQuery, to Nutanix customers, and enable customers to use Nutanix boxes as edge devices.
Kubernetes on Azure hints at hybrid cloud endgame
One of containers’ main advantages is enhanced portability between cloud platforms — a feature that’s especially attractive to hybrid cloud users. In February 2017, Microsoft unveiled the general availability of Kubernetes on Azure Container Service (ACS, later rebranded as AKS), making it the first public cloud provider to support all the major container orchestration engines: Kubernetes, Mesosphere’s DC/OS and Docker Swarm.
The move was one that could especially benefit hybrid cloud users because both Docker Swarm and Kubernetes enable teams to manage containers that run on multiple platforms from a single location. In October, Azure rolled out a new managed Kubernetes service, and rebranded ACS as AKS. AWS countered in November with Amazon Elastic Container Service for Kubernetes, a managed service.
Azure migration takes hostile approach to lure VMware apps
To compete with VMware Cloud on AWS, Microsoft released a similar service for Azure in November 2017 — without VMware support.
Azure Migrate enables enterprises to analyze their on-premises environment, discover dependencies and more easily migrate VMware workloads into the Azure public cloud. A bare-metal subset of the service, VMware virtualization on Azure, is expected to be available in 2018, and enables users to run a VMware stack on top of Azure hardware. While the service is based on a partnership with unnamed VMware partners, and involves VMware-certified hardware, the development of it didn’t directly involve VMware itself, and cuts the vendor out of potential revenues. VMware has since said that it will not recommend or support the product.
Cloud pricing models reignite IaaS provider feud
The price war continued in 2017, but the top cloud providers changed their tune: instead of direct cuts, they altered their pricing models. AWS abandoned its per-hour billing in favor of per-second billing, to counter per-minute billing from Google and Azure. Google quickly responded with its own shift to a per-second billing model.
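The difference granularity makes is easy to quantify. A quick sketch with a hypothetical $0.10/hour instance rate (illustrative only; real per-second billing also carries minimum charges, such as AWS’s 60-second floor, which are omitted here):

```python
import math

HOURLY_RATE = 0.10  # hypothetical instance price in $/hour, for illustration

def per_hour_cost(seconds: float) -> float:
    # Hourly billing rounds any partial hour up to a full hour.
    return math.ceil(seconds / 3600) * HOURLY_RATE

def per_second_cost(seconds: float) -> float:
    # Per-second billing charges exactly for the time used.
    return seconds * HOURLY_RATE / 3600

# A 90-second batch job costs a full hour's charge under hourly
# billing, but only a fraction of a cent under per-second billing.
```

For short-lived workloads like batch jobs and autoscaled fleets, the granularity change matters far more than a headline price cut would.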
Microsoft, for its part, added a Reserved VM Instances option to Azure, which provides discounts to customers that purchase compute capacity in advance for a one- or three-year period. The move was a direct shot at AWS’ Elastic Compute Cloud Reserved Instances, which follow a similar model.
Microsoft Azure has already solidified its position as the second most popular public cloud, and critical additions in 2017 brought the Azure feature set closer to parity with AWS.
In some cases, Azure leapfrogged its competition. But a bevy of similar products bolstered the platform as a viable alternative to Amazon Web Services (AWS). Some Microsoft initiatives broadened the company’s database portfolio. Others lowered the barrier to entry for Azure, and pushed further into IoT and AI. And Azure Stack, the long-awaited on-premises machine, seeks to tap surging interest from enterprises not yet ready to make their private data centers obsolete.
Like all the major public cloud providers, Microsoft Azure doubled down on next-generation applications that rely on serverless computing and machine learning. Among the new products are Machine Learning Workbench, intended to improve productivity in developing and deploying AI applications, and Azure Event Grid, which helps route and filter events in serverless architectures. Some important upgrades to Azure IoT Suite included managed services for analytics on data collected through connected devices, and Azure IoT Edge, which extends Azure functionality to connected devices.
Many of those Azure features are too advanced for most corporations that lack a team of data scientists. However, companies have begun to explore other services that rely on these underlying technologies in areas such as vision, language and speech recognition.
AvePoint, an independent software vendor in Jersey City, N.J., took note of the continued investment by Microsoft this past year in its Azure Cognitive Services, a turnkey set of tools to get better results from its applications.
“If you talk about business value that’s going to drive people to use the platform, it’s hard to find a more business-related need than helping people do things smartly,” said John Peluso, Microsoft regional director at AvePoint.
Microsoft also joined forces with AWS on Gluon, an open source, deep learning interface intended to simplify the use of machine learning models for developers. And the company added new machine types that incorporate GPUs for AI modeling.
Azure compute and storage get some love, too
Microsoft’s focus wasn’t solely on higher-level Azure services. In fact, the areas in which it caught up the most with AWS were in its core compute and storage capabilities.
The B-Series VMs are the cheapest machines available on Azure, designed for workloads that don’t always need full CPU performance, such as test and development or web servers. But more importantly, they provide an on-ramp to the platform for those who want to sample Azure services.
Other Azure feature additions included the M-Series machines, which support SAP workloads with up to 20 TB of memory; a new bare-metal VM; and the incorporation of Kubernetes into Azure’s container service.
“I don’t think anybody believes they are on par [with AWS] today, but they have momentum at scale and that’s important,” said Deepak Mohan, an analyst at IDC.
In storage, Managed Disks is a new Azure feature that handles storage resource provisioning as applications scale. Archive Storage provides a cheap option to house data as an alternative to Amazon Glacier, as well as a standard access model to manage data across all the storage tiers.
Reserved VM Instances emulate AWS’ popular Reserved Instances to provide significant cost savings for advance purchases, with deeper discounts for customers that link the machines to their Windows Server licenses. Azure also added low-priority VMs, the equivalent of AWS Spot Instances, which can provide even further savings but should be limited to batch-type jobs because they can be preempted.
The addition of Azure Availability Zones was a crucial update for mission-critical workloads that need high availability. It brings greater fault tolerance to the platform through the ability to spread workloads across isolated locations within a region and achieve a guaranteed 99.99% uptime.
“It looks to me like Azure is very much openly and shamelessly following the roadmap of AWS,” said Jason McKay, senior vice president and CTO at Logicworks, a cloud managed service provider in New York.
And that’s not a bad thing, because Microsoft has always been good at being a fast follower, McKay said. There’s a fair amount of parity in the service catalogs for Azure and AWS, though Azure’s design philosophy is a bit more tightly coupled between its services. That means potentially slightly less creativity, but more functionality out of the box compared to AWS, McKay said.
Databases and private data centers
Azure Database Migration Service has helped customers transition from their private data centers to Azure. Microsoft also added full compatibility between SQL Server and the fully managed Azure SQL database service.
Azure Cosmos DB, a fully managed NoSQL cloud database, may not see a wave of adoption any time soon, but has the potential to be an exciting new technology to manage databases on a global scale. And in Microsoft’s continued evolution to embrace open source technologies, the company added MySQL and PostgreSQL support to the Azure database lineup as well.
The company also improved management and monitoring, incorporating tools from Microsoft’s acquisition of Cloudyn, and added security features. Azure confidential computing encrypts data while in use, in addition to encryption options at rest and in transit, while Azure Policy added new governance capabilities to enforce corporate rules at scale.
Other important security upgrades include Azure App Service Isolated, which made it easier to install dedicated virtual networks in the platform-as-a-service layer. The Azure DDoS Protection service guards against DDoS attacks, new capabilities put firewalls around data in Azure Storage, and virtual network service endpoints limit data’s exposure to the public internet when accessing multi-tenant Azure services.
Azure Stack’s late arrival
Perhaps Microsoft’s biggest cloud product isn’t part of its public cloud. After two years of fanfare, Azure Stack finally went on sale in late 2017. It brings many of the tools found on the Azure public cloud into private facilities, for customers that have higher regulatory demands or simply aren’t ready to vacate their data centers.
“That’s a huge area of differentiation for Microsoft,” Mohan said. “Everybody wants true compatibility between services on premises and services in the cloud.”
Rather than build products that live on premises, AWS joined with VMware to build a bridge for customers that want their full VMware stack on AWS either for disaster recovery or extension of their data centers. Which approach will succeed depends on how protracted the shift to public cloud becomes — and a longer delay in that shift favors Azure Stack, Mohan said.
Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at email@example.com.
The preview of Microsoft Azure in France is open today to all customers, partners and ISVs worldwide giving them the opportunity to deploy services and test workloads in these latest Azure regions. This is an important step towards offering the Azure cloud platform from our datacenters in France.
The new Azure regions in France are part of our global portfolio of 42 announced regions, which offer the scale needed to bring applications closer to users and customers around the world. We continue to prioritize geographic expansion of Azure to enable higher performance and availability, meet local regulatory requirements, and support customer preferences regarding data location. The new regions will offer the same enterprise-grade reliability and performance as our globally available services, combined with data residency to support the digital transformation of businesses and organizations in France.
The new France Central region offers Azure Availability Zones which provide comprehensive native business continuity solutions and the highest availability in the industry with a 99.99% virtual machines uptime SLA when generally available. Availability Zones are fault-isolated locations within an Azure region, providing redundant power, cooling, and networking for higher availability, increased resiliency and business continuity. Starting with preview, customers can architect highly available applications and increase their resiliency to datacenter level failures by deploying IaaS resources across Availability Zones in France Central. Availability Zones in France Central can be paired with the geographically separated France South region for regional disaster recovery while maintaining data residency requirements.
You can follow these links to sign up for the Azure Preview in France, learn more about the Microsoft Cloud in France, or learn more about Azure Availability Zones.
The Azure IoT Hub Device Provisioning Service is now available with the same great support you’ve come to know and expect from Azure IoT services. The Device Provisioning Service enables customers to configure zero-touch device provisioning to Azure IoT Hub, and it brings the scalability of the cloud to what was once a laborious one-at-a-time process. The Device Provisioning Service was designed with the challenges of the supply chain in mind, providing the infrastructure needed to provision millions of devices in a secure and scalable manner.
With general availability comes expanded protocol support. Automatic device provisioning with the Device Provisioning Service now supports all protocols that IoT Hub supports, including HTTP, AMQP, MQTT, AMQP over WebSockets, and MQTT over WebSockets. This release also expands SDK language support on both the device and service sides: we now offer SDKs in C, C#, Java, Node (service for now, device coming soon), and Python (device for now, service coming soon). Get started with the Device Provisioning Service with the quick start tutorials.
The Device Provisioning Service works in a wide variety of scenarios:
Zero-touch provisioning to a single IoT solution without requiring hardcoded IoT Hub connection information in the factory (initial setup).
Automatically configuring devices based on solution-specific needs.
Load balancing devices across multiple hubs.
Connecting devices to their owner’s IoT solution based on sales transaction data (multitenancy).
Connecting devices to a specific IoT solution depending on use-case (solution isolation).
Connecting a device to the IoT hub with the nearest geo-location.
Re-provisioning based on a change in the device, such as a change in ownership or location.
The Device Provisioning Service is flexible enough to support all those scenarios using the same basic flow:
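That basic flow is a lookup-and-assign handshake: the device presents its registration identity, the service checks it against an enrollment list, applies an allocation policy, and hands back the IoT hub the device should connect to. The sketch below is a conceptual mock of that flow, not the real Azure SDK; all names, the enrollment data, and the toy region-based allocation policy are illustrative:

```python
# Conceptual mock of the provisioning flow (NOT the Azure SDK):
# enrollment records and hub endpoints are illustrative placeholders.
ENROLLMENTS = {
    "device-001": {"allowed": True, "region": "eu"},
    "device-002": {"allowed": True, "region": "us"},
}
HUBS = {"eu": "eu-hub.azure-devices.net", "us": "us-hub.azure-devices.net"}

def provision(registration_id: str) -> str:
    """Return the hub a device should connect to, or refuse it."""
    enrollment = ENROLLMENTS.get(registration_id)
    if enrollment is None or not enrollment["allowed"]:
        raise PermissionError("device not enrolled")
    # Allocation policy: route the device to the hub for its region.
    # Real policies also cover load balancing and geo-proximity.
    return HUBS[enrollment["region"]]
```

The same shape covers the scenarios above: swapping the allocation policy changes whether devices are load-balanced, isolated per solution, or routed by geography, without changing anything on the device side.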
We’ve made it easier than ever to use hardware-based security with the Device Provisioning Service device SDKs. We offer in-box support for different kinds of hardware security modules (HSMs), and we have partnerships with several hardware manufacturers to help our customers be as secure as possible. You can learn more about the hardware partnerships by reading the blog post Provisioning for true zero-touch secure identity management for IoT, and you can learn more about HSMs by reading the blog post Azure IoT supports new security hardware to strengthen IoT security. The SDKs are extensible to support other HSMs, and you can learn more about how to use your own custom HSM with the device SDKs. While using an HSM is not required to use the Device Provisioning Service, we strongly recommend using one in your devices. The SDKs provide a TPM simulator and a DICE simulator (for X.509 certs) for development and testing purposes. Learn more about all the technical concepts involved in device provisioning.
Azure IoT is committed to offering you services which take the pain out of deploying and managing an IoT solution in a secure, reliable way. To learn more please watch the videos What is the Device Provisioning Service and Provisioning a real device. You can create your own Device Provisioning Service on the Azure portal, and you can check out the device SDKs on GitHub. Learn all about the Device Provisioning Service and how to use it in the documentation center. We would love to get your feedback on secure device registration, so please continue to submit your suggestions through the Azure IoT User Voice forum.
To sum things up with a limerick:
Come join us in our celebration
Of IoT auto-registration
It’s generally available
Full-featured and capable
For your devices’ automation
Microsoft is launching a public preview of a location-based Azure cloud service that’s designed to integrate well with Internet of things deployments and asset tracking.
Azure Location Based Services will be powered by TomTom’s online APIs, but can leverage other location technologies in the future. Azure LBS will use the same billing, accounts, and APIs as other Azure services.
Microsoft’s aim is to give cloud developers geospatial data that can be integrated with smart city and Internet of things deployments. Target industries include manufacturing, automotive, logistics, smart cities and retail. A year ago, Microsoft laid out plans to integrate geographic data with Azure.
Sam George, partner director of Microsoft Azure IoT, said Microsoft LBS is aimed at providing one dashboard to manage services and templates enterprises can use to track assets. “As cloud and IoT transform businesses, geospatial data capabilities are needed for connected devices and assets,” said George. “Many of these assets move and monitoring and viewing them in a location is important. It’s part of a broader IoT digital feedback loop.”
The capabilities in Azure LBS (mapping, search, routing, traffic and time zones) are designed to be used for everything from asset tracking for transportation fleets to autonomous driving.
Cubic Telecom, an Irish telecommunications company for the automotive industry, built a proof of concept that uses Azure LBS to visualize existing locations of electric vehicle charging stations. Here’s a look at Cubic Telecom’s charging station finder.
Fathym, an IoT company, is using Azure LBS to visualize road conditions for Alaska’s department of transportation. Fathym’s road and route weather forecasting will be introduced at the LA Auto Show.
Azure LBS can be used as part of a broader suite or as a standalone service. Azure LBS will have consumption-based pricing, and George noted that enterprise location data is private. For the public preview, Azure Location Based Services will offer a two-tiered pricing model: a set of free transactions per account, and then $0.50 per 1,000 transactions.
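Under that model, estimating a bill is simple arithmetic. A sketch of the calculation; the size of the free tier isn’t stated in the article, so it’s a parameter here, and rounding billable transactions up to the next 1,000 is our assumption for illustration:

```python
import math

PRICE_PER_1000 = 0.50  # public preview price per 1,000 transactions

def monthly_cost(transactions: int, free_quota: int) -> float:
    # free_quota: the per-account free transaction allowance (its size
    # isn't stated in the announcement). Rounding up to the next 1,000
    # billable transactions is an assumption for illustration.
    billable = max(0, transactions - free_quota)
    return math.ceil(billable / 1000) * PRICE_PER_1000
```

For example, with a hypothetical 10,000-transaction free tier, 12,500 transactions in a month would bill 2,500 transactions, rounded up to three blocks of 1,000.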
Today we announced the Public Preview availability of Azure Location Based Services (LBS). LBS is a portfolio of geospatial service APIs natively integrated into Azure that enable developers, enterprises and ISVs to create location-aware apps and IoT, mobility, logistics and asset tracking solutions. The portfolio currently comprises services for Map Rendering, Routing, Search, Time Zones and Traffic. In partnership with TomTom and in support of our enterprise customers, Microsoft has added native location capabilities to the Azure public cloud.
Azure LBS uses key-based authentication. To get a key, go to the Azure portal and create an Azure LBS account. Creating an Azure LBS account automatically generates two Azure LBS keys; either key will authenticate requests to the various Azure LBS services. Once you have your account and your keys, you’re ready to start accessing Azure Location Based Services. And the API model is simple to use: parameterize your URL request to get rich responses from the service:
Sample Address Search Request: atlas.microsoft.com/search/address/json?api-version=1&query=1 Microsoft Way, Redmond, WA
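In code, that parameterization is just standard URL building. A minimal Python sketch of the address-search request above; passing the account key as a `subscription-key` query parameter is our assumption here, so check the service reference for the exact parameter name:

```python
from urllib.parse import urlencode

def address_search_url(key: str, query: str) -> str:
    # Builds the sample address-search request shown above.
    # "subscription-key" as the name of the key parameter is an
    # assumption for illustration.
    params = {"api-version": "1", "subscription-key": key, "query": query}
    return "https://atlas.microsoft.com/search/address/json?" + urlencode(params)

# Example: address_search_url("MY_KEY", "1 Microsoft Way, Redmond, WA")
# yields the sample request above with the key and an URL-encoded query.
```

The other services shown below (POI search, time zones, routing) follow the same pattern with different paths and parameters.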
Azure Map Control
The Azure Map Control makes it simple for developers to jumpstart their development. By adding a few lines of code to any HTML document, you get a fully functional map.
Hello Azure LBS
In the above code sample, be sure to replace [AZURE_LBS_KEY] with your actual Azure LBS Key created with your Azure LBS Account in the Azure portal.
The Azure LBS Render Service is used for fetching maps. The Render Service is the basis for maps in Azure LBS and powers the visualizations in the Azure Map Control. Users can request vector-based map tiles to render data and apply styling on the client. The Render Service also provides raster maps if you want to embed a map image into a web page or application. Azure LBS maps have high-fidelity geographic information for over 200 regions around the world and are available in 35 languages and two versions of neutral ground truth.
The Azure LBS cartography was designed from the ground up with the enterprise customer in mind. There is less information at lower zoom levels (zoomed out) and higher-fidelity information as you zoom in. The design is meant to let enterprise customers render their data atop Azure LBS maps without extraneous detail bleeding through and disrupting the value of the customer’s data.
The Azure LBS Routing Service is used for getting directions, and not just point A to point B directions. The Routing Service has a slew of map data available to the routing engine, allowing it to modify the calculated directions based on a variety of scenarios. First, the Routing Service provides customers the standard routing capabilities they would expect, with a step-by-step itinerary. The route calculation can choose the fastest or shortest path, or avoid highly congested roads or traffic incidents. Traffic-based routing comes in two flavors: “historic,” which is great for future route-planning scenarios when users would like a general idea of what traffic tends to look like on a given route; and “live,” which is ideal for active routing scenarios when a user is leaving now and wants to know where traffic exists and the best ways to avoid it.
Azure LBS Routing will allow for commercial vehicle routing, providing alternate routes made just for trucks. Commercial vehicle routing supports parameters such as vehicle height, weight, number of axles, and hazardous material contents, all used to choose the best, safest, and recommended roads for transporting a haul. The Routing Service provides a variety of travel modes, including walking, biking, motorcycling, taxi, and van routing.
Customers can also specify up to 50 waypoints along their route if they have predetermined stops to make. If customers want the best order in which to make those stops, they can pass up to 20 waypoints to the Routing Service, and Azure LBS will generate an optimized itinerary for them.
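Multi-stop requests follow the same URL pattern as the other services. A hedged sketch: the colon-separated coordinate list in `query`, the `computeBestOrder` flag, and the `subscription-key` parameter mirror the service’s style, but treat the exact parameter names as assumptions rather than a definitive reference:

```python
from urllib.parse import urlencode

def route_url(key: str, waypoints, best_order: bool = False) -> str:
    # waypoints: sequence of (lat, lon) pairs, origin first and
    # destination last. The colon-separated "query" format and the
    # "computeBestOrder" flag are assumptions for illustration.
    query = ":".join(f"{lat},{lon}" for lat, lon in waypoints)
    params = {"api-version": "1", "subscription-key": key, "query": query}
    if best_order:
        params["computeBestOrder"] = "true"
    # Keep ":" and "," readable instead of percent-encoding them.
    return ("https://atlas.microsoft.com/route/directions/json?"
            + urlencode(params, safe=":,"))
```

With `best_order` set, the service would reorder the intermediate stops into an optimized itinerary rather than visiting them in the order given.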
Using the Azure LBS Route Service, customers can also specify arrival times when they need to be at a specific location by a certain time. Drawing on a massive amount of traffic data (nearly a decade of probe data captured per road geometry at high-frequency intervals), Azure LBS can tell customers the best time of departure for a given day of the week. Additionally, Azure LBS can use current traffic conditions to notify customers of a road change that may impact their route and provide updated times and/or alternate routes.
Azure LBS can also take into consideration the engine type being used. By default, Azure LBS assumes a combustion engine; however, if an electric engine is in use, Azure LBS will accept input parameters for power settings and generate the most energy-efficient route.
The Routing Service also allows multiple alternate routes to be generated in a single query, saving on over-the-wire data transfer. Customers can also specify that they would like to avoid specific route types, such as toll roads, freeways, ferries, or carpool roads.
The Azure LBS Search Service provides the ability for customers to find real world objects and their respective location. The Search Service provides for three major functions:
Geocoding: Finding addresses, places and landmarks
POI Search: Finding businesses based on a location
Reverse Geocoding: Finding addresses or cross streets based on a location
With the Search Service, customers can find addresses and places from around the world. Azure LBS supports address-level geocoding in 38 regions, cascading to house-number, street-level, and city-level geocoding for other regions of the world. Customers can pass addresses to the service in a structured address form, or use an unstructured form when they want to let their users search for addresses, places, or businesses in a single query. Users can restrict their searches by region or bounding box, and can supply a specific coordinate to bias the search results and improve quality. Reversing the query, by providing a coordinate (say, from a GPS receiver), customers can get the nearest address or cross street returned from the service.
The Azure LBS Search Service also allows customers to query for business listings. The Search Service contains hundreds of categories and hundreds of sub-categories for finding businesses or points of interest around a specific point or within a bounding area. Customers can query for businesses based on brand name or general category and filter those results based on location, bounding box or region.
Sample POI Search Request (Key Required): atlas.microsoft.com/search/poi/category/json?api-version=1&query=electric%20vehicle%20station&countrySet=FRA
Time Zone Service
The Azure LBS Time Zone Service is a first of its kind, providing the ability to query time zones and times for locations around the world. Customers can now submit a location to Azure LBS and receive the respective time zone, the current time in that time zone, and the offset from Coordinated Universal Time (UTC). The Time Zone Service provides access to historical and future time zone information, including changes for daylight saving time. Additionally, customers can query for a list of all time zones and the current version of the data, allowing customers to optimize their queries and downloads. For IoT customers, the Azure LBS Time Zone Service allows for POSIX output, so users can download information to devices that only infrequently access the internet. Additionally, for Microsoft Windows users, Azure LBS can transform Windows time zone IDs to IANA time zone IDs.
Sample Time Zone Request (Key Required): atlas.microsoft.com/timezone/byCoordinates/json?api-version=1&query=32.533333333333331,-117.01666666666667
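Once a response comes back, applying the returned UTC offset is plain datetime arithmetic. A small sketch; the `"-08:00"` offset string format is an assumption about the response shape, used here only to show the conversion:

```python
from datetime import datetime, timedelta

def to_local(utc_dt: datetime, offset: str) -> datetime:
    # offset: a UTC offset string such as "-08:00", as a time zone
    # service might return it; the exact response field and format
    # are assumptions for illustration.
    sign = -1 if offset.startswith("-") else 1
    hours, minutes = offset.lstrip("+-").split(":")
    return utc_dt + sign * timedelta(hours=int(hours), minutes=int(minutes))

# Example: noon UTC with a "-08:00" offset (e.g., the coordinates in
# the sample request above, in winter) is 4:00 AM local time.
```

Note that the offset alone is only valid for the queried instant; for dates across a daylight saving transition, the service must be queried again, which is exactly why it exposes historical and future time zone data.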
The Azure LBS Traffic Service provides our customers with the ability to overlay and query traffic flow and incident information. In partnership with TomTom, Azure LBS will have access to a best-in-class traffic product with coverage in 55 regions around the world. The Traffic Service can natively overlay traffic information atop the Azure Map Control for a quick and easy means of viewing traffic issues. Additionally, customers have access to traffic incident information: real-time issues happening on the road, collected through probe data. The incident information provides additional detail such as the type of incident and its exact location. The Traffic Service will also provide our customers with details of incidents and flow, such as the distance and time from one’s current position to the “back of the line,” and, once a user is in the congestion, the distance and time until they’re out of it.