Tag Archives: Services

Monitor Active Directory replication via PowerShell

When Active Directory replication breaks down, administrators need to know quickly to prevent issues with the services and applications that Active Directory oversees.

It is important to monitor Active Directory replication to ensure the process remains healthy. Larger organizations that use Active Directory typically have several domain controllers that rely on replication to synchronize networked objects (users, security groups, contacts and other information) in the Active Directory database. Changes to the database can be made at any domain controller and must then be replicated to the other domain controllers in the Active Directory forest. If the changes are not synchronized to a particular domain controller, or to all domain controllers in an Active Directory site, users in that location might encounter problems.

For example, if an administrator applies a security policy setting via a Group Policy Object to all workstations, all domain controllers in a domain should pick up the GPO changes. If one domain controller in a particular location fails to receive this update, users in that area will not receive the security configuration.

Why does Active Directory replication break?

Active Directory replication can fail for several reasons. If network ports between the domain controllers are not open or if the connection object is missing from a domain controller, then the synchronization process generally stops working.

Domain controllers also rely on the domain name system. If their DNS service (SRV) records are missing, the domain controllers cannot locate and communicate with each other, which causes replication failures.
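As a quick check of this dependency, administrators can query DNS for the domain controller locator records. The following is a minimal sketch using the Resolve-DnsName cmdlet; the domain name contoso.com is a placeholder, so substitute the name of your own Active Directory domain.

# Query the domain controller locator SRV records for the domain (contoso.com is a hypothetical example)
Resolve-DnsName -Name "_ldap._tcp.dc._msdcs.contoso.com" -Type SRV

# A healthy domain returns one SRV record per domain controller; an empty result or
# an error points to the missing service records described above.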

Check Active Directory replication status manually

There are many ways to check the Active Directory replication status manually.

Administrators can run the following string using the command-line repadmin utility to show the replication errors in the Active Directory forest:
repadmin /replsum /bysrc /bydest /errorsonly

Administrators can also use the Get-ADReplicationPartnerMetadata PowerShell cmdlet to check the replication status; the same cmdlet is used in the script later in this article.
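For an ad hoc check, the cmdlet can be run directly. The following is a minimal sketch that queries every partition on every domain controller and lists only the partners whose last replication attempt failed; it assumes the Active Directory PowerShell module is installed on the machine running it.

# List replication partners whose last replication attempt returned an error (result code other than 0)
Import-Module ActiveDirectory
Get-ADReplicationPartnerMetadata -Target * -Partition * |
    Where-Object { $_.LastReplicationResult -ne 0 } |
    Select-Object Server, Partner, Partition, LastReplicationResult, LastReplicationAttempt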

Use a script to check replication health

While larger organizations might have an enterprise tool, such as System Center Operations Manager, to monitor Active Directory, a PowerShell script can be a helpful supplement to alert administrators on the replication status. Because so much of a business relies on a properly functioning Active Directory system, it can’t hurt to implement this script and have it run every day via a scheduled task. If the script finds an error, it will send an alert via email.

The system must meet a few requirements before executing the script:

  • The script must run on a computer that can reach all domain controllers.
  • It is recommended to use a computer running Windows Server 2012 R2, or a Windows 10 computer joined to a domain in the Active Directory forest.
  • The computer must have the Active Directory PowerShell modules installed, as shown in the sketch below.
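To confirm the module requirement, the commands below are a minimal sketch for Windows Server; on Windows 10, the module is installed through the Remote Server Administration Tools instead, so adjust accordingly.

# Check whether the Active Directory module is already available
Get-Module -ListAvailable -Name ActiveDirectory

# On Windows Server, add it if it is missing
Install-WindowsFeature -Name RSAT-AD-PowerShell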

How does the script work?

The PowerShell script uses the Get-ADReplicationPartnerMetadata cmdlet, which connects to the primary domain controller (PDC) emulator in the Active Directory forest and then collects the replication metadata for each domain controller.

The script checks the value of the LastReplicationResult attribute for each domain controller entry. If the value of LastReplicationResult is anything other than zero for any domain controller, the script considers this a replication failure. If an error is found, the script executes the Send-MailMessage cmdlet to send an email with the report attached as a CSV file. The script stores the replication report in C:\Temp\ReplStatus.CSV.

Modify the settings in the script to use the appropriate sender and recipient email addresses, subject line and message body before running it.

PowerShell script to check replication status

The following PowerShell script helps admins monitor Active Directory for these replication errors and delivers the findings via email. Be sure to modify the email settings in the script.

$ResultFile = "C:\Temp\ReplStatus.CSV"

$ADForestName = "TechTarget.com"

# Find the PDC emulator of the forest root domain; it is used as the enumeration server below
$GetPDCNow = Get-ADForest $ADForestName | Select-Object -ExpandProperty RootDomain | Get-ADDomain | Select-Object -Property PDCEmulator

$GetPDCNowServer = $GetPDCNow.PDCEmulator

$FinalStatus = "Ok"

# Export every replication partner whose last replication attempt returned an error (result code other than 0)
Get-ADReplicationPartnerMetadata -Target * -Partition * -EnumerationServer $GetPDCNowServer -Filter {(LastReplicationResult -ne "0")} | Select-Object LastReplicationAttempt, LastReplicationResult, LastReplicationSuccess, Partition, Partner, Server | Export-CSV "$ResultFile" -NoTypeInformation -Append -ErrorAction SilentlyContinue

 

$TotNow = Get-Content $ResultFile

$TotCountNow = $TotNow.Count

# The CSV contains a header row plus one line per replication error,
# so a count of two or more means at least one error was exported
IF ($TotCountNow -ge 2)
{
    $AnyOneOk = "Yes"
    $RCSV = Import-CSV $ResultFile

    ForEach ($AllItems in $RCSV)
    {
        IF ($AllItems.LastReplicationResult -eq "0")
        {
            $FinalStatus = "Ok"
            $TestStatus = "Passed"
            $SumVal = ""
            $TestText = "Active Directory replication is working."
        }
        else
        {
            $AnyGap = "Yes"
            $SumVal = ""
            $TestStatus = "Critical"
            $TestText = "Replication errors occurred. Active Directory domain controllers are causing replication errors."
            $FinalStatus = "NOTOK"
            break
        }
    }
}

$TestText

 

IF ($FinalStatus -eq "NOTOK")
{
    ## Since some replication errors were reported, start the email procedure here...

    ### START - Modify Email parameters here
    $message = @"
Active Directory Replication Status

Active Directory Forest: $ADForestName

Thank you,
PowerShell Script
"@

    $SMTPPasswordNow = "PasswordHere"
    $ThisUserName = "UserName"
    $MyClearTextPassword = $SMTPPasswordNow
    $SecurePassword = ConvertTo-SecureString -String $MyClearTextPassword -AsPlainText -Force
    $ToEmailNow = "EmailAddressHere"
    $EmailSubject = "SubjectHere"
    $SMTPUseSSLOrNot = "Yes"
    $SMTPServerNow = "SMTPServerName"
    $SMTPSenderNow = "SMTPSenderName"
    $SMTPPortNow = "SMTPPortHere"
    ### END - Modify Email parameters here

    $AttachmentFile = $ResultFile

    $creds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "$ThisUserName", $SecurePassword

    Send-MailMessage -Credential $creds -SmtpServer $SMTPServerNow -From $SMTPSenderNow -Port $SMTPPortNow -To $ToEmailNow -Subject $EmailSubject -Attachments $AttachmentFile -UseSsl -Body $message
}

When the script completes, it generates a file that details the replication errors.

Replication error report
The PowerShell script compiles the Active Directory replication errors in a CSV file and delivers those results via email.

Administrators can run this script automatically through the Task Scheduler. Since the script takes about 10 minutes to run, it might be best to set it to run at a time when it will have the least impact, such as midnight.
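As an illustration, the scheduled task can be created from PowerShell itself. The sketch below assumes the script was saved as C:\Scripts\Check-ADReplication.ps1 and uses a hypothetical task name; adjust the path, schedule and account to fit your environment.

# Register a daily task that runs the replication check script at midnight (path and task name are hypothetical)
$Action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Check-ADReplication.ps1"
$Trigger = New-ScheduledTaskTrigger -Daily -At "12:00 AM"
Register-ScheduledTask -TaskName "Check AD Replication" -Action $Action -Trigger $Trigger -RunLevel Highest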

Job searches reveal top skills for tech positions

Job hunters who know the ins and outs of Amazon Web Services, Amazon’s cloud computing platform, have pretty bright prospects, according to new research by jobs website Indeed.com. So do those with Microsoft Azure skills.

People who can use the JavaScript library React are golden.

The report, published Wednesday, highlighted 15 IT skills that job seekers are betting will land them high-paying tech positions — and that employers want new hires to come equipped with.

The skills were culled from terms job hunters typed into the site's search bar and then matched against the terms employers looked for when combing through Indeed's resume database.

React, a popular collection of JavaScript resources for building web UIs, was by far the most searched-for term, with a 313% increase in job seeker interest from October 2016 to September 2017 when compared with the same period a year earlier. A lot more employers were using the term to find new employees, too, with a 229% increase.

“React is becoming something that’s more crucial to the job search,” said Indeed economist Daniel Culbertson, who spearheaded the research. The JavaScript library is managed primarily by social media site Facebook, which he said bolsters its popularity. “It’s becoming a part of the job for more and more companies.”

A distant No. 2 was the term Amazon Web Services — the top-selling cloud infrastructure service — showing a 98% increase in interest among job seekers and a 40% jump among employers. No. 3 was Microsoft’s cloud service, Azure, with the number of job hunters using the term to search for tech positions rising 51% and employers 62%.

Search terms equal IT skills

Despite the gap between No. 1 React and the runners-up, AWS and Azure still grew by “leaps and bounds,” the report read, and the high rankings show the importance of cloud computing in the job market today.

“Cloud is gaining because businesses of all stripes are boosting their use of off-site computing and storage — and that’s making experienced cloud developers a must-have for many employers,” the report continued.

Other search terms on the list were Angular, or properly AngularJS, a Google-managed open source web application framework; Tableau, data visualization software; Spark, a data analytics engine; and the programming language Python.

Culbertson based the report on search terms because the names of the numerous platforms and programming languages in technology can be easily classified as skills since “they’re very integral to jobs.” When users visit Indeed, they type in search terms that describe the skills they want to take with them to a new job.  

For the research, Culbertson looked at the activity of people searching for tech positions, then examined the searches that got them to the search results page. Then he whittled down the list of terms and classified them as skills.

“We wanted to see which of these tech skills are becoming more important to the job search, based on job search activity,” Culbertson said. “We thought this could serve as a barometer for how important these skills are becoming in general in the tech industry.”

Indeed doesn't give out exact numbers of searches on terms or the jobs people subsequently click on, Culbertson said. But with the site bringing in 200 million visitors a month, which breaks down to millions of searches each day, and with technology becoming a more important part of the labor market, the number of searches for tech positions is "a rather high volume," he said.

The new language of tech?

One unexpected item in the list of tech job search terms was Mandarin, as in the primary language spoken in China, Culbertson said. There was a 49% increase in job-hunter interest in the use of the term.

“This isn’t necessarily a tech skill, but you could classify a language as a skill,” Culbertson said. “And I think it speaks to the fact that China is the second-biggest economy in the world.”

And it represents the impact China and its citizens, who are studying at U.S. universities in high numbers, are having on the U.S. technology industry, he said. Popular job postings people clicked on after searching on Mandarin as a keyword were product developers, language analysts and customer support specialists, the report read.

But employer interest in Mandarin as a search term was down 39%. It’s too early to determine a reason for the dip, Culbertson said, but it’s worth keeping an eye on in the future.

“My assumption is that this would be employers who are looking for people with Mandarin skills because of the amount of business that they do with China,” he said. “But it’s tough to say what would be behind the decline from this year to last year.”

Botched ERP implementation project leads to National Grid lawsuit

National Grid, an electric and gas utility company, has filed a lawsuit against IT services provider Wipro Ltd., alleging it delivered an ERP implementation project “that was of virtually no value to National Grid.” It said the contractor was paid $140 million for its work.

This lawsuit, filed Dec. 1 in the U.S. District Court in New York, described a series of problems with an SAP deployment. 

For instance, National Grid alleged the “new SAP system miscalculated time, pay rates and reimbursements, so that employees were paid too little, too much or nothing at all.” 

With respect to the supply chain functions, the ERP implementation project “devastated” the utility’s procurement, inventory and vendor payment processes. Two months after going live, “National Grid’s backlog of unpaid supplier invoices exceeded 15,000, and its inventory record keeping was in shambles.”

Wipro, a global IT services provider based in India, with about $8.5 billion in revenue and nearly 170,000 employees, quickly refuted the lawsuit’s allegations in a securities filing.

“National Grid has been a valued customer of Wipro in the U.S. and U.K. for several years,” the firm said in its filing. “Wipro strongly believes that the allegations misstate facts and the claims are baseless. Wipro will vigorously contest the allegation in court.”

Wipro said the ERP implementation project began in 2009 and had multiple vendors. The provider said it joined the project in 2010, and “the post go-live process was completed in 2014.”

“During the course of this ERP implementation project, National Grid gave Wipro many positive evaluations. Wipro also received an award from National Grid U.S. with respect to this project in 2014,” the firm said in its statement. 

It is not unusual to see a large ERP project end up in court. Earlier this year, MillerCoors filed a lawsuit against HCL Technologies, an India-based IT services firm, over problems relating to a $100 million ERP implementation.

MillerCoors, in court papers, accused HCL of failing to provide leadership and to adequately staff the project. In its counterclaim, HCL said MillerCoors’ leadership team “did not understand the operations of their own business.”

National Grid is a multinational firm that provides utility services in the U.K. and in Massachusetts, New York and Rhode Island. The ERP deployment project began with the goal of upgrading back-office systems that run financials, HR, supply chain and procurement.

National Grid alleged that Wipro designed an “overly complex” SAP project.

“Rather than taking advantage of certain design and configuration options available within the out-of-the-box SAP software to minimize system complexity and reduce risk, Wipro’s inexperienced consultants engaged in excessive customization of the base SAP system,” according to the lawsuit.

The lawsuit claimed that, by September 2013, the continuing efforts to stabilize the new SAP system were costing approximately $30 million per month and had totaled more than $300 million.

National Grid did not respond by press time to a request for comment about the current usefulness of its SAP system.

AWS graph database Neptune sets sail at re:Invent 2017

Cloud computing leader Amazon Web Services’ re:Invent conference this week in Las Vegas saw a deluge of cloud and database announcements. Among those on the data side was Neptune, the company’s formal entry into the growing field of graph databases.

While this AWS graph database may have less immediate impact than Redshift, the influential cloud data warehouse the company rolled out at re:Invent five years ago, it does fill a gap, matching the graph offerings that competitors such as IBM and Microsoft have added to their cloud data portfolios as they play catch-up with Amazon in the cloud.

AWS CEO Andy Jassy told the re:Invent audience that the Neptune graph database is intended to uncover connections in data points in a way that eludes traditional relational databases. With graphs, data is stored in sets of interconnected nodes, unlike relational databases, which store data in rows and columns.

Graph databases have found increasing use in online recommendation engines, as well as tasks including uncovering fraud and managing social media connections. Facebook’s Friends and Search graphs may be among the most vivid examples of use of the technology.

Jassy said graph databases, along with NoSQL key-value and document data stores, are part of a trend toward multimodel databases that support a variety of data processing methods, particularly in the cloud.


He said Amazon Neptune, which for now is available only as a limited preview, supports graphs based on property and semantic models — these being the two main schools of graph database construction. AWS will offer Neptune as a managed cloud service, with automatic backup to S3 over three cloud availability zones.

“People have used relational databases for everything,” he said. But such single-minded reliance on relational databases is breaking down, he contended.

This AWS graph database isn’t the company’s first foray into the technology: AWS already offers the ability to store graphs from the open source Titan graph database and its JanusGraph fork in DynamoDB tables via a back-end storage plug-in. DynamoDB is an Amazon NoSQL database for which the company claims more than 100,000 users.

Graph adept and less Graph adept

The graph data technology that has emerged in recent years comes primarily from smaller players such as Cambridge Semantics, DataStax, Franz and Neo Technologies Inc. By and large, these companies have welcomed the AWS graph database into their market, as its arrival could signify validation of their technology niche.

People have used relational databases for everything.
Andy Jassy, CEO, AWS

Established relational leaders have come to include some graph support within their flagship SQL databases, and some even have rolled out stand-alone NoSQL graph databases.

AWS’ target with Neptune is the relational leaders’ flagships, which may struggle when processing ever bigger amounts of graph data, according to Doug Henschen, an analyst at Constellation Research.


“Oracle, Microsoft SQL Server and IBM DB2 have all added features for graph analysis, but SQL and extended SQL functions are not as adept as graph databases and graph query languages at exploring billions of relationships,” he said.

The AWS graph database correctly identifies an opportunity for replacing graph analysis use cases currently running on less-graph-adept commercial relational databases, Henschen said.

To Neptune, and beyond

Neptune was just one of a fleet of updates Amazon added to its fast-moving cloud operation. At re:Invent, Jassy described a serverless version of the Amazon Aurora database, which is now in controlled preview. It can be quickly spun up and down, and customers can pay by the second for database capacity when the database is in use, he said.

Meanwhile, Amazon’s DynamoDB is adding global table replication that ensures dependable low latency for data access across many cloud regions. Interest in such capabilities has grown along with the expansion of e-commerce across the globe.

Global replication for cloud databases was among the traits Microsoft heralded in its recent debut of Cosmos DB and that Oracle touted in the fanfare for its upcoming Oracle 18 cloud database services.

AWS SageMaker brings machine learning to developers

LAS VEGAS — Amazon Web Services released a tool this week to empower developers to build smarter, artificial intelligence-driven applications like the AI experts.

Among the deluge of technologies introduced here at AWS re:Invent 2017, the company’s annual customer and partner event, is a tool called SageMaker. Its function is to help developers add machine learning services to applications.

Machine learning is an artificial intelligence technology that enables applications to learn without being explicitly programmed, and become smarter based on the frequency and volume of new data they ingest and analyze. Few developers are experts in machine learning, however.

SageMaker is geared to that audience. It's a fully managed service for developers and data scientists who wish to build, train and manage their own machine learning models. Developers can choose among ten of the most common deep learning algorithms, specify their data source, and the tool installs and configures the underlying drivers and frameworks. It natively integrates with machine learning frameworks such as TensorFlow and Apache MXNet and will support other frameworks as well.

Alternatively, developers can specify their own algorithm and framework.

The National Football League said it will use SageMaker to extend its next-generation stats initiative to add visualizations, stats and experiences for fans, as well as provide up-to-date information about players on the field, said Michelle McKenna-Doyle, the NFL’s senior vice president and CIO, here this week.

To supplement SageMaker, AWS created DeepLens, a wireless, deep-learning-enabled, programmable video camera for developers to hone their skills with machine learning. One example of DeepLens cited by AWS included recognizing the numbers on a license plate to trigger a home automation system and open a garage door.

AWS’ goal is to democratize access to machine learning technology for developers anywhere, so that individual developers could have access to the same technology as large enterprises, said Swami Sivasubramanian, vice president of machine learning at AWS.

SageMaker is one example of this, said Mark Nunnikhoven, vice president of cloud research at Dallas-based Trend Micro.

“I’ve worked with those technology stacks quite a lot over the last decade and there’s so much complexity …, but now any user doesn’t have to care about it,” he said. “They can do really advanced machine learning very, very easily.”

AWS ups the ante for AI

The general pattern in the market for AI application development has been twofold, said Rob Koplowitz, an analyst at Forrester Research in Cambridge, Mass. There are AI frameworks for data scientists that are extremely flexible but require special skills, and higher-level APIs that are accessible to programmers — and in some cases even non-programmers.

“Amazon wants to provide a middle ground with more flexibility,” Koplowitz said. “It’s an interesting approach and we’re looking forward to getting real work feedback from developers.”

AWS has to play catch-up here with other cloud platform companies that wish to bring machine learning to mainstream programmers. IBM provides developers access to its Watson AI services, and Microsoft has its Cognitive Services and Azure Machine Learning Workbench tools. Reducing the complexity of building machine learning models is among the more difficult areas for businesses, so this is a step in the right direction for AWS, said Judith Hurwitz, founder and CEO at Hurwitz & Associates in Needham, Mass.

Computational intelligence in general, and AI and deep learning in particular, is a hot market with a small community of experts among the biggest tech companies from Facebook to IBM.

“They all have a lot of the same core competencies, but they’re distributing them in different ways,” said Trend Micro’s Nunnikhoven.

Google tends to be more technical, while AWS now wants to make AI more accessible. Microsoft targets specific business analytics uses for AI, IBM wants to show more real-world use cases in areas such as healthcare and financial services, and Apple is looking at AI for privacy and devices. But they’re all contributing back to the same projects, such as Apache Mahout and Spark MLlib, Google’s TensorFlow, Microsoft’s Cognitive Toolkit, and others.

SageMaker should help alleviate developers’ fears that data scientists will make them into second-class citizens, but AWS may have aimed too low with SageMaker, said Holger Mueller, principal analyst at Constellation Research in San Francisco. He said he believes it’s more of a kit to empower business users to create machine learning applications.

Other AWS AI-based services

Other AI-enabled AWS services unveiled this week include Amazon Comprehend, a managed natural language processing service for documents or other textual data that integrates with other AWS services to provide analytics, and Amazon Rekognition Video, which can track people and recognize faces and objects in videos stored in Amazon S3.

There are two services now in preview — Amazon Transcribe, which lets developers turn audio files into punctuated text, and Amazon Translate, which uses neural machine translation techniques to translate text from one language to another. Translate currently supports English and six other languages — Arabic, French, German, Portuguese, Simplified Chinese and Spanish — with more languages to come in 2018.

Third-party E911 services expand call management tools

Organizations are turning to third-party E911 services to gain management capabilities they can’t get natively from their IP telephony provider, according to a report from Nemertes Research.

IP telephony providers may offer basic 911 management capabilities, such as tracking phone locations, but organizations may have needs that go beyond phone tracking. The report, sponsored by telecom provider West Corporation, lists the main reasons why organizations would use third-party E911 services.

Some organizations may deploy third-party E911 management for call routing to ensure an individual 911 call is routed to the correct public safety answering point (PSAP). Routing to the correct PSAP is difficult for organizations with remote and mobile workers. But third-party E911 services can offer real-time location tracking of all endpoints and use that information to route to the proper PSAP, according to the report.

Many larger organizations have multivendor environments that may include multiple IP telephony vendors. Third-party E911 services offer a single method of managing location information across endpoints, regardless of the underlying telephony platform.

The report also found third-party E911 management can reduce costs for organizations by automating the initial setup and maintenance of 911 databases in the organization. Third-party E911 services may also support centralized call routing, which could eliminate the need for local PSTN connections at remote sites and reduce the operating and hardware expenses at those sites.

Genesys unveils Amazon integration

Contact center vendor Genesys, based in Daly City, Calif., revealed an Amazon Web Services partnership that integrates AI and Genesys’ PureCloud customer engagement platform.

Genesys has integrated PureCloud with Amazon Lex, a service that lets developers build natural language, conversational bots, or chatbots. The integration allows businesses to build and maintain conversational interactive voice response (IVR) flows that route calls more efficiently.

Amazon Lex helps IVR flows better understand natural language by enabling IVR flows to recognize what callers are saying and their intent, which makes it more likely for the call to be directed to the appropriate resource the first time without error.

The chatbot integration also allows organizations to consolidate multiple interactions into a single flow that can be applied over different self-service channels. This reduces the number of call flows that organizations need to maintain and can simplify contact center administration.

The chatbot integration will be available to Genesys customers in 2018.

Conference calls face user, security challenges

A survey of 1,000 professionals found that businesses in the U.S. and U.K. are losing $34 billion due to delays and distractions during conference calls, a significant increase from $16 billion in a 2015 survey.

The survey found employees waste an average of 15 minutes per conference call getting it started and dealing with distractions. More than half of respondents said distractions have a moderate-to-major negative effect on productivity, enthusiasm to participate and the ability to concentrate.

The survey was conducted by remote meetings provider LoopUp and surveyed 1,000 professionals in the U.S. and U.K. who regularly participate in conference calls at organizations ranging from 50 to more than 1,000 employees.

The survey also found certain security challenges with conference calls. Nearly 70% of professionals said it’s normal to discuss confidential information over a call, while more than half of respondents said it’s normal to not know who is on a call.

Users are also not fully comfortable with video conferencing, according to the survey. Half of respondents said video conferencing is useful for day-to-day calls, but 61% still prefer to use the phone to dial in to conference calls.

Announcing Azure Location Based Services public preview

Today we announced the Public Preview availability of Azure Location Based Services (LBS). LBS is a portfolio of geospatial service APIs natively integrated into Azure that enable developers, enterprises and ISVs to create location-aware apps and IoT, mobility, logistics and asset tracking solutions. The portfolio currently comprises services for Map Rendering, Routing, Search, Time Zones and Traffic. In partnership with TomTom, and in support of our enterprise customers, Microsoft has added native location capabilities to the Azure public cloud.

Azure LBS offers a robust set of geospatial services atop a global geographic data set. The portfolio consists of five primary REST services and a JavaScript map control. Each service has a unique set of capabilities atop the base map data, and all are built in unison and in accordance with Azure standards, making it easy for the services to interoperate. Additionally, Azure LBS is fully hosted and integrated into the Azure cloud, meaning the services comply with all Azure fundamentals for privacy, usability, global readiness, accessibility and localization. Users can manage all Azure LBS account information from within the Azure portal and are billed like any other Azure service.

Azure LBS uses key-based authentication. To get a key, go to the Azure portal and create an Azure LBS account. Creating an Azure LBS account automatically generates two Azure LBS keys, either of which will authenticate requests to the various Azure LBS services. Once you have your account and your keys, you're ready to start accessing Azure Location Based Services. The API model is simple to use; simply parameterize your URL request to get rich responses from the service:

Sample Address Search Request: atlas.microsoft.com/search/address/json?api-version=1&query=1 Microsoft Way, Redmond, WA
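The same request can be issued from any HTTP client. The snippet below is a minimal sketch using PowerShell's Invoke-RestMethod; it assumes the key is passed as a subscription-key query parameter and that the response contains a results array, so verify both against the Azure LBS documentation.

# Call the Azure LBS address search API (the subscription-key parameter and the results property are assumptions)
$Key   = "[AZURE_LBS_KEY]"
$Query = [uri]::EscapeDataString("1 Microsoft Way, Redmond, WA")
$Url   = "https://atlas.microsoft.com/search/address/json?api-version=1&subscription-key=$Key&query=$Query"

$Response = Invoke-RestMethod -Uri $Url -Method Get
$Response.results | Select-Object -First 1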

Azure LBS enters public preview with five distinct services: Render (for maps), Route (for directions), Search, Time Zones and Traffic, plus a JavaScript map control. Each of these services is described in more detail below.

Azure Map Control

The Azure Map Control is a JavaScript web control with built-in capabilities for fetching Azure LBS vector map tiles, drawing data atop them and interacting with the map canvas. It lets developers layer their own data atop Azure LBS maps in both vector and raster layers. If enterprise customers have coordinates for points, lines and polygons, or geo-annotated maps of a manufacturing plant, a shopping mall or a theme park, they can overlay these rasterized maps as a new layer atop the Azure Map Control. The map control has listeners for clicking the map canvas and getting coordinates from the pixels, allowing customers to send those coordinates to the services to search for businesses around that point, find the nearest address or cross street, generate a route to or from that point, or even connect to their own database to find geospatially referenced information important to their business near that point.

Azure Location Based Services Map Control

The Azure Map Control makes it simple for developers to jumpstart their development. By adding a few lines of code to any HTML document, you get a fully functional map.



  

<!DOCTYPE html>
<html>
<head>
    <title>Hello Azure LBS</title>
    <!-- Minimal reconstructed sample. The SDK URLs and the subscription-key option reflect the
         preview-era Azure LBS map control; verify them against the current Azure LBS documentation. -->
    <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/css/atlas.min.css?api-version=1" type="text/css" />
    <script src="https://atlas.microsoft.com/sdk/js/atlas.min.js?api-version=1"></script>
</head>
<body>
    <div id="map" style="width:100%;height:400px;"></div>
    <script>
        // Create a map in the div above, authenticated with your Azure LBS key
        var map = new atlas.Map("map", {
            "subscription-key": "[AZURE_LBS_KEY]"
        });
    </script>
</body>
</html>
In the above code sample, be sure to replace [AZURE_LBS_KEY] with your actual Azure LBS Key created with your Azure LBS Account in the Azure portal.

Render Service

The Azure LBS Render Service is used for fetching maps. The Render Service is the basis for maps in Azure LBS and powers the visualizations in the Azure Map Control. Users can request vector-based map tiles to render data and apply styling on the client. The Render Service also provides raster maps if you want to embed a map image into a web page or application. Azure LBS maps have high-fidelity geographic information for over 200 regions around the world and are available in 35 languages and two versions of neutral ground truth.

Azure Location Based Services Render Service

The Azure LBS cartography was designed from the ground up with the enterprise customer in mind. There is less information at lower levels of delineation (zoomed out) and higher-fidelity information as you zoom in. The design is meant to encourage enterprise customers to render their data atop Azure LBS maps without extraneous detail bleeding through and disrupting the value of the customer's data.

Routing Service

The Azure LBS Routing Service is used for getting directions, and not just point A to point B directions. The Routing Service has a slew of map data available to the routing engine, allowing it to modify the calculated directions for a variety of scenarios. First, the Routing Service provides customers the standard routing capabilities they would expect, with a step-by-step itinerary. The route can be calculated as the fastest or shortest, or to avoid highly congested roads or traffic incidents. Traffic-based routing comes in two flavors: "historic," which is great for future route-planning scenarios when users would like a general idea of what traffic tends to look like on a given route; and "live," which is ideal for active routing scenarios when a user is leaving now and wants to know where traffic exists and the best ways to avoid it.

Azure LBS Routing will also allow for commercial vehicle routing, providing alternate routes made just for trucks. Commercial vehicle routing supports parameters such as vehicle height, weight, number of axles and hazardous material contents, all used to choose the best, safest and recommended roads for transporting a haul. The Routing Service provides a variety of travel modes, including walking, biking, motorcycling, taxi and van routing.

Azure Location Based Services Route Service

Customers can also specify up to 50 waypoints along their route if they have pre-determined stops to make. If customers are looking for the best order in which to stop along their route, they can have Azure LBS determine the best order in which to route to multiple stops by passing up to 20 waypoints into the Routing Service where an itinerary will be generated for them.

Using the Azure LBS Route Service, customers can also specify arrival times when they need to be at a specific location by a certain time. Using its massive amount of traffic data, including almost a decade of probe data captured per road geometry at high-frequency intervals, Azure LBS can tell customers the best time of departure for a given day of the week and time. Additionally, Azure LBS can use current traffic conditions to notify customers of a road change that may impact their route and provide updated times and/or alternate routes.

Azure LBS can also take into consideration the engine type being used. By default, Azure LBS assumes a combustion engine; however, if an electric engine is in use, Azure LBS will accept input parameters for power settings and generate the most energy-efficient route.

The Routing Service also allows multiple, alternate routes to be generated in a single query, which saves on over-the-wire transfer. Customers can also specify that they would like to avoid specific route types, such as toll roads, freeways, ferries or carpool roads.

Sample Commercial Vehicle Route Request: atlas.microsoft.com/route/directions/json?api-version=1&query=52.50931,13.42936:52.50274,13.43872&travelMode=truck
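The truck-routing request follows the same pattern. The sketch below again assumes the subscription-key parameter and a TomTom-style response with a routes array and summary fields, so treat the property names as illustrative.

# Request a truck route between the two coordinates and read the summary of the first returned route
$Key = "[AZURE_LBS_KEY]"
$Url = "https://atlas.microsoft.com/route/directions/json?api-version=1&subscription-key=$Key&query=52.50931,13.42936:52.50274,13.43872&travelMode=truck"

$Route = Invoke-RestMethod -Uri $Url -Method Get
# Property names below are assumptions based on the TomTom-style route response schema
$Route.routes[0].summary | Select-Object lengthInMeters, travelTimeInSeconds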

Search Service

The Azure LBS Search Service provides the ability for customers to find real-world objects and their respective locations. The Search Service provides three major functions:

  1. Geocoding: Finding addresses, places and landmarks
  2. POI Search: Finding businesses based on a location
  3. Reverse Geocoding: Finding addresses or cross streets based on a location

Azure Location Based Services Search Service

With the Search Service, customers can find addresses and places from around the world. Azure LBS supports address-level geocoding in 38 regions, cascading to house-number, street-level and city-level geocoding for other regions of the world. Customers can pass addresses into the service in a structured address form, or they can use an unstructured form when they want to let their customers search for addresses, places or businesses in a single query. Users can restrict their searches by region or bounding box and can supply a specific coordinate to influence the search results and improve quality. Reversing the query, customers can provide a coordinate, say from a GPS receiver, and get the nearest address or cross street returned from the service.

The Azure LBS Search Service also allows customers to query for business listings. The Search Service contains hundreds of categories and hundreds of sub-categories for finding businesses or points of interest around a specific point or within a bounding area. Customers can query for businesses based on brand name or general category and filter those results based on location, bounding box or region.

Sample POI Search Request (Key Required): atlas.microsoft.com/search/poi/category/json?api-version=1&query=electric%20vehicle%20station&countrySet=FRA

Time Zone Service

The Azure LBS Time Zone Service is a first of its kind, providing the ability to query time zones and times for locations around the world. Customers can submit a location to Azure LBS and receive the respective time zone, the current time in that time zone and the offset from Coordinated Universal Time (UTC). The Time Zone Service provides access to historical and future time zone information, including changes for daylight saving time. Additionally, customers can query for a list of all the time zones and the current version of the data, allowing customers to optimize their queries and downloads. For IoT customers, the Azure LBS Time Zone Service allows for POSIX output, so users can download information to devices that only infrequently access the internet. Additionally, for Microsoft Windows users, Azure LBS can transform Windows time zone IDs to IANA time zone IDs.

Sample Time Zone Request (Key Required): atlas.microsoft.com/timezone/byCoordinates/json?api-version=1&query=32.533333333333331,-117.01666666666667

Traffic Service

The Azure LBS Traffic Service provides our customers with the ability to overlay and query traffic flow and incident information. In partnership with TomTom, Azure LBS has access to a best-in-class traffic product with coverage in 55 regions around the world. The Traffic Service can natively overlay traffic information atop the Azure Map Control for a quick and easy view of traffic issues. Additionally, customers have access to traffic incident information: real-time issues happening on the road, collected through probe information on the roads. The incident information provides additional detail such as the type of incident and its exact location. The Traffic Service will also provide customers with details of incidents and flow, such as the distance and time from one's current position to the "back of the line" and, once a user is in the congestion, the distance and time until they're out of it.

Azure Location Based Services Traffic Service

Sample Traffic Flow Segment Request: atlas.azure-api.net/traffic/flow/segment/json?api-version=1&unit=MPH&style=absolute&zoom=10&query=52.41072,4.84239

Azure Location Based Services are available now in public preview via the Azure portal. Get your account created today.

Cloud-hosted apps catching on to meet user demand

With a variety of available services, now’s a good time for IT administrators to consider whether cloud-hosted apps are a good option.

Offerings such as Citrix XenApp Essentials and Amazon Web Services (AWS) AppStream allow IT to stream desktop applications from the cloud to users’ endpoints. Workflow and automation provider IndependenceIT also has a new offering, AppServices, based on Google Cloud Platform. Organizations adopt these types of services to get benefits around centralized management, scalability and more.

More organizations are considering cloud-hosted apps, because IT needs to become a service provider to meet the growing application demands of both external customers and internal users, said Agatha Poon, research director at 451 Research.

“You get requirements from different teams, and all want to have quicker ways to get applications, quicker ways to deploy services,” Poon said. “So, then, you need some sort of mechanism … to support that.”

What application hosting services offer

Application streaming services are an alternative to on-premises application virtualization, in which organizations host applications in their own data centers.

XenApp Essentials and AppStream place an application in the cloud and let IT admins assign a group of users to it. But just delivering applications through the cloud is not enough; the app hosting service should also provide a way to manage the app publishing lifecycle. IT is still left with connecting data assets to the app and setting controls for where users are allowed to move the data.

Some app streaming services require organizations to use another set of tools for those management tasks. For example, in the case of AWS, IT must manually configure storage using the Amazon Simple Storage Service and connect it back to AppStream if they want additional storage.  

Swizznet, a hosting provider for accounting applications, adopted IndependenceIT's AppServices in September 2016 to deliver apps internally and to customers. The company moved away from XenApp Essentials because IndependenceIT provided more native management capabilities, said Mike Callan, CEO of Swizznet, based in Seattle.

We wanted the ability to automatically scale and spin additional servers.
Mike Callan, CEO, Swizznet

“We wanted the ability to automatically scale and spin additional servers, where we could essentially have that capability automated instead of paying engineers to do that,” Callan said.

Citrix’s business problems over the past few years were also a factor in making the switch, Callan said.

“Citrix is, unfortunately, just a company more or less in disarray, so they haven’t been able to keep up with the value proposition that they once had,” he added.

Application streaming services can also help organizations deliver apps that they don’t have the resources to host on-premises. Cornell University has used Amazon AppStream 2.0 since early 2017 and took advantage of the new GPU-powered features that aim to reduce the cost of delivering graphics-intensive apps.

These features have opened up more kinds of software that Cornell can deliver to students, said Marty Sullivan, a DevOps cloud engineer at the university in Ithaca, N.Y. Software such as ANSYS, Dassault Systemes Solidworks, and Autodesk AutoCAD and Inventor help students and faculty run simulations and design mechanical parts, but they only perform well when a GPU is available.

“[Departments] will be able to deliver these specialized pieces of software without having to redevelop them for another platform,” Sullivan said.

Google Cloud Platform
The different services within Google Cloud Platform

Cloud market pushes app hosting forward

When it comes to cloud infrastructure services, Google ranked third behind AWS and Microsoft in Gartner’s 2017 Magic Quadrant. But Google Cloud Platform made the deployment of AppServices easy, Callan said. He was able to go through the auto-deployment quick-start guide and set it up himself in just a couple days.

The increasing reliance on cloud services and the rise of workers using multiple devices to get their jobs done are driving the app streaming trend. Providing company-approved cloud-hosted apps for such employees makes deployment and management easier. IT admins don’t have to physically load any apps on the machines, nor do the employees with the machines need to be present for IT to keep tabs on the usage and security of those apps.

Quorum OnQ solves Amvac Chemical’s recovery problem

Using a mix of data protection software, hardware and cloud services from different vendors, Amvac Chemical Corp. found itself in a cycle of frustration. Backups failed at night, then had to be rerun during the day, and that brought the network to a crawl.

The Los Angeles-based company found its answer with Quorum’s one-stop backup and disaster recovery appliances. Quorum OnQ’s disaster recovery as a service (DRaaS) combines appliances that replicate across sites with cloud services.

The hardware appliances are configured in a hub-and-spoke model with an offsite data center colocation site. The appliances perform full replication to the cloud that backs up data after hours.

“It might be overkill, but it works for us,” said Rainier Laxamana, Amvac’s director of information technology.

Quorum OnQ may be overkill, but Amvac's previous system underwhelmed. Previously, Amvac's strategy consisted of disk backup to early cloud services to tape. But the core problem remained: failed backups. The culprit was the Veritas Backup Exec application, whose failures the Veritas support team, while still part of Symantec, could not explain. A big part of the Backup Exec problem was application support.

“The challenge was that we had different versions of an operating system,” Laxamana said. “We had legacy versions of Windows servers so they said [the backup application] didn’t work well with other versions.

“We were repeating backups throughout the day and people were complaining [that the network] was slow. We repeated backups because they failed at night. That slowed down the network during the day.”

We kept tapes at Iron Mountain, but it became very expensive so we brought it on premises.
Rainier Laxamana, director of information technology, Amvac

Quorum OnQ provides local and remote instant recovery for servers, applications and data. The Quorum DRaaS setup combines backup, deduplication, replication, one-click recovery, automated disaster recovery testing and archiving. Quorum claims OnQ is “military-grade” because it was developed for U.S. Naval combat systems and introduced into the commercial market in 2010.

Amvac develops crop protection chemicals for agricultural and commercial purposes. The company has a worldwide workforce of more than 400 employees in eight locations, including a recently opened site in the Netherlands. Quorum OnQ protects six sites, moving data to the main data center. Backups are done during the day on local appliances. After hours, the data is replicated to a DR site and then to another DR site hosted by Quorum.

“After the data is replicated to the DR site, the data is replicated again to our secondary DR site, which is our biggest site,” Laxamana said. “Then the data is replicated to the cloud. So the first DR location is our co-located data center and the secondary DR our largest location. The third is the cloud because we use Quorum’s DRaaS.”

Amvac’s previous data protection configuration included managing eight physical tape libraries.

“It was not fun managing it,” Laxamana said. “And when we had legal discovery, we had to go through 10 years of data. We kept tapes at Iron Mountain, but it became very expensive so we brought it on premises.”

Laxamana said he looked for a better data protection system for two years before finding Quorum. Amvac looked at Commvault but found it too expensive and not user-friendly enough. Laxamana and his team also looked at Unitrends. At the time, Veeam Software only supported virtual machines, and Amvac needed to protect physical servers. Laxamana said Unitrends was the closest that he found to Quorum OnQ.

“The biggest (plus) with Quorum was that the interface was much more user-friendly,” he said. “It’s more integrated. With Unitrends, you need a third party to integrate the Microsoft Exchange.”

Salesforce Lightning Bolt strikes Appirio as vertical plan

Appirio, a cloud services provider, rolled out prebuilt offerings on Salesforce Lightning at Dreamforce 2017, which wrapped up this week in San Francisco.

The Wipro-owned company's prebuilt wares, referred to as Salesforce Lightning Bolt solutions in the SaaS vendor's terminology, target the retail and healthcare vertical markets and employee engagement as a horizontal market.

Salesforce Lightning, the vendor’s platform-wide upgrade, debuted two years ago, and customer migration to the environment was a key theme at Dreamforce 2017. The Salesforce Lightning Bolt approach, which the company unveiled in 2016, lets Salesforce partners create new communities, portals or websites that integrate with Salesforce customer relationship management. Salesforce intends for the Lightning Bolts to be reusable, industry-specific offerings.

The ability to develop custom, repeatable Salesforce Lightning Bolt solutions appeals to channel partners.

“That is why [Salesforce Lightning Bolt] resonates with us and other systems integrators,” said Yoni Barkan, director of solutions and innovation at Appirio, based in Indianapolis. “These are solutions that we feel show our industry expertise in a particular area and allow us to leverage that.”

These are solutions that we feel show our industry expertise in a particular area and allow us to leverage that.
Yoni Barkan, director of solutions and innovation at Appirio

Appirio’s Bolt lineup includes a retail and franchise offering that aims to facilitate collaboration among franchisors and franchisees, home offices and retail partners. Barkan said the Lightning Bolt lets participants view their own branding — a franchisee’s or home office’s branding, for example — while participating in the Salesforce-based community.

Also in the retail and franchise space, Appirio’s promotion management Lightning Bolt provides a collaboration platform for managing new promotions and brand updates. Barkan said the promotion use case stems from work the company has undertaken with a global franchise company.

A medical device ordering and sales Lightning Bolt, meanwhile, seeks to bolster communication and collaboration among manufacturers, distributors and suppliers. And an employee community and social intranet Salesforce Lightning Bolt targets employee engagement. For that Bolt, Appirio is partnering with vendors such as Stantive Technologies Group, which provides the OrchestraCMS content management system.

Green House Data, Ingram in acquisition mode

Green House Data, a cloud hosting and managed services company based in Cheyenne, Wyo., has acquired Ajubeo, an infrastructure-as-a-service (IaaS) provider based in Boulder, Colo.

The purchase provides Green House Data a presence in the Rocky Mountain region, adding to its geographic expansion. The company in April 2017 purchased IaaS provider Cirracore, which has operations in Atlanta and Dallas.

“We’ve largely based acquisition and expansion strategies around a combination of customer demand, geodiversity and, of course, market opportunity,” said Shawn Mills, president and CEO at Green House Data. “For example, we acquired into Atlanta earlier this year because 75% of Fortune 1000 companies have some kind of presence in this market, so it is advantageous to both our clients and to organic growth.”

From the cloud services point of view, Green House Data’s Ajubeo purchase was also driven by the latter company’s private cloud practice, Mills noted.

Cloud services companies are acquiring other firms to shore up skills in an increasingly multi-cloud environment. Mills said multi-cloud customer engagements are “more and more the norm than an anomaly.”

Ingram Micro Inc., meanwhile, acquired The Phoenix Group, a distributor of point-of-sale (POS) technology for the U.S. and Canadian electronic payments markets. According to Ingram Micro, The Phoenix Group’s management and associates will operate as a division of Ingram Micro.

Other news

  • Big Switch Networks, a data center networking vendor, launched a channel program the company said provides partner incentives, training, certification and enablement programs, along with professional services opportunities. The company's channel partner program is based on two tiers: The Big Switch Authorized Partner tier is open to partners who are authorized networking resellers of Dell EMC or Hewlett Packard Enterprise. Channel partners can also qualify for that tier by signing a Big Switch VAR agreement for selling Edgecore and Big Switch offerings. The Big Switch Premier Partner tier is open to partners that meet the Authorized Partner requirements and also fulfill additional Premier Partner tier requirements, including maintaining a defined number of certified sales and engineering employees.
  • Datical, a provider of database release automation offerings, unveiled a partner program with the objective of connecting DevOps vendors and systems integrators. Program features include an online marketplace, access to training and joint marketing opportunities with Datical.
  • Pivot3, a hyper-converged infrastructure vendor, said the company witnessed a 178% increase in new deal registrations from channel partners from the first quarter to the third quarter of 2017. In addition, the company said 77 partners have joined the company’s channel roster in Q3.
  • Kryon Systems, a robotic process automation solutions provider, is partnering with MFX, an IT services provider for property and casualty insurance carriers, reinsurers and agents.
  • Ingenico Group, a POS and e-payment company, has selected Masergy, a hybrid networking and managed security provider, for its managed SD-WAN Pro offering.
  • World Wide Technology, an IT solution provider, opened a new global headquarters in Maryland Heights, Mo., a suburb of St. Louis. The facility is 208,000 square feet, and it includes a 300-seat auditorium equipped with an LED screen that's 51 feet by 12 feet.
  • BCS ProSoft, a business software and technology consulting firm, has appointed Sally Craig as its vice president of sales and marketing. Craig was previously a channel executive at ERP vendor Epicor.