Tag Archives: Services

The sky’s the limit with 5G wireless communication

We all know of institutions that are too big to fail. What about communication services in which failure is simply not an option?

According to many analysts and industry experts, the answer lies in 5G wireless communication.

Imagine smart roads that can alert municipalities when it’s time to plow snow or repair potholes. Or consider an electric grid knowing about a catastrophic power failure before it happens and fixing the problem in real time. Better yet, think about a person in need of life-saving surgery miles from the closest surgeon, who nevertheless receives the procedure through the use of telemedicine, where the surgeon uses computing power and robotics to control the operation.

In each case, a failure in communication might be disastrous. But as next-generation 5G wireless communication gets closer, faster broadband wireless speeds and increased capacity promise to deliver information more reliably than today’s 4G networks can.

Therein lies the greatest excitement about 5G; entirely new business applications will be possible, according to Craig Mathias, principal analyst with Farpoint Group, a wireless and mobile advisory firm in Ashland, Mass. While a lot of new technology is being developed, “the business model impact is going to be more interesting than the technology,” he said.

Delivering 5G, however, is no small undertaking. A host of network trials, research and development of 5G devices, standards and specifications development, and government policies for 5G infrastructure buildouts stand between the promise and the reality of 5G technology. This year, carriers like Verizon and AT&T are expanding 5G tests in cities across the U.S. and globally. Verizon is also showcasing its 5G development during the 2018 Winter Olympics. In addition, organizations are working in partnership with telecommunications companies to develop 5G devices and the applications to be used on them.

That said, 5G wireless communication is coming, and enterprises are paying attention. The ones thinking ahead are putting plans in place so they’re prepared to take advantage of 5G technology when it’s commercially available. In this issue of Network Evolution, we explore what companies are doing to be 5G-ready.

It won’t happen overnight, much less in 2018. But this is the year for enterprises to sit up, take notice and start planning if they haven’t yet.

Also in this issue, network managers share their reasons for choosing smaller software-defined networking vendors instead of major providers like Cisco or VMware. In addition, one unified communications expert offers his tips for consolidating UC features and the pros and cons of deploying UC across the enterprise. Finally, in this month’s Subnet, learn about Durham County, N.C.’s, plan to create a more seamless experience through network automation.

SAP offers extra help on HR cloud migrations

SAP recently launched a program that offers services and tools to help with an HR cloud migration. The intent is to help HR managers make a business case and to ease some of the initial integration steps.

 SAP has seen rapid growth of its SuccessFactors cloud human capital management platform. But the firm has some 14,000 users of its older on-premises HCM suite, mostly in Europe, who have not fully migrated. Some are in a hybrid model and have been using parts of SuccessFactors.

Customers may feel “a lot of trepidation” over the initial HR cloud migration steps, said Stephen Spears, chief revenue officer at SAP. He said SAP is trying to prove with its new Upgrade2Success program “that it’s not difficult to go from their existing HR, on-premises environment to the cloud.”

The problems that stand in the way of an HR cloud migration may be complicated, especially in Europe.

HR investment remains strong

The time may be right for SAP to accelerate its cloud adoption efforts. HR spending remains strong, said analysts, and users are shifting work to HR cloud platforms.


IDC said HCM applications are forecast to generate just over $15 billion in revenues globally this year, up 8% over 2017. This does not include payroll, just HCM applications, which address core HR functions such as personnel records, benefits administration and workforce management.

The estimated 2018 growth rate is a bit below prior year-over-year growth, which was 9% to 10%, “but still quite strong versus other back office application areas,” said Lisa Rowan, an IDC analyst. Growth is being driven in part by strong interest in replacing older on-premises core HR systems with SaaS-based systems, she said.

Cloud adoption for HR is strong in U.S.

HR sits “right down the middle” of the 14 technologies that Computer Economics, a research and consulting firm, tracks in terms of organizational spending priorities, said David Wagner, vice president of research at the firm. It surveyed 220 companies ranging from $50 million to multibillion-dollar firms.

“Investment is higher in everything this year,” Wagner said, but IT operational budgets are not going up very fast and the reason is the cloud transition. Organizations are converting legacy systems to cloud systems and investing the savings back into the IT budget. “They’re converting to the cloud as fast as is reasonable in organizations right now,” he said.

“If I were a cloud HR systems provider, I would be very excited for the future, at least in North America,” Wagner said.

Cloud adoption different story in Europe

But Europe, where SAP has about 80% of its on-premises users, may be a different story.

Wagner, speaking generally and not specific to SAP, said the problem with cloud adoption in Europe is that there are much more stringent compliance rules around data in the cloud. There’s a lot of concern about data crossing borders and where it’s stored, and how it’s stored and encrypted. “Cloud adoption in general in Europe is behind North America because of those rules,” he said.

SAP’s new cloud adoption program brings together some services and updated tools that help customers make a business case, demonstrate the ROI and help with data integration. It takes on some of the work that a systems integrator might do.

Charles King, an analyst at Pund-IT, said SAP is aiming to reduce the risk and uncertainties involved in a sizable project. 

“That’s a wise move since cost, risk and uncertainty is the unholy trinity of bugaboos that plague organizations contemplating such substantial changes,” King said.

Partners plan for AWS Solution Provider Program launch

Partners are reacting positively to the announcement by Amazon Web Services that it is refreshing its channel reseller program in 2018 to include more incentives, AWS competencies and a new tiered structure. The program has also been rebranded as the AWS Solution Provider Program.


The move by the cloud giant, revealed at its 2017 re:Invent conference, “appears to be a positive overall change to a very simplistic reseller model that didn’t distinguish between capabilities or competencies while providing additional tiered incentives and support,” said Keith Archer, COO at Reliam, a Los Angeles-based managed cloud services provider. “We believe this change will have a net-positive impact overall in our ability to provide more value to our customers, while being able to capture greater opportunity versus risk.”

Rackspace, a managed cloud provider headquartered in Windcrest, Texas, is often involved in contract negotiations between its customers and AWS, said Sara Doyle, the company’s strategic alliance manager. Enterprise agreements and enterprise discount programs “can be complex to navigate through,” she noted. “With the promise of more leeway in these negotiations, it can mean faster time to onboarding for our customers and AWS.”


Differentiating through AWS competencies

Reliam wants to grow into a Premier-level partner role and expand its AWS relationships across services, AWS competencies and verticals, Archer said. “We are also looking to actively partner directly with AWS regionally and nationally to engage and support customers looking for greater depth of service and expertise in support of their cloud endeavors.”

Franz Karlsberger, global head of strategic technology alliances and ecosystems at Dynatrace, an application performance management firm based in Waltham, Mass., said the company will look to strengthen its partnership with AWS this year, including by taking advantage of the new programs.


“AWS Marketplace is an especially interesting channel for Dynatrace,” he said. “It gives our customers [and] prospects a seamless, user-friendly experience deploying application performance monitoring on AWS” by “reducing some of the steps usually involved in the procurement, deployment and billing of SaaS [software as a service] solutions.”

New AWS competencies around IoT, containers, machine learning and blockchain will be a key differentiator for Dynatrace, which is constantly investing in those areas, Karlsberger added.

Rackspace will also focus on obtaining additional AWS competencies in networking and machine learning and expanding its enterprise customer portfolio in the coming year, Doyle said. “By attaining AWS competencies, we are able to easily show our value as the AWS experts for our customers.”

Who will benefit from AWS Solution Provider Program?

The partners that stand to benefit the most from the AWS Solution Provider Program are those that are serious about working with AWS and hold key competencies or services like managed service partner or DevOps partner, Archer said. “Those partners that can successfully attain [new] competencies will benefit the most, and be able to take advantage of all the opportunities the program provides.”

Doyle thinks AWS reseller and SaaS provider partners will see the greatest benefits. The resellers will benefit from the leniency on contract terms, she said, and SaaS providers with the extension of AWS PrivateLink.

The new and improved AWS Solution Provider Program is a step in the right direction, Doyle said. “Any updates or revamping of the AWS partner programs [are] crucial for AWS customers, since the marketplace for tools and services on AWS aid in providing top-level support for the AWS community. By continuing to better enable their partners, AWS is also driving success for their customers.”

AT&T 5G headed for 12 U.S. markets this year

AT&T plans to introduce fifth-generation, or 5G, mobile services in a dozen markets by the end of the year, as it aims to become the first U.S. carrier to offer the high-speed wireless network.

The rollout of the AT&T 5G services was sped up by the recent completion of new standards, the company said. In December, international wireless standards body 3GPP finished the new radio specifications that define radio access to the network.

The completed standards provide the specs device and chipset manufacturers need to build 5G products capable of handling data speeds of up to 10 Gbps — 10 to 20 times faster than the current 4G networks. In a statement, AT&T said it’s “confident this latest standards milestone will allow us to bring 5G to market faster.”

Verizon plan differs from AT&T 5G strategy

AT&T rivals Verizon, T-Mobile and Sprint also plan to offer 5G mobile services. However, the companies, including AT&T, haven’t described in detail the services they would provide.

While AT&T focuses on mobile, Verizon has aimed its initial 5G work at residential broadband services, which the company plans to launch in five markets this year. The higher-frequency range of 5G makes it possible for service providers to deliver high-speed internet to homes wirelessly.

Fifth-generation networks are expected to support tens of millions of new broadband connections at 50 Mbps or more. The higher speeds on fixed and mobile 5G services can power virtual reality applications, driverless cars and 4K streaming video.

While preparing AT&T 5G services for consumers, the company plans to test the technology with businesses across industries. AT&T said the lower latency of 5G would make it useful in edge computing, an architecture designed for the internet of things.

Despite the ongoing 5G rollouts, carriers are not expected to deliver wide-scale services until at least 2020. Manufacturers will need time to build support in devices, and most service providers are content to wait until they reap the full return on 4G investments.

Top cloud providers dominate headlines in 2017

It’s no surprise that top cloud providers, Amazon Web Services, Microsoft Azure and Google, continued to dominate technology headlines in 2017. This year, we saw these cloud giants perform the same one-upmanship around tools, services and prices that we have in the past — but this time, with a sharper focus on technologies such as containers and hybrid cloud.

Before you head into 2018, refresh your memory of SearchCloudComputing’s top news from the past year:

Amazon, Microsoft crave more machine learning in the cloud

All the top cloud providers see the importance of machine learning, and Amazon Web Services and Microsoft Azure put their differences aside in October to jointly create Gluon, an open source deep learning interface based on Apache MXNet. This new library is intended to make AI technologies more accessible to developers and help them more easily create machine learning models. In the future, Gluon will also work with Microsoft Cognitive Toolkit.

Meanwhile, Google Cloud Platform offers TensorFlow, another open source library for machine learning. While TensorFlow is a formidable opponent, some developers shy away from it due to its complexities.

The main problem that all providers face in this space is that the public cloud isn’t always the best environment for complex machine learning workloads due to cost, data gravity or a lack of skill. Some data scientists continue to use the public cloud to test, but then run the workloads on premises.

Google hybrid cloud strategy crystallizes with Nutanix deal

While cloud is popular, many workloads are still kept on premises, whether because of their design or because of compliance issues. Top cloud providers continue to seek partnerships to target the hybrid market and bridge the gap between data centers and the cloud.

The Amazon-VMware deal is the most commonly cited example of this. But in June 2017, Google partnered with Nutanix to fuel its own hybrid efforts. Next year, customers will be able to manage and deploy workloads between the Google public cloud and their own hyper-converged infrastructure from a single interface. The partnership will also extend Google cloud services, such as BigQuery, to Nutanix customers, and enable customers to use Nutanix boxes as edge devices.

Kubernetes on Azure hints at hybrid cloud endgame

One of containers’ main advantages is enhanced portability between cloud platforms — a feature that’s especially attractive to hybrid cloud users. In February 2017, Microsoft unveiled the general availability of Kubernetes on Azure Container Service (ACS), making it the first public cloud provider to support all the major container orchestration engines: Kubernetes, Mesosphere’s DC/OS and Docker Swarm.

The move was one that could especially benefit hybrid cloud users because both Docker Swarm and Kubernetes enable teams to manage containers that run on multiple platforms from a single location. In October, Azure rolled out a new managed Kubernetes service, and rebranded ACS as AKS. AWS countered in November with Amazon Elastic Container Service for Kubernetes, a managed service.

Azure migration takes hostile approach to lure VMware apps

To compete with VMware Cloud on AWS, Microsoft released a similar service for Azure in November 2017 — without VMware support.

Azure Migrate enables enterprises to analyze their on-premises environment, discover dependencies and more easily migrate VMware workloads into the Azure public cloud. A bare-metal subset of the service, VMware virtualization on Azure, is expected to be available in 2018 and will enable users to run a VMware stack on top of Azure hardware. While the service is based on a partnership with unnamed VMware partners and involves VMware-certified hardware, its development didn’t directly involve VMware itself, and it cuts the vendor out of potential revenues. VMware has since said that it will not recommend or support the product.

Cloud pricing models reignite IaaS provider feud

The price war continued in 2017, but top cloud providers changed their tune: instead of direct cuts, they altered their pricing models. AWS abandoned its per-hour billing in favor of per-second billing to counter per-minute billing from Google and Azure. Google quickly responded with its own shift to a per-second billing model.
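To see why the billing increment matters, consider a hypothetical instance priced at $0.10 per hour that runs for just 90 seconds. The following PowerShell sketch uses an illustrative rate and ignores any per-instance minimum billing increments:

$hourlyRate = 0.10     # hypothetical on-demand price per hour
$runSeconds = 90       # a short-lived workload

# Per-hour billing rounds the run time up to a full hour; per-second billing charges only for time used.
$perHourCost   = [math]::Ceiling($runSeconds / 3600) * $hourlyRate
$perSecondCost = ($runSeconds / 3600) * $hourlyRate

"Per-hour: {0:N4}  Per-second: {1:N4}" -f $perHourCost, $perSecondCost   # Per-hour: 0.1000  Per-second: 0.0025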

Microsoft, for its part, added a Reserved VM Instances option to Azure, which provides discounts to customers that purchase compute capacity in advance for a one- or three-year period. The move was a direct shot at AWS’ Elastic Compute Cloud Reserved Instances, which follow a similar model.

Monitor Active Directory replication via PowerShell

When Active Directory replication breaks down, administrators need to know quickly to prevent issues with the services and applications that Active Directory oversees.

It is important to monitor Active Directory replication to ensure the process remains healthy. Larger organizations that use Active Directory typically have several domain controllers that rely on replication to synchronize networked objects — users, security groups, contacts and other information — in the Active Directory database. Changes in the database can be made at any domain controller, which must then be duplicated to the other domain controllers in an Active Directory forest. If the changes are not synchronized to a particular domain controller — or all domain controllers — in an Active Directory site, users in that location might encounter problems.

For example, if an administrator applies a security policy setting via a Group Policy Object to all workstations, all domain controllers in a domain should pick up the GPO changes. If one domain controller in a particular location fails to receive this update, users in that area will not receive the security configuration.

Why does Active Directory replication break?

Active Directory replication can fail for several reasons. If network ports between the domain controllers are not open or if the connection object is missing from a domain controller, then the synchronization process generally stops working.

Domain controllers also rely on the Domain Name System: if their DNS service (SRV) records are missing, the domain controllers cannot locate each other, which causes replication to fail.
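Two quick PowerShell spot checks can help confirm these prerequisites. This is a minimal sketch; the domain controller and domain names are placeholders:

# Is a required replication port (LDAP, in this example) reachable on a partner domain controller?
Test-NetConnection -ComputerName "DC02.corp.example.com" -Port 389

# Are the domain controller SRV records registered in DNS?
Resolve-DnsName -Name "_ldap._tcp.dc._msdcs.corp.example.com" -Type SRV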

Check Active Directory replication status manually

There are many ways to check the Active Directory replication status manually.

Administrators can run the following string using the command-line repadmin utility to show the replication errors in the Active Directory forest:
repadmin /replsum /bysrc /bydest /errorsonly

Administrators can also use the Get-ADReplicationPartnerMetadata PowerShell cmdlet to check the replication status; the same cmdlet is used in the script later in this article.
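For a quick ad hoc check of a single domain controller, the cmdlet can also be run interactively. A minimal sketch, assuming the Active Directory PowerShell module is installed; the server name is a placeholder:

Import-Module ActiveDirectory
Get-ADReplicationPartnerMetadata -Target "DC01.corp.example.com" |
    Select-Object Server, Partner, Partition, LastReplicationAttempt, LastReplicationSuccess, LastReplicationResult

A nonzero LastReplicationResult for any partner means the last replication attempt from that partner failed.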

Use a script to check replication health

While larger organizations might have an enterprise tool, such as System Center Operations Manager, to monitor Active Directory, a PowerShell script can be a helpful supplement to alert administrators on the replication status. Because so much of a business relies on a properly functioning Active Directory system, it can’t hurt to implement this script and have it run every day via a scheduled task. If the script finds an error, it will send an alert via email.

The system must meet a few requirements before executing the script:

  • The script runs on a computer that can reach all domain controllers.
  • It is recommended to use a computer running Windows Server 2012 R2, or a Windows 10 computer, joined to a domain in the Active Directory forest.
  • The computer must have the Active Directory PowerShell module installed.

How does the script work?

The PowerShell script uses the Get-ADReplicationPartnerMetadata cmdlet, which connects to the primary domain controller (PDC) emulator of the forest root domain and then collects the replication metadata for each domain controller.

The script checks the value of the LastReplicationResult attribute for each domain controller entry. If the value of LastReplicationResult is nonzero for any domain controller, the script considers this a replication failure. If an error is found, the script executes the Send-MailMessage cmdlet to send an email with the report attached as a CSV file. The script stores the replication report in C:\Temp\ReplStatus.CSV.

Before running the script, modify its email settings: the sender and recipient addresses, the SMTP server, the subject line and the message body.

PowerShell script to check replication status

The following PowerShell script helps admins monitor Active Directory for these replication errors and delivers the findings via email. Be sure to modify the email settings in the script.

# Path for the replication report and the name of the Active Directory forest.
$ResultFile = "C:\Temp\ReplStatus.CSV"
$ADForestName = "TechTarget.com"

# Find the PDC emulator of the forest root domain; the script uses it as the enumeration server.
$GetPDCNow = Get-ADForest $ADForestName | Select-Object -ExpandProperty RootDomain | Get-ADDomain | Select-Object -Property PDCEmulator
$GetPDCNowServer = $GetPDCNow.PDCEmulator
$FinalStatus = "Ok"

# Collect replication metadata for every domain controller and export only the entries
# whose LastReplicationResult is nonzero, that is, the entries that report an error.
Get-ADReplicationPartnerMetadata -Target * -Partition * -EnumerationServer $GetPDCNowServer -Filter {(LastReplicationResult -ne "0")} | Select-Object LastReplicationAttempt, LastReplicationResult, LastReplicationSuccess, Partition, Partner, Server | Export-CSV $ResultFile -NoTypeInformation -Append -ErrorAction SilentlyContinue

# If the CSV holds more than the header row, at least one replication error was reported.
$TotNow = Get-Content $ResultFile
$TotCountNow = $TotNow.Count

IF ($TotCountNow -ge 2)
{
    $RCSV = Import-CSV $ResultFile
    ForEach ($AllItems in $RCSV)
    {
        IF ($AllItems.LastReplicationResult -eq "0")
        {
            $FinalStatus = "Ok"
            $TestStatus = "Passed"
            $TestText = "Active Directory replication is working."
        }
        else
        {
            $TestStatus = "Critical"
            $TestText = "Replication errors occurred. Active Directory domain controllers are causing replication errors."
            $FinalStatus = "NOTOK"
            break
        }
    }
}

$TestText

IF ($FinalStatus -eq "NOTOK")
{
## Since some replication errors were reported, start the email procedure here.

### START - Modify email parameters here
$message = @"
Active Directory Replication Status

Active Directory Forest: $ADForestName

Thank you,
PowerShell Script
"@

$SMTPPasswordNow = "PasswordHere"
$ThisUserName = "UserName"
$SecurePassword = ConvertTo-SecureString -String $SMTPPasswordNow -AsPlainText -Force
$ToEmailNow = "EmailAddressHere"
$EmailSubject = "SubjectHere"
$SMTPServerNow = "SMTPServerName"
$SMTPSenderNow = "SMTPSenderName"
$SMTPPortNow = "SMTPPortHere"
### END - Modify email parameters here

$AttachmentFile = $ResultFile

# Build the credential object and send the alert email with the CSV report attached.
$Creds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $ThisUserName, $SecurePassword
Send-MailMessage -Credential $Creds -SmtpServer $SMTPServerNow -From $SMTPSenderNow -Port $SMTPPortNow -To $ToEmailNow -Subject $EmailSubject -Attachments $AttachmentFile -UseSsl -Body $message
}

When the script completes, it generates a file that details the replication errors.

Replication error report: the PowerShell script compiles the Active Directory replication errors in a CSV file and delivers those results via email.

Administrators can run this script automatically through the Task Scheduler. Since the script takes about 10 minutes to run, it might be best to set it to run at a time when it will have the least impact, such as midnight.
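One way to create that scheduled task from PowerShell is shown below. This is a sketch that assumes the monitoring script is saved as C:\Scripts\Check-ADReplication.ps1; both the path and the task name are placeholders:

# Run the replication check every day at midnight under the SYSTEM account.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Check-ADReplication.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At "00:00"
Register-ScheduledTask -TaskName "AD Replication Check" -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest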

Job searches reveal top skills for tech positions

Job hunters who know the ins and outs of Amazon Web Services, Amazon’s cloud computing platform, have pretty bright prospects, according to new research by jobs website Indeed.com. So do those with Microsoft Azure skills.

People who can use the JavaScript library React are golden.

The report, published Wednesday, highlighted 15 IT skills that job seekers are betting will land them high-paying tech positions — and that employers want new hires to come equipped with.

The skills were culled from terms typed into the site’s search bar by job hunters and then matched against the terms employers looked for when combing through Indeed’s resume database.

React, a popular collection of JavaScript resources for building web UIs, was by far the most searched-for term, with a 313% increase in job seeker interest from October 2016 to September 2017 when compared with the same period a year earlier. A lot more employers were using the term to find new employees, too, with a 229% increase.

“React is becoming something that’s more crucial to the job search,” said Indeed economist Daniel Culbertson, who spearheaded the research. The JavaScript library is managed primarily by social media site Facebook, which he said bolsters its popularity. “It’s becoming a part of the job for more and more companies.”

A distant No. 2 was the term Amazon Web Services — the top-selling cloud infrastructure service — showing a 98% increase in interest among job seekers and a 40% jump among employers. No. 3 was Microsoft’s cloud service, Azure, with the number of job hunters using the term to search for tech positions rising 51% and employers 62%.

Search terms equal IT skills

Despite the gap between No. 1 React and the runners-up, AWS and Azure still grew by “leaps and bounds,” the report read, and the high rankings show the importance of cloud computing in the job market today.

“Cloud is gaining because businesses of all stripes are boosting their use of off-site computing and storage — and that’s making experienced cloud developers a must-have for many employers,” the report continued.

Other search terms on the list included Angular (properly AngularJS, a Google-managed open source web application framework); Tableau, data visualization software; Spark, a data analytics engine; and the programming language Python.

Culbertson based the report on search terms because the names of the numerous platforms and programming languages in technology can be easily classified as skills since “they’re very integral to jobs.” When users visit Indeed, they type in search terms that describe the skills they want to take with them to a new job.  

For the research, Culbertson looked at the activity of people searching for tech positions, then examined the searches that got them to the search results page. Then he whittled down the list of terms and classified them as skills.

“We wanted to see which of these tech skills are becoming more important to the job search, based on job search activity,” Culbertson said. “We thought this could serve as a barometer for how important these skills are becoming in general in the tech industry.”

Indeed doesn’t give out exact numbers of searches on terms or jobs people subsequently click on, Culbertson said. But with the site bringing in 200 million visitors a month, which breaks down to millions of searches each day, and with technology becoming a more important part of the labor market, searches for tech positions represent “a rather high volume,” he said.

The new language of tech?

One unexpected item in the list of tech job search terms was Mandarin, as in the primary language spoken in China, Culbertson said. There was a 49% increase in job-hunter interest in the use of the term.

“This isn’t necessarily a tech skill, but you could classify a language as a skill,” Culbertson said. “And I think it speaks to the fact that China is the second-biggest economy in the world.”

And it represents the impact China and its citizens, who are studying at U.S. universities in high numbers, are having on the U.S. technology industry, he said. Popular job postings people clicked on after searching on Mandarin as a keyword were product developers, language analysts and customer support specialists, the report read.

But employer interest in Mandarin as a search term was down 39%. It’s too early to determine a reason for the dip, Culbertson said, but it’s worth keeping an eye on in the future.

“My assumption is that this would be employers who are looking for people with Mandarin skills because of the amount of business that they do with China,” he said. “But it’s tough to say what would be behind the decline from this year to last year.”

Botched ERP implementation project leads to National Grid lawsuit

National Grid, an electric and gas utility company, has filed a lawsuit against IT services provider Wipro Ltd., alleging it delivered an ERP implementation project “that was of virtually no value to National Grid.” The lawsuit said the contractor was paid $140 million for its work.

This lawsuit, filed Dec. 1 in the U.S. District Court in New York, described a series of problems with an SAP deployment. 

For instance, National Grid alleged the “new SAP system miscalculated time, pay rates and reimbursements, so that employees were paid too little, too much or nothing at all.” 

With respect to the supply chain functions, the ERP implementation project “devastated” the utility’s procurement, inventory and vendor payment processes. Two months after going live, “National Grid’s backlog of unpaid supplier invoices exceeded 15,000, and its inventory record keeping was in shambles.”

Wipro, a global IT services provider based in India with about $8.5 billion in revenue and nearly 170,000 employees, quickly disputed the lawsuit’s allegations in a securities filing.

“National Grid has been a valued customer of Wipro in the U.S. and U.K. for several years,” the firm said in its filing. “Wipro strongly believes that the allegations misstate facts and the claims are baseless. Wipro will vigorously contest the allegation in court.”

Wipro said the ERP implementation project began in 2009 and had multiple vendors. The provider said it joined the project in 2010, and “the post go-live process was completed in 2014.”

“During the course of this ERP implementation project, National Grid gave Wipro many positive evaluations. Wipro also received an award from National Grid U.S. with respect to this project in 2014,” the firm said in its statement. 

It is not unusual to see a large ERP project end up in court. Earlier this year, MillerCoors filed a lawsuit against HCL Technologies, an India-based IT services firm, over problems relating to a $100 million ERP implementation.

MillerCoors, in court papers, accused HCL of failing to provide leadership and to adequately staff the project. In its counterclaim, HCL said MillerCoors’ leadership team “did not understand the operations of their own business.”

National Grid is a multinational firm that provides utility services in the U.K. and in Massachusetts, New York and Rhode Island. The ERP deployment project began with the goal of upgrading back-office systems that run financials, HR, supply chain and procurement.

National Grid alleged that Wipro designed an “overly complex” SAP project.

“Rather than taking advantage of certain design and configuration options available within the out-of-the-box SAP software to minimize system complexity and reduce risk, Wipro’s inexperienced consultants engaged in excessive customization of the base SAP system,” according to the lawsuit.

The lawsuit claimed that by September 2013, the continuing efforts to stabilize the new SAP system were costing approximately $30 million per month, totaling over $300 million.

National Grid did not respond by press time to a request for comment about the current usefulness of its SAP system.

AWS graph database Neptune sets sail at re:Invent 2017

Cloud computing leader Amazon Web Services’ re:Invent conference this week in Las Vegas saw a deluge of cloud and database announcements. Among those on the data side was Neptune, the company’s formal entry into the growing field of graph databases.

While this AWS graph database may have less immediate impact than Redshift, the influential cloud data warehouse AWS rolled out at re:Invent five years ago, it does fill a gap: competitors such as IBM and Microsoft have already added graph databases to their cloud data portfolios as they play catch-up with Amazon in the cloud.

AWS CEO Andy Jassy told the re:Invent audience that the Neptune graph database is intended to uncover connections in data points in a way that eludes traditional relational databases. With graphs, data is stored in sets of interconnected nodes, unlike relational databases that store data in rows and columns.

Graph databases have found increasing use in online recommendation engines, as well as tasks including uncovering fraud and managing social media connections. Facebook’s Friends and Search graphs may be among the most vivid examples of use of the technology.

Jassy said graph databases, along with NoSQL key-value and document data stores, are part of a trend toward multimodel databases that support a variety of data processing methods, particularly in the cloud.


He said Amazon Neptune, which for now is available only as a limited preview, supports graphs based on property and semantic models — these being the two main schools of graph database construction. AWS will offer Neptune as a managed cloud service, with automatic backup to S3 over three cloud availability zones.

“People have used relational databases for everything,” he said. But such single-minded reliance on relational databases is breaking down, he contended.

This AWS graph database isn’t the company’s first foray into the technology: AWS already offers the ability to store graphs from the open source Titan graph database and its JanusGraph fork in DynamoDB tables via a back-end storage plug-in. DynamoDB is an Amazon NoSQL database for which the company claims more than 100,000 users.

Graph adept and less graph adept

The graph data technology that has emerged in recent years comes primarily from smaller players such as Cambridge Semantics, DataStax, Franz and Neo Technologies Inc. By and large, these companies have welcomed the AWS graph database into their market, as it could signify validation of their technology niche.


Established relational leaders have come to include some graph support within their flagship SQL databases, and some even have rolled out stand-alone NoSQL graph databases.

AWS’ target with Neptune is the relational leaders’ flagships, which may struggle when processing ever bigger amounts of graph data, according to Doug Henschen, an analyst at Constellation Research.


“Oracle, Microsoft SQL Server and IBM DB2 have all added features for graph analysis, but SQL and extended SQL functions are not as adept as graph databases and graph query languages at exploring billions of relationships,” he said.

With Neptune, AWS has correctly identified an opportunity to take over graph analysis use cases currently running on less-graph-adept commercial relational databases, Henschen said.

To Neptune, and beyond

Neptune was just one of many updates Amazon added to its fast-moving cloud operation. At re:Invent, Jassy described a serverless version of the Amazon Aurora database, which is now in controlled preview. It can be quickly spun up and down, and customers can pay by the second for database capacity when the database is in use, he said.

Meanwhile, Amazon’s DynamoDB is adding global table replication that ensures dependable low latency for data access across many cloud regions. Interest in such capabilities has grown along with the expansion of e-commerce across the globe.

Global replication for cloud databases was among traits heralded by Microsoft in its recent debut of Cosmos DB, as well as Oracle, in its fanfare for its upcoming Oracle 18 cloud database services.

AWS SageMaker brings machine learning to developers

LAS VEGAS — Amazon Web Services released a tool this week to empower developers to build smarter, artificial intelligence-driven applications like the AI experts.

Among the deluge of technologies introduced here at AWS re:Invent 2017, the company’s annual customer and partner event, is a tool called SageMaker. Its function is to help developers add machine learning services to applications.

Machine learning is an artificial intelligence technology that enables applications to learn without being explicitly programmed, and become smarter based on the frequency and volume of new data they ingest and analyze. Few developers are experts in machine learning, however.

SageMaker is geared to that audience. It’s a fully managed service for developers and data scientists who wish to build, train and manage their own machine learning models. Developers can choose among ten of the most common deep learning algorithms, specify their data source, and the tool installs and configures the underlying drivers and frameworks. It natively integrates with machine learning frameworks such as TensorFlow and Apache MXNet and will support other frameworks as well.

Alternatively, developers can specify their own algorithm and framework.

The National Football League said it will use SageMaker to extend its next-generation stats initiative to add visualizations, stats and experiences for fans, as well as provide up-to-date information about players on the field, said Michelle McKenna-Doyle, the NFL’s senior vice president and CIO, here this week.

To supplement SageMaker, AWS created DeepLens, a wireless, deep-learning-enabled, programmable video camera for developers to hone their skills with machine learning. One example of DeepLens cited by AWS included recognizing the numbers on a license plate to trigger a home automation system and open a garage door.

AWS’ goal is to democratize access to machine learning technology for developers anywhere, so that individual developers could have access to the same technology as large enterprises, said Swami Sivasubramanian, vice president of machine learning at AWS.

SageMaker is one example of this, said Mark Nunnikhoven, vice president of cloud research at Dallas-based Trend Micro.

“I’ve worked with those technology stacks quite a lot over the last decade and there’s so much complexity …, but now any user doesn’t have to care about it,” he said. “They can do really advanced machine learning very, very easily.”

AWS ups the ante for AI

The general pattern in the market for AI application development has been twofold, said Rob Koplowitz, an analyst at Forrester Research in Cambridge, Mass. There are AI frameworks for data scientists that are extremely flexible but require special skills, and higher-level APIs that are accessible to programmers — and in some cases even non-programmers.

“Amazon wants to provide a middle ground with more flexibility,” Koplowitz said. “It’s an interesting approach and we’re looking forward to getting real-world feedback from developers.”

AWS has to play catch-up here with other cloud platform companies that wish to bring machine learning to mainstream programmers. IBM provides developers access to its Watson AI services, and Microsoft has its Cognitive Services and Azure Machine Learning Workbench tools. Reducing the complexity of building machine learning models is among the more difficult areas for businesses, so this is a step in the right direction for AWS, said Judith Hurwitz, founder and CEO at Hurwitz & Associates in Needham, Mass.

Computational intelligence in general, and AI and deep learning in particular, is a hot market with a small community of experts among the biggest tech companies from Facebook to IBM.

“They all have a lot of the same core competencies, but they’re distributing them in different ways,” said Trend Micro’s Nunnikhoven.

Google tends to be more technical, while AWS now wants to make AI more accessible. Microsoft targets specific business analytics uses for AI, IBM wants to show more real-world use cases in areas such as healthcare and financial services, and Apple is looking at AI for privacy and devices. But they’re all contributing back to the same projects, such as Apache Mahout and Spark MLlib, Google’s TensorFlow, Microsoft’s Cognitive Toolkit, and others.

SageMaker should help alleviate developers’ fears that data scientists will make them into second-class citizens, but AWS may have aimed too low with SageMaker, said Holger Mueller, principal analyst at Constellation Research in San Francisco. He said he believes it’s more of a kit to empower business users to create machine learning applications.

Other AWS AI-based services

Other AI-enabled AWS services unveiled this week include Amazon Comprehend, a managed natural language processing service for documents or other textual data that integrates with other AWS services to provide analytics, and Amazon Rekognition Video, which can track people and recognize faces and objects in videos stored in Amazon S3.

There are two services now in preview — Amazon Transcribe, which lets developers turn audio files into punctuated text, and Amazon Translate, which uses neural machine translation techniques to translate text from one language to another. Translate currently supports English and six other languages — Arabic, French, German, Portuguese, Simplified Chinese and Spanish — with more languages to come in 2018.