Tag Archives: Enterprise

New Mirai variant attacks Apache Struts vulnerability

New variants of the Mirai and Gafgyt botnets are targeting unpatched enterprise devices, according to new research.

Palo Alto Networks’ Unit 42 found the variants target vulnerabilities in Apache Struts and in SonicWall’s Global Management System (GMS). The Mirai variant exploits the same Apache Struts vulnerability that was behind the 2017 Equifax data breach, while the Gafgyt variant exploits a newly uncovered vulnerability in unsupported, older versions of SonicWall’s GMS.

The Unit 42 research team noted the Mirai variant exploits 16 different vulnerabilities. While that in itself is not unusual, this is the first known instance of Mirai or any of its variants targeting an Apache Struts vulnerability.

The research also found the domain that hosts the Mirai samples had resolved to a different IP address in August, which also hosted Gafgyt samples at that time. Those samples exploited the SonicWall GSM vulnerability, which is tracked as CVE-2018-9866. Unit 42’s research did not say whether the two botnets were the work of a single threat group or actor, but it did say the activity could spell trouble for enterprises.

“The incorporation of exploits targeting Apache Struts and SonicWall by these IoT/Linux botnets could indicate a larger movement from consumer device targets to enterprise targets,” the Palo Alto researchers wrote.

The Apache Struts vulnerability exploited by the new Mirai variant was patched in 2017, before it was used in the Equifax breach. But systems that have not been updated remain susceptible to these types of exploits.

The Mirai botnet first emerged in the fall of 2016 and has since affected hundreds of thousands of IoT and connected devices. The botnet’s malware primarily targeted consumer devices, and it was responsible for massive distributed denial-of-service attacks on the German telecom provider Deutsche Telekom and on the DNS provider Dyn, which took down websites such as Airbnb, Twitter, PayPal, GitHub, Reddit, Netflix and others.

The Unit 42 researchers discovered the Gafgyt and Mirai variants on Aug. 5 and alerted SonicWall to the GMS vulnerability. Palo Alto posted its public disclosure on Sept. 9.

IT experts exchange container security tips and caveats

Real-world container security requires users to dig in to the finer points of container, host, Kubernetes and application configurations.

BOSTON — Blue-chip IT shops have established production container orchestration deployments. Now, the question is how to make them fully secure within large, multi-tenant infrastructures.

For starters, users must change default settings in both Docker and Kubernetes to close potential security loopholes. For example, the Docker daemon's Unix socket, docker.sock, is sometimes mounted into containers; without proper controls on its usage, an attacker can use it to access the host operating system and then back-end databases to exfiltrate data. Similarly, the Kubernetes API's default settings could potentially let containers access host operating systems through a malicious pod.
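The docker.sock risk lends itself to a simple automated check. The sketch below is illustrative only (it is not a tool mentioned in the article): it scans a Kubernetes pod spec, expressed as a plain Python dict, for hostPath volumes that expose the Docker daemon socket.

```python
# Illustrative sketch: flag hostPath volumes in a pod spec that mount the
# Docker daemon socket. Exposing /var/run/docker.sock inside a container
# effectively hands the container control of the host's Docker daemon.

DOCKER_SOCK = "/var/run/docker.sock"

def find_docker_sock_mounts(pod_spec: dict) -> list:
    """Return the names of volumes that expose the Docker daemon socket."""
    risky = []
    for volume in pod_spec.get("volumes", []):
        host_path = volume.get("hostPath", {}).get("path", "")
        if host_path == DOCKER_SOCK:
            risky.append(volume["name"])
    return risky

pod = {
    "volumes": [
        {"name": "app-data", "hostPath": {"path": "/var/lib/app"}},
        {"name": "docker-sock", "hostPath": {"path": "/var/run/docker.sock"}},
    ]
}
print(find_docker_sock_mounts(pod))  # ['docker-sock']
```

A real admission-control check would also inspect volumeMounts and initContainers, but the principle is the same: reject pods that reach for the daemon socket.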

“Containers also have the same problem as any other VM: They can talk to each other via an internal network,” said Jason Patterson, application security architect at NCR Corp., an Atlanta-based maker of financial transaction systems for online banking and retailers, in a presentation at the DevSecCon security conference held here this week. “That means that one misconfiguration can compromise pretty much all the containers in the environment.”

Container security configuration settings are critical

NCR uses Red Hat’s OpenShift, which restricts the Kubernetes API settings out of the box, but OpenShift users must set up security context constraints, Patterson said.

Heroku’s Etienne Stalmans presents on container security at DevSecCon.

In general, it’s best to constrain a user’s permissions and each container’s capabilities as tightly as possible and, ideally, configure container images to whitelist only the calls and actions they’re authorized to perform — but this is still uncommon, he added.

It’s possible to limit what a container root user can do outside the container or the host on which the container runs, said Etienne Stalmans, senior security engineer at Heroku, based in San Francisco, in a separate DevSecCon presentation. To do this, container administrators can adjust settings in seccomp, an application sandboxing mechanism in the Linux kernel, and configure application permissions or capabilities.
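A seccomp profile is just a JSON document listing syscall rules. The fragment below sketches the shape that Docker and Kubernetes accept; note that a real profile needs a far longer allow list, and the five syscalls here are only illustrative.

```python
import json

# Hedged sketch of a seccomp profile: every syscall is denied by default
# (SCMP_ACT_ERRNO), and only an explicit whitelist is allowed. The syscall
# list here is illustrative, not sufficient to run a real workload.

profile = {
    "defaultAction": "SCMP_ACT_ERRNO",
    "architectures": ["SCMP_ARCH_X86_64"],
    "syscalls": [
        {
            "names": ["read", "write", "exit", "exit_group", "futex"],
            "action": "SCMP_ACT_ALLOW",
        }
    ],
}

# Serialize to the JSON file a runtime would load, e.g. via
# `docker run --security-opt seccomp=profile.json ...`
print(json.dumps(profile, indent=2))
```

The deny-by-default structure is the point: anything not named in the allow list fails with an error inside the container, regardless of the user's privileges there.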

“That still makes them a privileged user, but not outside the container,” Stalmans said. “Overall, it’s best to drop all capabilities for all container users, and then add them back in as required.”
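Stalmans' drop-then-add-back advice maps directly to the capabilities field of a Kubernetes container securityContext. The sketch below is minimal; the field names follow the Kubernetes API, and NET_BIND_SERVICE is just an example of a capability a workload might need back.

```python
# Minimal sketch of "drop all capabilities, add back as required" as a
# Kubernetes container securityContext, built as a plain dict. Field names
# follow the Kubernetes API; the chosen capability is only an example.

def restrictive_security_context(needed_capabilities=()):
    return {
        "runAsNonRoot": True,
        "allowPrivilegeEscalation": False,
        "capabilities": {
            "drop": ["ALL"],
            "add": list(needed_capabilities),
        },
    }

# A web server that binds to port 80 might need only NET_BIND_SERVICE.
ctx = restrictive_security_context(["NET_BIND_SERVICE"])
print(ctx["capabilities"])
# {'drop': ['ALL'], 'add': ['NET_BIND_SERVICE']}
```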

Some highly sensitive applications require isolation provided by a hypervisor to remove any possibility that an attacker can gain host access. Vendors such as Intel, Google and Microsoft offer modified hypervisors specifically tuned for container isolation.

DevSecCon presenters also touched on tools that can be used to minimize the attack surface of container and host operating systems.

Beeline, which sells workforce management and vendor management software, uses an Oracle tool called Smith that strips out unneeded OS functions. “That shrank our Docker image sizes from as much as 65 MB down to between 800 KB and 2 MB,” said Jason Looney, enterprise architect at Beeline, based in Jacksonville, Fla.

Container security experts weigh host vs. API vulnerabilities

Overall, it’s best to drop all capabilities for all container users, and then add them back in as required.
Etienne Stalmans, senior security engineer, Heroku

Most of the best-known techniques in container security restrict attackers’ access to hosts and other back-end systems from a compromised container instance. But prevention of unauthorized access to APIs is critical, too, as attackers in recent high-profile attacks on AWS-based systems targeted vulnerable APIs, rather than hosts, said Sam Bisbee, chief security officer of Boston-based IT security software vendor Threat Stack, in a DevSecCon presentation.

Attackers don’t necessarily look for large amounts of data, Bisbee added. “Your security policy must cover the whole infrastructure, not just important data,” he said.

Kubernetes version 1.8 improved API security with a switch from attribute-based access control to role-based access control (RBAC). Most installers and providers of Kubernetes, including cloud container services, now enable RBAC for Kubernetes API access by default. But users should go further with configuration settings that prevent untrusted pods from talking to the Kubernetes API, Stalmans said.
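For illustration, RBAC expresses that tightening as Role and RoleBinding objects. The sketch below builds a namespaced Role, as a plain dict, that grants read-only access to pods and nothing else; the namespace and role name are made up for the example.

```python
# Illustrative RBAC sketch: a namespaced Role granting read-only access to
# pods and nothing else. Field names follow the Kubernetes RBAC API
# (rbac.authorization.k8s.io/v1); namespace and name are hypothetical.

def read_only_pod_role(namespace: str, name: str) -> dict:
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"namespace": namespace, "name": name},
        "rules": [
            {
                "apiGroups": [""],  # "" denotes the core API group
                "resources": ["pods"],
                "verbs": ["get", "list", "watch"],
            }
        ],
    }

role = read_only_pod_role("demo", "pod-reader")
```

A RoleBinding would then attach this Role to a specific service account; any API verb or resource not listed in the rules is denied.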

“There is some discussion [in the Kubernetes community] to make that the default setting,” he said. It’s also possible to do this programmatically from container networking utilities, such as Calico, Istio and Weave. But “that means we’re back to firewall rules” until a new default is decided, he said.

Dig Deeper on Managing Virtual Containers

Third-party Kubernetes tools hone performance for database containerization

Enterprise IT shops that want to modernize legacy applications or embark on a database containerization project are the target users for Kubernetes tools released this month.

Robin Systems, which previously offered its own container orchestration utility, embraced Kubernetes with hyper-converged infrastructure software it claimed can optimize quality of service for database containerization, AI and machine learning applications, as well as big data applications, such as Spark and Hadoop. Turbonomic also furthered its Kubernetes optimization features with support for multi-cloud container management. Turbonomic’s Self-Managing Kubernetes tool could also help with database containerization, because it takes performance and cost optimization into account.

The products join other third-party Kubernetes management tools that must add value over features offered natively in pure upstream Kubernetes implementations — a difficult task as the container orchestration engine matures. However, the focus on database containerization and its performance challenges aligns with the enterprise market’s momentum, analysts said.

“For many vendors, performance is an afterthought, and the monitoring and management side is an afterthought,” said Milind Govekar, analyst with Gartner. “History keeps repeating itself along those lines, but now we can make mistakes faster because of automation and worse mistakes with containers, because they’re easier to spin up.”

While early adopters such as T-Mobile already use DC/OS for database containerization, most enterprises aren’t yet ready for stateful applications in containers.

“Stateless apps are still the low-hanging fruit,” said Jay Lyman, analyst with 451 Research. “It will be a slow transition for organizations pushing [containerization] into data-rich applications.”

Robin Systems claims superior database containerization approach

Now, we can make mistakes faster because of automation and worse mistakes with containers, because they’re easier to spin up.
Milind Govekar, analyst, Gartner

Robin Systems faces more of an uphill battle against both pure Kubernetes and established third-party tools with its focus on big data apps and database containerization. Mesosphere has already targeted this niche for years with DC/OS. And enterprises can also look to Red Hat OpenShift for database containerization, given the platform’s maturity and users’ familiarity with Red Hat’s products.

Robin Systems’ founders claimed better quality-of-service guarantees for individual containers and workloads than OpenShift and DC/OS, because the company designed and controls all levels of its software-defined infrastructure package, which includes network and storage management, in addition to container orchestration. It guarantees minimum and maximum application performance throughout the infrastructure — including CPU, memory, and network and storage IOPS allocations — within one policy, whereas competitors integrate with tools such as the open source Container Network Interface plug-in, OpenShift Container Storage and Portworx persistent storage.
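Robin's policy format is not public. For comparison, plain Kubernetes expresses per-container CPU and memory guarantees through resource requests and limits (a pod whose requests equal its limits gets the "Guaranteed" QoS class), while network and storage IOPS guarantees are exactly what the integrations mentioned above are used for. A minimal sketch:

```python
# Sketch for comparison only (not Robin Systems' policy format): plain
# Kubernetes per-container resource guarantees. When requests equal limits,
# Kubernetes assigns the pod the "Guaranteed" QoS class; Kubernetes itself
# has no native notion of network or storage IOPS guarantees.

def guaranteed_resources(cpu: str, memory: str) -> dict:
    return {
        "requests": {"cpu": cpu, "memory": memory},
        "limits": {"cpu": cpu, "memory": memory},
    }

res = guaranteed_resources("500m", "256Mi")
assert res["requests"] == res["limits"]  # the condition for Guaranteed QoS
```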

Control over the design of the storage layer enables Robin’s platform to take cluster-wide snapshots of Kubernetes deployments and their associated applications, which isn’t possible natively on OpenShift or DC/OS yet.

Plenty of vendors claim a superior approach with their Kubernetes tools, and many major enterprise IT shops have already chosen a strategic Kubernetes vendor for production application development and deployment.

However, companies such as John Hancock also must modernize a massive portfolio of legacy applications, including IBM DB2 and Microsoft SQL Server databases in versions so old they’re no longer supported by the original manufacturers.

John Hancock, a Boston-based insurer and a division of financial services group Manulife Financial Corp., is conducting a proof of concept with Robin Systems, as it mulls database containerization for IBM DB2. The company wants to move the mainframe-based system into the Microsoft Azure cloud for development and testing, which it sees as simpler and more affordable than its current approach, in which a separate department manages internally developed production apps with Pivotal’s PaaS offering.

“It’s not going to fly if [a database containerization platform] will take four people eight months to get working,” said Kurt Straube, systems director for the insurance company. Robin’s hyper-converged infrastructure approach, which bundles networking and storage with container and compute management, might be a shortcut to database containerization for legacy apps where ease of use and low cost are paramount.

Turbonomic’s Kubernetes tool manages MongoDB performance.

Turbonomic targets container placement, not workloads

While Robin Systems’ platform approach puts it squarely into competition with PaaS products such as Red Hat OpenShift, Pivotal Container Service (PKS) and Mesosphere’s DC/OS, Turbonomic’s product spans Kubernetes platforms such as Amazon Elastic Container Service for Kubernetes, Azure Kubernetes Service, Google Kubernetes Engine and PKS.

Turbonomic’s Kubernetes tool optimizes container placement across different services, but doesn’t manage individual container workloads. It fills a potential need in the market, and it keeps Turbonomic out of direct competition with established Kubernetes tools.

“There are many PaaS vendors that can manage Kubernetes clusters, but what they can’t do is tell a user how to optimize the number of containers on a cluster so that the right resources are available to each container,” Gartner’s Govekar said.

A number of tools manage VM placement between multiple cloud infrastructure services, such as the Open Service Broker API. However, “many of these tools don’t do a great job from a performance optimization standpoint specifically,” Govekar said.

No-code and low-code tools seek ways to stand out in a crowd

As market demand for enterprise application developers continues to surge, no-code and low-code vendors seek ways to stand out from one another in an effort to lure professional and citizen developers.

For instance, last week’s Spark release of Skuid’s eponymous drag-and-drop application creation system adds on-premises, private data integration, a new Design System Studio, and new core components for tasks such as creation of buttons, forms, charts and tables.

A suite of prebuilt application templates aims to help users build and customize bespoke applications for tasks such as sales force automation, recruitment and applicant tracking, HR management and online learning.

And a native mobile capability enables developers to take the apps they’ve built with Skuid and deploy them on mobile devices with native functionality for iOS and Android.

Ray Wang, Constellation Research

“We’re seeing a lot of folks who started in other low-code/no-code platforms move toward Skuid because of the flexibility and the ability to use it in more than one type of platform,” said Ray Wang, an analyst at Constellation Research in San Francisco.

Skuid CTO Mike Duensing

“People want to be able to get to templates, reuse templates and modify templates to enable them to move very quickly.”

Skuid — named for an acronym, Scalable Kit for User Interface Design — was originally an education software provider, but users’ requests to customize the software for individual workflows led to a drag-and-drop interface to configure applications. That became the Skuid platform and the company pivoted to no-code, said Mike Duensing, CTO of Skuid in Chattanooga, Tenn.

Quick Base adds Kanban reports

Quick Base Inc., in Cambridge, Mass., recently added support for Kanban reports to its no-code platform. Kanban is a scheduling system for lean and just-in-time manufacturing. The system also provides a framework for Agile development practices, so software teams can visually track and balance project demands with available capacity and ease system-level bottlenecks.

The Quick Base Kanban reports enable development teams to see where work is in process. It also lets end users interact with their work and update their status, said Mark Field, Quick Base director of products.

Users drag and drop progress cards between columns to indicate how much work has been completed on software delivery tasks to date. This lets them track project tasks through stages or priorities, opportunities through sales stages, application features through development stages, team members and their task assignments, and more, Field said.

Datatrend Technologies, an IT services provider in Minnetonka, Minn., uses Quick Base to build the apps that manage technology rollouts for its customers, and finds the Kanban reports handy.

A lot of low-code/no-code platforms allow you to get on and build an app but then if you want to take it further, you’ll see users wanting to move to something else.
Ray Wang, analyst, Constellation Research

“Quick Base manages that whole process from intake to invoicing, where we interface with our ERP system,” said Darla Nutter, senior solutions architect at Datatrend.

Previously, Datatrend kept work-in-progress data for four stages (plan, execute, complete and invoice) in a table report with no visual representation; with the Kanban reports, users can see what they have to do at any given stage and prioritize work accordingly, she said.

“You can drag and drop tasks to different columns and it automatically updates the stage for you,” she said.
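The behavior Nutter describes can be modeled in a few lines. This toy sketch is not Quick Base code; it simply shows a board whose cards live in stage columns, where moving a card is what updates its stage, using the four stages Datatrend tracks.

```python
# Toy model (not Quick Base code) of what a Kanban report tracks: cards in
# named stage columns, where dragging a card to a new column is the act
# that updates the task's stage.

class KanbanBoard:
    def __init__(self, stages):
        self.columns = {stage: [] for stage in stages}

    def add_card(self, stage, card):
        self.columns[stage].append(card)

    def move_card(self, card, src, dst):
        """Dragging a card between columns updates the task's stage."""
        self.columns[src].remove(card)
        self.columns[dst].append(card)

board = KanbanBoard(["plan", "execute", "complete", "invoice"])
board.add_card("plan", "site survey")
board.move_card("site survey", "plan", "execute")
print(board.columns["execute"])  # ['site survey']
```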

Like the Quick Base no-code platform, the Kanban reports require no coding or programming experience. Datatrend’s typical Quick Base users are project managers and business analysts, Nutter said.

For most companies, however, the issue with no-code and low-code systems is how fast users can learn them and then expand upon them, Constellation Research’s Wang said.

“A lot of low-code/no-code platforms allow you to get on and build an app but then if you want to take it further, you’ll see users wanting to move to something else,” Wang said.

OutSystems sees AI as the future

OutSystems plans to add advanced artificial intelligence features to its products to increase developer productivity, said Mike Hughes, director of product marketing at OutSystems in Boston.

“We think AI can help us by suggesting next steps and anticipating what developers will be doing next as they build applications,” Hughes said.

OutSystems uses AI in its own tool set, as well as links to publicly available AI services to help organizations build AI-based products. To facilitate this, the company launched Project Turing, named after Alan Turing, who is widely considered a father of AI, and opened an AI Center of Excellence in Lisbon, Portugal.

The company also will commit 20% of its R&D budget to AI research and partner with industry leaders and universities for research in AI and machine learning.

M-Files cloud subscription turns hybrid with M-Files Online

To reflect the desire for flexibility, and regulatory shifts in the enterprise content management industry, software vendors are starting to offer users options for storing data on premises or in a cloud infrastructure.

The M-Files cloud strategy is a response to these industry changes. The information management software vendor has released M-Files Online, which enables users to manage content both in the cloud and behind a firewall on premises, under one subscription.

While not the first ECM vendor to offer hybrid infrastructure, the company claims that with the new M-Files cloud system, it is the first ECM software provider to provide both under one software subscription.

“What I’ve seen going on is users are trying to do two things at once,” said John Mancini, chief evangelist for the Association of Intelligent Information Management (AIIM). “On one hand, there are a lot of folks that have significant investment in legacy systems. On the other hand, they’re realizing quickly that the old approaches aren’t working anymore and are driving toward modernizing the infrastructure.”

Providing customer flexibility

It’s difficult, time-consuming and expensive to migrate an organization’s entire library of archives or content from on premises to the cloud, yet it’s also the way the industry is moving, as emerging technologies like AI and machine learning typically depend on cloud-scale data and compute. That’s where a hybrid cloud approach can help organizations handle the migration process.

Organizations need to understand that cloud is coming, more data is coming and they need to be more agile.
John Mancini, chief evangelist, Association of Intelligent Information Management

According to a survey by Mancini and AIIM, and sponsored by M-Files, 48% of the 366 professionals surveyed said they are moving toward a hybrid of cloud and on-premises delivery methods for information management over the next year, with 36% saying they are moving toward cloud and 12% staying on premises.

“We still see customers that are less comfortable moving it all to the cloud, and there are certain use cases where that makes sense,” said Mika Javanainen, vice president of product marketing at M-Files. “This is the best way to provide our customers flexibility and make sure they don’t lag behind. They may still run M-Files on premises, but use the cloud services to add intelligence to their data.”

The M-Files cloud system and its new online offering act as a hub for an organization’s storehouse of information.

“The content resides where it is, but we still provide a unified UI and access to that content and the different repositories,” Javanainen said.

An M-Files Online screenshot shows how the information management company brings together an organization’s content from a variety of repositories.

Moving to the cloud to use AI

While the industry is moving more toward cloud-based ECM, 60% of AIIM survey respondents still want some sort of on-premises storage.

“There are some parts of companies that are quite happy with how they are doing things now, or may understand the benefits of cloud but are resistant to change,” said Greg Milliken, senior vice president of marketing at M-Files. “[M-Files Online] creates an opportunity that allows users that may have an important process they can’t deviate from to access information in the traditional way while allowing other groups or departments to innovate.”

One of the largest cloud drivers is to realize the benefit of emerging business technologies, particularly AI. While AI can conceivably run on premises, such deployments are constrained by the difficulty of storing enough data on premises.

M-Files cloud computing can open up the capabilities of AI for the vendor’s customers. But for organizations to benefit from AI, they need to overcome fears of the cloud, Mancini said.

“Organizations need to understand that cloud is coming, more data is coming and they need to be more agile,” he said. “They have to understand the need to plug in to AI.”

Potential problems with hybrid clouds

Running the parts of a business you want more secure on premises and the rest in the cloud sounds good, but it can be difficult to implement, according to Mancini.

“My experience talking to people is that it’s easier said than done,” Mancini said. “Taking something designed in a complicated world and making it work in a simple, iterative cloud world is not the easiest thing to do. Vendors may say we have a cloud offering and an on-premises offering, but the real thing customers want is something seamless between all permutations.”

Regardless of whether an organization manages content through a cloud or behind a firewall, there are undoubtedly dozens of other software systems — file shares, ERP, CRM — that businesses work with and hope to integrate their information with. The real goal of ECM vendors and those in the information management space, according to Mancini, is to get all those repositories working together.

“What you’re trying to get to is a system that is like a set of interchangeable Lego blocks,” Mancini said. “And what we have now is a mishmash of Legos, Duplos, Tinker Toys and erector sets.”

M-Files claims its data hub approach — bringing all the disparate data under one UI via an intelligent metadata layer that plugs into the other systems — succeeds at this.

“We approach this problem by not having to migrate the data — it can reside where it is and we add value by adding insights to the data with AI,” Javanainen said.

M-Files Online, which was released Aug. 21, is generally available to customers. M-Files declined to provide detailed pricing information.

Google’s OEMConfig could propel Android in business

A new initiative from Google aims to make Android more appealing to the enterprise.

Currently, enterprise mobility management (EMM) providers build different APIs into their platforms for each Android OEM’s unique features, which creates a hassle to fully support all manufacturers. With OEMConfig, the manufacturers themselves will provide the APIs in an application that EMM providers can support. That means IT pros can more easily manage and update various Android devices through their EMM, and incorporate OEM-specific features for their users.

“This looks like an enormous step forward,” said Willem Bagchus, a messaging and collaboration specialist at United Bank based in Parkersburg, W.Va. “Google is more serious about getting a deeper penetration into the business marketplace, and I look forward to it.”

What needs to change

Each Android OEM builds different features into its devices through APIs that augment what Google builds into the OS, such as capabilities that optimize bandwidth for field service workers. Android Enterprise helped expand API standards for Android in business settings, but there are still plenty of OEM-specific APIs.

That means EMM and unified endpoint management (UEM) providers must write, test and maintain different sets of code for different APIs, and repeat that process each time the OEM updates the OS. It also means the EMM provider is forced to make choices about where to dedicate its resources to support OEMs.

“This put a huge burden on the UEM providers,” said Ojas Rege, chief strategy officer at MobileIron. “The APIs wouldn’t necessarily be supported by many of the providers. The model doesn’t scale, and it takes away the manufacturer’s practical ability to differentiate.”

Some IT shops jump through hoops to manage Android in business because of the OS’ many varieties.

United Bank has used Microsoft Intune for the past two years to manage Apple iOS and Android devices. Only tech services employees get Android devices, and they’re Google phones rather than devices from another manufacturer, because Google’s own devices receive OS updates most often, Bagchus said.

“The frequency of OS updates — it’s the Wild West,” he said. “Everybody has their own flavor of Android, which is good on the one hand, but it’s hard to have a standard management approach to it.”

How OEMConfig could help

With Android Enterprise and AppConfig, EMM and UEM providers can send configurations to an application on a device. OEMConfig, which Google announced at its Android Enterprise Summit for Partners in London in May, will extend this capability.

With OEMConfig, an OEM builds its APIs into a configuration app and makes that app available in the Google Play store. EMM providers then support the OEMConfig app in their platform, and customers distribute the app to end users’ devices through the EMM. The app then configures a device to take advantage of the specific features in that OEM’s version of Android.
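OEMConfig rides on Android's existing managed-configuration mechanism, in which the EMM pushes a bundle of key-value restrictions to the OEM's config app. The sketch below only illustrates that shape; the keys and package name are invented for the example, since real keys come from each OEM's published schema.

```python
# Hedged sketch of an OEMConfig-style payload: the EMM delivers a bundle of
# key-value restrictions to the OEM's configuration app, which then applies
# them to the device. Package name and restriction keys are hypothetical;
# real keys are defined by each OEM's schema.

def build_managed_config(package_name: str, restrictions: dict) -> dict:
    return {
        "packageName": package_name,        # the OEM's OEMConfig app
        "managedConfiguration": restrictions,
    }

payload = build_managed_config(
    "com.example.oem.config",               # hypothetical app id
    {"allowUsbDebugging": False, "wifiRoamingMode": "aggressive"},
)
assert payload["managedConfiguration"]["allowUsbDebugging"] is False
```

The key point is that the EMM never needs OEM-specific code: it just renders the schema the config app declares and forwards whatever values IT selects.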

The more value-add a device can bring to an enterprise, the more likely they are to be bought.
Jason Bayton, consultant, CWSI

“It’s going to speed up the time to market on any new functionality,” said Jason Bayton, senior enterprise mobility consultant at CWSI based in the U.K. “We no longer have to wait on the EMM. It’s in [the OEMs’] best interest really because the more value-add a device can bring to an enterprise, the more likely they are to be bought.”

An extra benefit for IT is that the OEMConfig app can provide more consistent updates through Google Play automatically, and push new features to devices as soon as they’re available, Bayton said. IT admins can send new, vendor-specific calls to devices as soon as the OEM updates the app, without waiting for the EMM provider to build custom code, according to a Google spokesperson.

EMM providers will need to adjust their user interfaces to render OEMConfig’s more robust schema and properly display hardware management groupings for IT to configure, the spokesperson said.

The future of Android in business

OEMConfig mainly benefits smaller OEMs that don’t have support from all EMM vendors, experts said. That benefits IT at smaller businesses, which tend to have more mixed device environments than large enterprise organizations, said Eric Klein, director of mobile software at VDC Research.

“This can make EMM make a lot more sense for them because you’re going to be able to support any type of Android device,” he said. “It’s a way for Google to really make themselves a much more easily integrated platform.”

If OEMConfig simplifies EMM support and device updates, that’s a big reason for more highly regulated companies to adopt Android in business, Bagchus said.

“I think it will finally make Android devices more palatable,” he said. “We’re under a lot more scrutiny because of the regulators, which is why we had to steer clear of Android before.”

Still, Google will need OEMs and EMM providers to rally around this initiative to boost Android in business. Google has worked with hardware partner Zebra to develop the OEMConfig framework, and is “actively bringing our OEM and EMM partners together to incorporate OEMConfig into their solutions,” the Google spokesperson said, but declined to say when OEMConfig will be officially available.

EMM vendors likely will get on board in the last quarter of 2018, VDC’s Klein said.

MobileIron’s Rege said the company plans to support OEMConfig when it is available.

“It means that all these new capabilities can be supported by us without having to create custom code,” he said.

2018 MIT Sloan CIO Symposium: A SearchCIO guide


Today’s enterprise can be divided into two groups: the departments that are acquiring advanced digital capabilities and those that are lagging behind. This bifurcation of digital prowess was evident at the 2018 MIT Sloan CIO Symposium, where we asked CIOs and digital experts to expound on the factors driving digitalization at enterprises and the barriers holding them back. Not surprisingly, the departments that are customer-facing, such as marketing, are leading the digital transformation charge.

While the transition to a digitalized enterprise is happening at varied speeds, the need to develop a viable digital business model is universally recognized. Indeed, this year’s event was all about taking action: it is no longer enough just to have a vision for digital transformation. The conference underscored that point, with sessions featuring leading CIOs, IT practitioners, consultants and academics from across the globe dispensing hard-won advice on planning and executing a future-forward digital transformation strategy.

In this SearchCIO conference guide, experience the 2018 MIT Sloan CIO Symposium by delving into our comprehensive coverage. Topics include building an intelligent enterprise, talent recruitment, the expanding CIO role and integration of emerging technologies like AI, machine learning, cloud and more.

To view our complete collection of video interviews filmed at this year’s event, please see our video guide: “MIT CIO 2018 videos: Honing a digital leadership strategy.”

1. Thriving in a digital economy

Digital transformation strategy and advice

Implementing a digital transformation strategy requires a clear set of objectives, IT-business alignment, recruitment of the right talent, self-disruption and building what experts call an “intelligent enterprise,” among other things. In this section, the pros discuss the intricacies of leading the digital transformation charge.

2. Technology transformation

Utilizing emergent tech like AI, machine learning and cloud

Every digital transformation requires a future-forward vision that takes advantage of up-and-coming tools and technologies. In this section, academics and IT executives discuss the enterprise challenges, benefits, questions and wide-ranging potential that AI, machine learning, edge computing, big data and more bring to the enterprise.

3. Evolving CIO role

The CIO’s ever-expanding role in a digital world

Digital transformation not only brings with it new technologies and processes, it also brings new dimensions and responsibilities to the CIO role. In this section, CIOs and IT executives detail the CIO’s place in an increasingly digital, threat-laden and customer-driven world and offer timely advice for staying on top of it all.


Interviews filmed on site

During the 2018 MIT Sloan CIO Symposium, SearchCIO staff had the pleasure of conducting several one-on-one video interviews with consultants and IT executives on the MIT campus in Cambridge, Mass. Below is a sampling of the videos.

A link to our full collection of videos filmed at the 2018 MIT Sloan CIO Symposium can be found at the top of this guide.

Driving digital transformation success with Agile

In this SearchCIO video, Bharti Airtel CIO Mehta, winner of the MIT Sloan CIO Leadership Award, explains why implementing Agile methodologies can help organizations scale their digital transformation projects.

AIOps platforms delve deeper into root cause analysis

The promise of AIOps platforms for enterprise IT pros lies in their potential to provide automated root cause analysis, and early customers have begun to use these tools to speed up problem resolution.

The city of Las Vegas needed an IT monitoring tool to replace a legacy SolarWinds deployment in early 2018 and found FixStream’s Meridian AIOps platform. The city introduced FixStream to its Oracle ERP and service-oriented architecture (SOA) environments as part of its smart city project, an initiative that will see municipal operations optimized with a combination of IoT sensors and software automation. Las Vegas is one of many U.S. cities working with AWS, IBM and other IT vendors on such projects.

As part of its digital transformation, the city is updating its systems more often, with each update taking less time, and FixStream's Meridian gives it an overview of how business process performance corresponds to the underlying IT infrastructure, said Michael Sherwood, CIO for the city of Las Vegas.

“FixStream tells us where problems are and how to solve them, which takes the guesswork, finger-pointing and delays out of incident response,” he said. “It’s like having a new help desk department, but it’s not made up of people.”

The tool first analyzes a problem and offers insights into its cause, then automatically creates a ticket in the city's ServiceNow IT service management system. ServiceNow acquired DxContinuum in 2017 and released its intellectual property as part of a similar help desk automation feature, called Agent Intelligence, in January 2018. But it's the high-level business process view that sets FixStream apart from ServiceNow and other tools, Sherwood said.
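That hand-off into ServiceNow typically comes down to a REST call against ServiceNow's Table API. The sketch below shows the shape of such an integration; the instance name, credentials and field values are placeholders, and a production integration would use a proper HTTP client and stronger authentication:

```python
import base64
import json

def build_incident_request(instance, user, password, short_description, description):
    """Build the URL, headers and JSON body for a POST to ServiceNow's
    Table API that opens a new incident. All concrete values passed in
    are illustrative placeholders; sending the request is left to any
    HTTP client."""
    url = f"https://{instance}.service-now.com/api/now/table/incident"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json",
        "Authorization": f"Basic {token}",
    }
    body = json.dumps({
        "short_description": short_description,
        "description": description,
    })
    return url, headers, body
```

A monitoring tool would call this when a fault is confirmed and POST the result, letting the service desk pick up the ticket with the diagnosis already attached.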

FixStream’s Meridian AIOps platform creates topology views that illustrate the connections between parts of the IT infrastructure and how they underpin applications, along with how those applications underpin business processes. This was a crucial level of detail when a credit card payment system crashed shortly after FixStream was introduced to monitor Oracle ERP and SOA this spring.

“Instead of telling us, ‘You can’t take credit cards through the website right now,’ FixStream told us, ‘This service on this Oracle ERP database is down,'” Sherwood said.

The system automatically correlated an application problem with problems in deeper layers of the IT infrastructure. The speedy diagnosis led to a fix that took the city's IT department a few hours rather than a day or two.
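That kind of correlation can be pictured as a walk down a dependency graph: start at the failing business service and descend until no deeper component is unhealthy. The toy sketch below illustrates the general technique with an invented topology; it is not FixStream's actual algorithm:

```python
def root_cause(service, depends_on, healthy):
    """Return the deepest unhealthy dependency of a failing service.

    depends_on maps each component to the components beneath it;
    healthy maps each component to its current health check result.
    """
    for child in depends_on.get(service, []):
        if not healthy[child]:
            return root_cause(child, depends_on, healthy)
    return service  # nothing deeper is failing, so this is the root cause

# Hypothetical topology: payment site -> ERP app -> Oracle DB -> storage
topology = {
    "payment-website": ["erp-app"],
    "erp-app": ["oracle-db"],
    "oracle-db": ["storage-array"],
}
health = {
    "payment-website": False,
    "erp-app": False,
    "oracle-db": False,
    "storage-array": True,
}
print(root_cause("payment-website", topology, health))  # -> oracle-db
```

With this data, the walk stops at the Oracle database, because its own dependency (the storage array) is healthy, which matches the "this service on this Oracle ERP database is down" style of answer described above.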

AIOps platform connects IT to business performance

Instead of telling us, ‘You can’t take credit cards through the website right now,’ FixStream told us, ‘This service on this Oracle ERP database is down.’
Michael Sherwood, CIO for the city of Las Vegas

Some IT monitoring vendors associate application performance management (APM) data with business outcomes in a way similar to FixStream. AppDynamics, for example, offers Business iQ, which associates application performance with business performance metrics and end-user experience. Dynatrace offers end-user experience monitoring and automated root cause analysis based on AI.

The differences lie in the AIOps platforms’ deployment architectures and infrastructure focus, said Nancy Gohring, an analyst with 451 Research who specializes in IT monitoring tools and wrote a white paper that analyzes FixStream’s approach.

“Dynatrace and AppDynamics use an agent on every host that collects app-level information, including code-level details,” Gohring said. “FixStream uses data collectors that are deployed once per data center, which means they are more similar to network performance monitoring tools that offer insights into network, storage and compute instead of application performance.”

FixStream integrates with both Dynatrace and AppDynamics to join its infrastructure data to the APM data those vendors collect. Its strongest differentiation is in the way it digests all that data into easily readable reports for senior IT leaders, Gohring said.

“It ties business processes and SLAs [service-level agreements] to the performance of both apps and infrastructure,” she said.

OverOps fuses IT monitoring data with code analysis

While FixStream makes connections between low-level infrastructure and overall business performance, another AIOps platform, made by OverOps, connects code changes to machine performance data. So, DevOps teams that deploy custom applications frequently can understand whether an incident is related to a code change or an infrastructure glitch.

OverOps’ eponymous software has been available for more than a year, and larger companies, such as Intuit and Comcast, have recently adopted the software. OverOps identified the root cause of a problem with Comcast’s Xfinity cable systems as related to fluctuations in remote-control batteries, said Tal Weiss, co-founder and CTO of OverOps, based in San Francisco.

OverOps uses an agent that can be deployed on containers, VMs or bare-metal servers, in public clouds or on premises. It monitors the Java Virtual Machine or Common Language Runtime interface for .NET apps. Each time code loads into the CPU via these interfaces, OverOps captures a data signature and compares it with code it’s previously seen to detect changes.
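The change-detection step described above amounts to fingerprinting each unit of code as it loads and comparing the fingerprint against the last one seen. Here is a toy illustration of that idea in Python; OverOps' real agent works at the JVM/CLR level, and nothing below reflects its actual implementation:

```python
import hashlib

class ChangeDetector:
    """Remember a fingerprint per code unit and flag changes on reload."""

    def __init__(self):
        self.signatures = {}

    def on_load(self, unit_name, code_bytes):
        """Return True if this unit's code differs from its last load."""
        sig = hashlib.sha256(code_bytes).hexdigest()
        changed = unit_name in self.signatures and self.signatures[unit_name] != sig
        self.signatures[unit_name] = sig
        return changed

detector = ChangeDetector()
detector.on_load("BillingService", b"v1 bytecode")  # first sight -> False
detector.on_load("BillingService", b"v1 bytecode")  # unchanged -> False
detector.on_load("BillingService", b"v2 bytecode")  # changed -> True
```

Comparing hashes rather than the code itself keeps the per-load cost low, which matters when the check runs on every class load in production.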

OverOps Grafana dashboard
OverOps exports reliability data to Grafana for visual display

From there, the agent produces a stream of log-like files that contain both machine data and code information, such as the number of defects and the developer team responsible for a change. The tool is primarily intended to catch errors before they reach production, but it can be used to trace the root cause of production glitches, as well.

“If an IT ops or DevOps person sees a network failure, with one click, they can see if there were code changes that precipitated it, if there’s an [Atlassian] Jira ticket associated with those changes and which developer to communicate with about the problem,” Weiss said.

In August 2018, OverOps updated its AIOps platform to feed code analysis data into broader IT ops platforms via a RESTful API and support for StatsD. Available integrations include Splunk, ELK, Dynatrace and AppDynamics. In the same update, the OverOps Extensions feature added a serverless, AWS Lambda-based framework, as well as on-premises code options, so users can create custom functions and workflows based on OverOps data.
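StatsD is an easy integration target because it is a trivial line protocol over UDP: each datagram is just `name:value|type`. The sketch below emits a hypothetical defect-count gauge; the metric name and daemon address are invented for illustration:

```python
import socket

def statsd_packet(name, value, metric_type="g"):
    """Format one StatsD datagram: 'metric.name:value|type'."""
    return f"{name}:{value}|{metric_type}".encode()

def send_gauge(name, value, host="127.0.0.1", port=8125):
    """Fire-and-forget a gauge to a StatsD daemon over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(statsd_packet(name, value), (host, port))
    finally:
        sock.close()

# e.g. send_gauge("overops.billing.new_defects", 3)
```

Because the transport is connectionless UDP, the sender never blocks on the monitoring backend, which is why so many tools expose StatsD alongside a REST API.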

“There’s been a platform vs. best-of-breed tool discussion forever, but the market is definitely moving toward platforms — that’s where the money is,” Gohring said.

Enterprise IT struggles with DevOps for mainframe

The mainframe is like an elephant in many large enterprise data centers: It never forgets data, but it’s a large obstacle to DevOps velocity that can’t be ignored.

For a while, though, enterprises tried to leave mainframes — often the back-end nerve center for data-driven businesses, such as financial institutions — out of the DevOps equation. But DevOps for mainframe environments has become an unavoidable problem.

“At companies with core back-end mainframe systems, there are monolithic apps — sometimes 30 to 40 years old — operated with tribal knowledge,” said Ramesh Ganapathy, assistant vice president of DevOps for Mphasis, a consulting firm in New York whose clients include large banks. “Distributed systems, where new developers work in an Agile manner, consume data from the mainframe. And, ultimately, these companies aren’t able to reduce their time to market with new applications.”

Velocity, flexibility and ephemeral apps have become the norm in distributed systems, while mainframe environments remain their polar opposite: stalwart platforms with unmatched reliability, but not designed for rapid change. The obvious answer would be a migration off the mainframe, but it’s not quite so simple.

“It depends on the client appetite for risk, and affordability also matters,” Ganapathy said. “Not all apps can be modernized — at least, not quickly; any legacy mainframe modernization will go on for years.”

Mainframes are not going away. In fact, enterprises plan to increase their spending on mainframe systems. Nearly half of enterprises with mainframes expect to see their usage increase over the next two years — an 18% increase from the previous year, according to a Forrester Research Global Business Technographics Infrastructure Survey in late 2017. Only 14% expected mainframe usage to decrease, compared to 24% in the previous survey.

Whatever their long-term decision about mainframes, large enterprises now compete with nimble, disruptive startups in every industry, and that means they must find an immediate way for mainframes to address DevOps.

Bridging DevOps for mainframe gaps

Credit bureau Experian is one enterprise stuck in limbo with DevOps for mainframe environments. Its IBM z13 mainframes play a crucial role in a process called pinning, which associates credit data with individuals as part of the company's data ingestion operations. Pinning generates an identifier that's more reliable than a Social Security number, and the mainframe handles the compute-intensive workload with high performance and solid reliability: the company hasn't had an outage on any of its six mainframe instances in more than three years.

However, Experian has also embarked on a series of Agile and DevOps initiatives, and the mainframe now impedes developers who have grown accustomed to self-service and infrastructure automation in production distributed systems.

“IBM has recognized what’s happening and is making changes to [its] z/OS and z Systems,” said Barry Libenson, global CIO for Experian, based in Costa Mesa, Calif. IBM’s UrbanCode Deploy CI/CD tool, for example, supports application deployment automation on the mainframe. “But our concern is there aren’t really tools yet that allow developers to provision their own [production infrastructure], or native Chef- or Puppet-like configuration management capabilities for z/OS.”

Chef supports z Systems mainframes through integration with LinuxONE, but Experian’s most senior mainframe expert frowns on Linux in favor of z/OS, Libenson said. Puppet also offers z/OS support, but Libenson said he would prefer to get those features from native z/OS management tools.

IBM's z Systems Development and Test Environment V11 offers some self-service capabilities for application deployment in lower environments, but Experian developers have created their own homegrown tools for production services, such as z Systems logical partitions (LPARs). The homegrown tools also monitor the utilization of LPARs, containers and VMs on the mainframe, and either automatically shut them off once they have been idle for a set amount of time or alert mainframe administrators to shut them off manually.
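A watchdog like the one Experian describes follows a simple pattern: poll utilization, track how long each resource has been idle, then either stop it or page an operator. The generic sketch below captures that loop; the thresholds, resource names and auto-stop flags are all hypothetical, not Experian's tooling:

```python
IDLE_CPU_PCT = 5         # below this, the resource counts as idle
IDLE_LIMIT_SECS = 1800   # idle this long -> act

def check_idle(resources, idle_since, now, shutdown, alert):
    """One polling pass: shut down auto-stoppable idle resources,
    alert an operator for the rest (e.g. production LPARs)."""
    for name, info in resources.items():
        if info["cpu_pct"] >= IDLE_CPU_PCT:
            idle_since.pop(name, None)   # busy again: reset the idle clock
            continue
        started = idle_since.setdefault(name, now)
        if now - started < IDLE_LIMIT_SECS:
            continue                     # idle, but not long enough yet
        if info["auto_stop"]:
            shutdown(name)
        else:
            alert(name)                  # needs a human to act

# Hypothetical inventory: a VM we may stop, an LPAR we may not
resources = {
    "batch-vm-7": {"cpu_pct": 1, "auto_stop": True},
    "lpar-prod-2": {"cpu_pct": 2, "auto_stop": False},
}
```

In practice, `shutdown` and `alert` would call the platform's management API and a paging system; keeping them as injected callbacks keeps the policy loop testable.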

“That’s not the way these systems are designed to behave, and it’s expensive. In commodity hardware, I have lots of options, but if I run out of horsepower on the mainframe, buying additional engines from IBM is my only choice,” Libenson said. “It’s also increasingly difficult for us to find people that understand that hardware.”

Experian is fortunate to employ a mainframe expert who doesn’t fit the stereotype of a parochial back-end admin resistant to change, Libenson said. But he’s not an infinite resource and won’t be around forever.

“I tell him, ‘If you try to retire before I do, I will kill you,'” Libenson said.

Ultimately, Experian plans to migrate away from the mainframe and has ceased product development on mainframe applications, Libenson said. He estimated the mainframe migration process will take three to five years.

DevOps for mainframe methods evolve

For some companies with larger, older mainframes, even a multiyear mainframe migration is expensive.

If you don’t have a good reason to get off the mainframe platform, there are ways to do a lot of DevOps-specific features.
Christopher Gardner, analyst, Forrester Research

“One insurance firm client told me it would cost his company $30 million,” said Christopher Gardner, an analyst at Forrester Research. “If you don’t have a good reason to get off the mainframe platform, there are ways to do a lot of DevOps-specific features.”

Mainframe vendors, such as CA, IBM and Compuware, have tools that push DevOps for mainframe closer to an everyday reality. IBM’s UrbanCode Deploy agents offer application deployment automation and orchestration workflows for DevOps teams that work with mainframes. The company also recently added support for code deployments to z Systems from Git repositories and offers a z/OS connector for Jenkins CI/CD, as well. In addition to Jenkins, CI/CD tools from Electric Cloud and XebiaLabs support mainframe application deployments.

CA offers mainframe AIOps support in its Mainframe Operational Intelligence tool. And in June 2018, it introduced a SaaS tool, Mainframe Resource Intelligence, which scans mainframe environments and offers optimization recommendations. Compuware has tools for faster updates and provisioning on mainframes and hopes to lead customers to mainframe modernization by example; it underwent its own DevOps transformation over the last four years.

Vendors and experts in the field agree the biggest hurdle to DevOps for mainframe environments is cultural — a replay of cultural clashes between developers and IT operations, on steroids.

Participation from mainframe experts in software development strategy is crucial, Ganapathy said. His clients have cross-functional teams that decide how to standardize DevOps practices across infrastructure platforms, from public cloud to back-end mainframe.

“That’s where mainframe knowledge has the greatest value and can play a role at the enterprise level,” Ganapathy said. “It’s important to give mainframe experts a better say than being confined to a specific business unit.”

Mainframes may never operate with the speed and agility of distributed systems, but velocity is only one metric to measure DevOps efficiency, Forrester’s Gardner said.

“Quality and culture are also part of DevOps, as are continuous feedback loops,” he said. “If you’re releasing bugs faster, or you’re overworking your team and experiencing a lot of employee turnover, you’re still not doing your job in DevOps.”

Western Digital launches 15 TB enterprise SAS SSD

Ultrafast, NVMe-based PCI Express solid-state drives may represent the future of enterprise storage technology, but drive vendors still see a future in SAS SSDs.

Western Digital this week said it is shipping samples of its highest-density enterprise SAS SSD to OEMs. The new 2.5-inch Ultrastar DC SS530 can store up to 15.36 TB, doubling the capacity over the prior SS300 model, thanks to denser 64-layer 3D NAND flash that can help lower costs. The SS530 is due to be production-ready at the end of this quarter.

“There’s a lot of excitement around NVMe [nonvolatile memory express]. But SAS is still a very trusted, reliable interface. It’s an interface that lets you mix HDDs and SSDs,” said Eddie Ramirez, senior director of product management for enterprise SSDs at Western Digital, based in San Jose, Calif.

SAS SSD demand remains strong in enterprise storage arrays, and on the server side, growth is coming from hyper-converged infrastructure that combines compute, storage and virtualization resources in the same box, Ramirez said. SAS SSDs might serve as the caching layer in front of SAS HDDs, he added.

“For things like hyper-converged infrastructure systems, it’s still a very effective interface, particularly if you want to do hybrid arrays where you’re using both SSDs and HDDs within the same server,” he said.

The Ultrastar DC SS530 enterprise SAS SSD uses 64-layer 3D NAND flash that stores 3 bits of data per cell, known as triple-level cell (TLC). The prior model, SS300, used 32-layer 3D NAND, with TLC in a lower-endurance model and multi-level cell flash that stores 2 bits of data per cell in the higher-endurance options.

Ramirez said Western Digital can use TLC for all of its SKUs rated at one, three and 10 drive writes per day (DWPD). The one-DWPD model targets read-intensive workloads, and the higher-endurance options are designed for caching and write-intensive workloads.
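A DWPD rating translates directly into total endurance: capacity times DWPD times the number of days in the warranty period gives the total data the drive is rated to absorb. A quick worked sketch, assuming a five-year warranty period (the warranty length is an assumption, not a figure from the announcement):

```python
def rated_writes_tb(capacity_tb, dwpd, warranty_years=5):
    """Total rated writes (in TB) implied by a drive-writes-per-day rating,
    assuming a hypothetical warranty period."""
    return capacity_tb * dwpd * warranty_years * 365

# A 15.36 TB drive at 1 DWPD over five years:
print(rated_writes_tb(15.36, 1))   # about 28,000 TB of rated writes
```

The same arithmetic shows why the 10-DWPD SKUs suit caching and write-intensive workloads: at a given capacity, they are rated for ten times the lifetime writes of the one-DWPD model.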

New SAS SSD boosts performance

Western Digital and Intel jointly developed the Ultrastar DC SS530. It improves random write performance by 60% over the prior model, delivering up to 320,000 IOPS in the 10 DWPD SKU. Random read performance is up by 10% to a maximum of 440,000 IOPS.

The new enterprise SAS SSD supports a data transfer rate of 12 Gbps. Western Digital’s roadmap calls for 24 Gbps enterprise SAS SSDs, but Ramirez declined to disclose the timetable.

Samsung’s 31 TB SAS SSD

The Ultrastar DC SS530 is Western Digital’s first SSD with a higher capacity than any of its HDDs. The company’s highest-capacity enterprise HDD is 14 TB. The SS530 is available at capacities ranging from 400 GB to 15.36 TB. But Samsung has Western Digital beat for capacity with its 30.72 TB SAS drive that uses 64-layer TLC 3D NAND flash, which the vendor calls V-NAND.

Ramirez said Western Digital will consider a 31 TB drive in its next-generation product, but “we don’t quite see the market adoption at that high a capacity at this point.”

Western Digital's new Ultrastar DC SS530 enterprise SAS SSD is dual-ported, in contrast to the single-ported 12 Gbps SAS SSD that rival Toshiba introduced in June. Toshiba's RM5 SAS SSD is designed as a SATA replacement and targets server-based applications, including software-defined storage and hyper-converged infrastructure. Toshiba claimed the single-port SAS SSD is close in price to SATA SSDs, which max out at 6 Gbps and are typically less expensive than SAS SSDs.

The Western Digital Ultrastar DC SS530 SSD can run in single-port mode, but dual-port SSDs provide redundancy and a performance boost, in some cases, Ramirez said.

“Currently, we feel the market in SAS is still very much looking at dual-ported capability,” Ramirez said. “Typically, in a SAS storage array, you’re using an HBA [host bus adapter] to talk to multiple drives, and you don’t want that HBA to be a single point of failure.”

Ramirez declined to disclose pricing for the SS530 other than to say there’s no difference in the cost per gigabyte in comparison to the prior SS300 model.

Enterprise SAS SSD market

Western Digital is third in the enterprise SAS market, behind Samsung and Toshiba in units and exabytes shipped, according to storage research firm Trendfocus.

Don Jeanette, a vice president at Trendfocus, based in Cupertino, Calif., said Toshiba leads the market, with more than 40% of the units shipped, followed by Samsung with 30% and Western Digital with 23%. Samsung leads in exabytes shipped, with 49% of the market, trailed by Toshiba at 35% and Western Digital with 12%.

Enterprise SAS SSD units shipped increased 8% year over year, and exabytes shipped increased 15%, in the first quarter of 2018.

Western Digital had close to half of the market three years ago in units and exabytes, Jeanette said. Samsung came on strong with an aggressive roadmap to pass Western Digital, he added. Western Digital entered the SAS SSD market with the acquisition of Hitachi Global Storage Technologies in 2012.

Jeanette said there’s still a strong market for SAS SSDs, which have more than double the average capacity of PCIe and SATA SSDs.

“There are very few SSD vendors supporting SAS,” Jeanette said. “Everyone’s trying to move to PCIe. SATA’s a legacy protocol out there. But for the ones that did enter SAS a decade ago, they’re going to find that they will have healthy business for a number of years to come.”