
Transforming IT infrastructure and operations to drive digital business

It’s time for organizations to modernize their IT infrastructure and operations not just to support digital business, but to drive it, according to Gregory Murray, research director at Gartner.

But to complete that transformation, organizations need to first understand their desired future state, he added.

“The future state for the vast majority of organizations is going to be a blend of cloud, on prem and off prem,” Murray told the audience at the recent Gartner Catalyst conference. “What’s driving this is the opposing forces of speed and control.”

From 2016 to 2024, the percentage of new workloads that will be deployed through on-premises data centers is going to plummet from about 80% to less than 20%, Gartner predicts. During the same period, cloud adoption will explode — going from less than 10% to as much as 45% — with off-premises, colocation and managed hosting facilities also picking up more workloads.

IT infrastructure needs to provide capabilities across these platforms, and operations must tackle the management challenges that come with it, Murray said.

How to transform IT infrastructure and operations

Once organizations have defined their future state — and Murray urged them to start by developing a public cloud strategy to determine which applications will live in the cloud — they should begin modernizing their infrastructure, he said.

“Programmatic control is the key to enabling automation and automation is, of course, critical to addressing the disparity between the speed that we can deliver and execute in cloud, and improving our speed of execution on prem,” he said. 

Organizations will also need developers with the skills to take advantage of that programmatic control, he said. Another piece of the automation equation when modernizing infrastructure to gain speed is standardization, he added.

The future state for the vast majority of organizations is going to be a blend of cloud, on prem and off prem.
Gregory Murray, research director, Gartner

“We need to standardize around those programmatic building blocks, either by using individual components of software-defined networking, software-defined compute and software-defined storage, or by using a hyper-converged system.”

Hyper-convergence reduces the complexity of establishing programmatic control and helps create a unified API for infrastructure, he said.
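
That unified, programmatic surface can be as simple as a declarative request that the same automation tooling submits to whichever control plane backs the environment. The sketch below is illustrative only: the `build_provision_request` helper and its field names are invented for this example and do not correspond to any vendor's API.

```python
# Illustrative only: one standardized "building block" request that
# automation tooling could render against any software-defined endpoint
# (compute, storage, networking). All names here are hypothetical.
import json

def build_provision_request(workload, vcpus, memory_gb, storage_gb, network):
    """Compose a single declarative spec that the same automation can
    submit to cloud, on-prem or off-prem control planes alike."""
    return {
        "workload": workload,
        "compute": {"vcpus": vcpus, "memory_gb": memory_gb},
        "storage": {"capacity_gb": storage_gb, "tier": "standard"},
        "network": {"segment": network},
    }

spec = build_provision_request("billing-api", vcpus=4, memory_gb=16,
                               storage_gb=200, network="prod-east")
print(json.dumps(spec, indent=2))
```

Because the spec is plain data, the same request can be translated to different back ends, which is the standardization Murray describes.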

Organizations also need to consider how to take their standardization to a higher level, according to Murray. This is where containers come into play. A container is an atomic unit of deployment specific to an application; it abstracts away many of the dependencies and complications involved in moving an application independently of its operating system, he explained.

“And if we can do that, now I have a construct that I can standardize around and deploy into cloud, into on prem, into off prem and give it straight to my developers and give them the ability to move quickly and deploy their applications,” he said.

Hybrid is the new normal

To embrace this hybrid environment, Murray said organizations should establish a fundamental substrate to unify these environments.

“The two pieces that are so fundamental that they precede any sort of hybrid integration is the concept of networks — specifically your WAN and WAN strategy across your providers — and identity,” Murray said. “If I don’t have fundamental identity constructs, governance will be impossible.”

Organizations looking to modernize their network for hybrid capabilities should turn to SD-WAN, Murray said. SD-WAN provides software-defined control that extends outside the data center and enables a programmatic, automated approach to WAN connectivity that helps keep the hybrid environment working together, he explained.

But to get that framework of governance in place across this hybrid environment requires a layered approach, Murray said. “It’s a combination of establishing principles, publishing the policies and using programmatic controls to bring as much cloud governance as we can.”

Murray also hinted that embracing DevOps is the first step in “a series of cultural changes” organizations will need to truly modernize IT infrastructure and operations. Even for those not operating at agile speed, operations still needs to get out of the business of managing tickets and delivering resources and move to a self-service environment in which operations and IT broker the services, he added.

Organizations also need a monitoring framework in place to gain visibility across the environment. Embracing AIOps — which uses big data, analytics and machine learning — can help organizations become more predictive and proactive in their operations, he added.
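
The predictive side of AIOps can be illustrated with a deliberately simple baseline model: flag any metric sample that deviates sharply from a rolling average. Production AIOps platforms apply far richer machine learning; the `detect_anomalies` helper, window size and threshold below are arbitrary choices made for illustration.

```python
# A minimal sketch of the AIOps idea: flag metric samples that deviate
# sharply from a rolling baseline. Window and threshold are arbitrary.
from statistics import mean, stdev

def detect_anomalies(samples, window=5, threshold=3.0):
    """Return indices of samples more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

cpu = [41, 40, 42, 39, 41, 40, 97, 41, 40]   # one obvious spike
print(detect_anomalies(cpu))                  # → [6]
```

Flagging the spike before it saturates the host is what lets operations act proactively rather than reactively.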

Dell EMC HCI and storage cloud plans on display at VMworld

LAS VEGAS — Dell EMC launched cloud-related enhancements to its storage and hyper-converged infrastructure products today at the start of VMworld 2018.

The Dell EMC HCI and storage product launch includes a new VxRail hyper-converged appliance, which uses VMware vSAN software. The vendor also added a cloud version of the Unity midrange unified storage array and cloud enhancements to the Data Domain data deduplication platform.

Dell EMC HCI key for multi-cloud approach?

Dell EMC is also promising synchronized releases between VxRail and the VMware vSAN software that turns its PowerEdge servers into an HCI system, although the “synchronization” could take 30 days. Still, that’s an improvement over the six months or so it now takes for the latest vSAN release to make it to VxRail.

Whether you’re protecting data or storing data, the learning curve of your operating model — regardless of whether you’re on premises or off premises — should be zero.
Sam Grocott, senior vice president of marketing, ISG, Dell EMC

Like other vendors, Dell EMC considers its HCI a key building block for private and hybrid clouds. The ability to offer private clouds with public cloud functionality is becoming an underpinning of the multi-cloud strategies at some organizations.

Sam Grocott, senior vice president of marketing for the Dell EMC infrastructure solutions group, said the strong multi-cloud flavor of the VMworld product launches reflects conversations the vendor has with its customers.

“As we talk to customers, the conversation quickly turns to what we are doing in the cloud,” Grocott said. “Customers talk about how they’re evaluating multiple cloud vendors. The reality is, they aren’t just picking one cloud, they’re picking two or even three clouds in a lot of cases. Not all your eggs will be in one basket.”

Dell EMC isn’t the only storage vendor making its storage more cloud-friendly. Its main storage rival, NetApp, also offers unified primary storage and backup options that run in the cloud, and many startups focus on cloud compatibility and multi-cloud management from the start.

Grocott said Dell’s overall multi-cloud strategy is to provide a consistent operating model on premises as well as in private and public clouds. That strategy covers Dell EMC and VMware products. Dell EMC VxRail is among the products that tightly integrate VMware with the vendor’s storage.

“That’s what we think is going to differentiate us from any of the competition out there,” he said. “Whether you’re protecting data or storing data, the learning curve of your operating model — regardless of whether you’re on premises or off premises — should be zero.”

Stu Miniman, a principal analyst at IT research firm Wikibon, said Dell EMC is moving toward what Wikibon calls a True Private Cloud.

Wikibon’s 2018 True Private Cloud report predicts almost all enterprise IT will move to a hybrid cloud model dominated by SaaS and true private cloud. Wikibon defines true private cloud as completely integrating all aspects of a public cloud, including a single point of contact for purchase, support, maintenance and upgrades.

“The new version of the private cloud is, let’s start with the operating model I have in the public cloud, and that’s how I should be able to consume it, bill it and manage it,” Miniman said. “It’s about the software, it’s about the usability, it’s about the management layer. Step one is to modernize the platform; step two is to modernize the apps. It’s taken a couple of years to move along that spectrum.”

HiveIO seeks to create buzz in HCI market

Newcomer HiveIO Inc. is trying to make it in the already crowded hyper-converged infrastructure market by touting a software-only application that it claims uses AI for resource management.

HiveIO this week released Hive Fabric 7.0, its hyper-converged application. The vendor, based in Hoboken, N.J., has actually been around since 2015 and shipped its first version of Hive Fabric that same year, but has kept a low profile until now. HiveIO’s co-founders Kevin McNamara and Ofer Bezalel came out of JP Morgan Chase’s engineering team. HiveIO CTO McNamara said the goal was to create an infrastructure that consisted of one platform, was simple to use and was inexpensive.

“They thought about a single product, single vendor, hyper-converged fabric out of the box that just deploys and just works and reduces the complexity of the data center,” said HiveIO CEO Dan Newton, who joined HiveIO last April from Rackspace. “Our team comes from an operational background, and we’re focused on making our product operationally very easy, yet very stable. We try to make the technology work for the customers. We don’t want the customers to have to work to make it work.”

Newton said HiveIO has about 400 customers, including those it picked up by acquiring the assets of HCI software vendor Atlantis Computing in July 2017. HiveIO also inherited Atlantis’ OEM deal with Lenovo, which packaged Atlantis’ HCI software on its servers. However, HiveIO has no other hardware partnerships for Hive Fabric.

Newton said the goal is to provide HCI software that can deploy in 20 minutes on three nodes and requires little training to use.

We put the Message Bus into appliances and use machine learning to manage the appliances.
Kevin McNamara, CTO, HiveIO

HiveIO describes Hive Fabric as a “zero-layer, hardware-agnostic” hyper-converged platform that runs on any x86 server or in the cloud. Hive Fabric includes a free kernel-based virtual machine hypervisor, although it can also run with VMware and Microsoft hypervisors. Hive Fabric manages storage, compute, virtualization and networking across HCI clusters through its Message Bus. It includes a REST API and Universal Rest Interface to support third-party and customer applications.

McNamara called the artificial intelligence-driven Hive Fabric Message Bus “unique to the industry.” He said the Message Bus relies on AI and metadata to format data in real time and provide predictive analytics that head off potential performance and capacity problems.

“It’s all integrated into the stack,” McNamara said. “We can see everything in the hardware, everything in the stack, everything in the guest server and everything in the application layers. We put the Message Bus into appliances and use machine learning to manage the appliances. You can move workloads across appliances.”

Newton added that every data point flows through the Message Bus.

The new Hive Fabric 7.0 simplifies resource management through a Cluster Resource Scheduler (CRS). The CRS uses AI to monitor resource allocation across the cluster and moves guest virtual machines between servers to improve operational efficiency. Hive Fabric 7.0 also allows customers to run multiple mixed-application workloads.
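
The basic rebalancing idea behind a cluster resource scheduler can be sketched in a few lines: when one host is much busier than the rest, suggest migrating one of its guests to the least-loaded host. This greedy heuristic and its `rebalance` function are invented for illustration and say nothing about how HiveIO's AI-driven CRS actually decides.

```python
# A naive sketch of cluster rebalancing: if the busiest host is far
# ahead of the idlest one, move its cheapest VM over. The tolerance
# threshold is an arbitrary choice for this example.
def rebalance(hosts):
    """hosts: {host: {vm: cpu_load}}. Return a (vm, src, dst) migration
    suggestion, or None if the cluster is already balanced."""
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    src = max(load, key=load.get)
    dst = min(load, key=load.get)
    if load[src] - load[dst] < 20:            # tolerance: arbitrary
        return None
    vm = min(hosts[src], key=hosts[src].get)  # cheapest VM to move
    return (vm, src, dst)

cluster = {
    "node1": {"db": 50, "web": 30, "cache": 10},
    "node2": {"batch": 20},
    "node3": {"dev": 15},
}
print(rebalance(cluster))   # → ('cache', 'node1', 'node3')
```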

Hive Fabric 7 from HiveIO
HiveIO’s Hive Fabric 7 management dashboard.

Forrester Research senior analyst Naveen Chhabra said HiveIO will need to prove its AI capabilities to make it in an HCI field that includes at least 15 vendors.

“A number of companies already have proven technology — including Nutanix, Cisco, Dell EMC, VMware,” Chhabra said. “HiveIO can do the same, but they must deliver at least table stakes technology, and then find out what innovations they can come up with. They talk about the interconnect fabric with artificial intelligence. It’s a transport layer for sending bits and bytes from one node to another. What kind of artificial intelligence does it have? Is it artificial intelligence or just AI washing like you hear from other vendors? And they have to find a strong use case for that artificial intelligence, even if it’s just one use case.”

HiveIO executives claim their early customers’ workloads include general server virtualization, virtual desktops, databases, log analysis and test/dev.

Hive Fabric is sold as a monthly subscription based on the number of physical servers with no restrictions on memory, storage or cores.

HiveIO promises to support Atlantis Computing hyper-converged and virtual desktop infrastructure software through 2022. Newton said HiveIO will offer Atlantis customers an upgrade path to Hive Fabric. He said HiveIO hired some Atlantis employees but is not using its technology in Hive Fabric.

HiveIO has 30 employees in the U.S. and U.K. It has completed two funding rounds and lists El Dorado Ventures, Rally Ventures, Osage Venture Partners and Citrix as investors but does not disclose its total funding.

IBM DS8882F converges array and mainframe in one rack

Talk about converged infrastructure — IBM just embedded an all-flash array inside mainframe server racks.

IBM today launched a rack-mounted IBM DS8882F array for IBM Z ZR1 and LinuxOne Rockhopper II “skinny” mainframes that rolled out earlier in 2018. The 16U DS8882F is the smallest of IBM’s high-end DS8880 enterprise storage family designed for mainframes. The new mainframes install in a standard 19-inch rack. The IBM DS8882F array inserts into the same rack and scales from 6.4 TB to 368.64 TB of raw capacity.

The IBM DS8882F is part of a large IBM storage rollout that features mostly software and cloud storage updates, including the following:

  • IBM Spectrum Protect 8.1.6 data protection software now supports automatic tiering to object storage and ransomware protection for hypervisor workloads. The software generates email warnings pointing to where an infection may have occurred. Spectrum Protect supports Amazon Web Services, IBM Cloud and Microsoft Azure.
  • IBM Spectrum Protect Plus 10.1.2 virtual backup now supports on-premises IBM Cloud Object Storage, IBM Cloud and AWS S3. It also supports VMware vSphere 6.7, encryption of vSnap repositories, and IBM Db2 databases.
  • IBM Spectrum Scale 5.0.2 added file audit logging, a watch folder and other security enhancements, along with a GUI and automated recovery features. Spectrum Scale on AWS now enables customers to use their own AWS license and supports a single file system across AWS images.
  • The IBM DS8880 platform supports IBM Cloud Object Storage and automatically encrypts data before sending it to the cloud.

The products are part of IBM’s third large storage rollout this year. It added an NVMe FlashSystem 9100 and Spectrum software in July, and cloud-based analytics and block-based deduplication in May.

Steve McDowell, senior technology analyst at Moor Insights & Strategy, said IBM has become the most aggressive of the large storage vendors when it comes to product delivery.

“IBM storage is marching to a cadence and putting out more new products faster than its competitors,” McDowell said. “We’re seeing announcements every quarter, and their products are extremely competitive.”

IBM ended a string of 22 straight quarters of declining storage revenue in early 2017 and put together four quarters of growth until declining again in the first quarter of 2018. IBM’s storage focus has been around its Spectrum software family and all-flash arrays.

IBM’s focus on footprint

McDowell called the IBM DS8882F “a nice piece of hardware.” “The zSeries is moving towards a more standard rack, and this fits right in there with almost 400 TB of raw capacity in a 19-inch rack,” he said. “It’s about capacity density and saving floor space. If I can put a zSeries and a rack-mounted storage unit side by side, it makes a nice footprint in my data center.”

“The days of an EMC VMAX spanning across your data center are gone. With flash, it’s how many terabytes or petabytes I can put into half a rack and then co-locate all of that with my servers.”

Eric Herzog, chief marketing officer for IBM storage, said reducing the footprint was the main driver of the array-in-the-mainframe.

“We created a mini-array that literally screws into the same 19-inch mainframe rack,” Herzog said. “This frees up rack space and floor space, and gives you a smaller, lower-cost entry point.”

Competing in a crowded market

IBM’s DS8880 series competes with the Dell EMC PowerMax — the latest version of the VMAX — and the Hitachi Vantara Virtual Storage Platform as mainframe storage platforms.

IBM storage revenue rebounded to grow in the second quarter this year, but the market remains crowded.

IBM’s Herzog said the storage market “is fiercely competitive in all areas, including software. It’s a dog-eat-dog battle out there. Software is just as dog-eat-dog as the array business now, which is unusual.”

The new products are expected to ship by the end of September.

Agile Networks and Microsoft announce agreement to deliver broadband internet access to rural communities in Ohio

The agreement will leverage underutilized infrastructure in counties across the state, bringing high-speed internet access to 110,000 people in rural areas without broadband

CANTON, OH (AUGUST 8, 2018) – Today, Agile Networks, a leading provider of telecommunications solutions, and Microsoft Corp. announced a new agreement to bring broadband internet access to rural areas in Ohio, reaching 110,000 currently unserved people and greatly expanding access in underserved rural areas. The partnership is part of the Microsoft Airband Initiative, which is focused on closing the broadband gap by extending broadband access to 2 million unserved people in rural America by 2022.

This partnership leverages Agile’s robust network of telecommunications infrastructure throughout the state and cutting-edge technology, including TV white spaces, to provide more people living in rural Ohio with access to broadband internet over the next four years.

“People across the state, no matter where they choose to live, work and send their children to school, should have the same access to strong, reliable broadband service,” said Kyle Quillen, Agile Networks Founder and CEO. “This partnership will have an impact on more than 900,000 people across the state of Ohio, of whom 110,000 completely lack access to broadband. We’re excited to partner with Microsoft as part of this national initiative to ensure everyone has access to the information they need, when they need it.”

“In today’s digital economy, broadband access has become a necessity across industries including healthcare, agriculture, business and education,” said Shelley McKinley, Microsoft’s head of Technology and Corporate Responsibility. “Our partnership with Agile will help deliver broadband internet access to rural communities across Ohio so that they can take advantage of today’s and tomorrow’s opportunities and the latest cloud technologies.”

Across Ohio, there are critical functions in need of reliable, high-speed connectivity, including medical clinics and rural hospitals, schools, oil and gas wells, agriculture operations, and households. By equipping its towers with innovative TV white spaces equipment, Agile’s efforts, in partnership with Microsoft, will enhance public safety interoperability across the state of Ohio, while providing competitive, affordable broadband access options to rural consumers and businesses, as well as turnkey solution sets tailored to fixed and mobile wireless carriers. As a result, this project will serve as a catalyst for economic development and rural broadband deployment in Ohio.

The Microsoft Airband Initiative is focused on bringing broadband coverage to rural Americans through commercial partnerships and investment in digital skills training for people in the newly connected communities. Proceeds from Airband connectivity projects will be reinvested into the program to expand broadband to more rural areas.

About Agile Networks

Agile Networks is the premier provider of hybrid fiber wireless broadband data networks, supplying connectivity to empower individuals and transform organizations. Agile Networks’ hybrid network – The Agile Network – utilizes vertical infrastructure along with the latest in fiber-optic and wireless technologies to provide world-class data solutions. Engineered to the stringent specifications required to support public safety, The Agile Network boasts carrier-grade performance and military-grade security. Agile’s Last-Mile Agility makes delivering solutions to rural areas just as feasible as in major cities.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

Chris Herbert, 614-448-8703, chrish@pendulumstrategygroup.com
Evan Weese, 614-282-9822, ezweese@gmail.com
Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777

Cisco lays groundwork for augmented reality in Cisco Webex app

An overhaul of the back-end infrastructure and user interface of the Cisco Webex app, rolling out this month, lays the groundwork for the vendor to expand support for augmented reality, virtual reality and other advanced video-centric technologies.

The redesign, which will be released throughout August, prioritizes video and simplifies scheduling, calendar management and in-meeting controls. Beyond that, the vendor has enhanced the cloud infrastructure that powers the video conferencing platform.

The announcement is the result of years of platform work that will allow the Cisco Webex app to better use the public cloud in conjunction with its private cloud video infrastructure, said Sri Srinivasan, vice president and general manager of the vendor’s team collaboration group.

“We’re putting the plumbing together for intelligent experiences across the board,” Srinivasan said. “I don’t think we’re ready to talk about everything AR/VR [augmented reality and virtual reality] on Webex yet, but think of it as the base plumbing.”

In April, Cisco announced that Apple iOS users would be able to share augmented reality files during meetings within the Cisco Webex app. A team of architects could use the feature to view — and edit in real time — a three-dimensional blueprint of a building they were designing, for example.

Cisco also recently began a beta partnership with startup Atheer Inc. to let Webex customers use that vendor’s AR platform, which is compatible with AR smart glasses from vendors such as Microsoft and Toshiba.

A field worker wearing smart glasses could use Atheer’s software to share a video feed of his or her current view to a meeting within the Cisco Webex app. Team members could then upload documents or drawings to the worker’s smart glasses to help solve a problem.

Cisco has been at the vanguard of combining immersive technologies with collaboration apps, analysts said. Microsoft has also taken steps to add AR to its collaboration portfolio. This spring, Microsoft released previews of two new AR apps for Microsoft HoloLens that integrate with Microsoft Teams.

“Microsoft, with HoloLens, is quite prominent these days, and they have a set of specialized applications,” said Adam Preset, analyst at Gartner. “Cisco will have opened up options to do the same with the Atheer partnership, but they’ll also have brought AR into a common application people use every day in Webex.”

Augmented reality use cases limited, but expanding

So far, augmented reality has seen the most adoption in the fields of healthcare, oil and gas production, and manufacturing, said J.P. Gownder, vice president and principal analyst at Forrester Research. But the technology would be useful in any vertical with a high proportion of field workers and significant visualization needs, he said.

By 2019, 20% of large enterprises are expected to have evaluated and adopted augmented reality, virtual reality or mixed reality technology, according to projections by Gartner. Field services, logistics, training and analytics are the most common use cases in the enterprise market at this point, according to the firm.

Immersive commerce could soon become a typical use case of augmented reality, said Marty Resnick, analyst at Gartner. Customer service agents could use AR tools to help customers fix a problem they are having at home with a product.

IDC predicted global spending on augmented and virtual reality technologies will grow at a compound annual rate of 71.6% between 2017 and 2022. Consumers will drive most of that growth, but the verticals of retail, transportation and manufacturing are also expected to ramp up investments in such products.

“Expect more consumer and business applications to leverage AR. And within seven years, it will just be another part of the conference, marketing and business collaboration stack,” said Wayne Kurtzman, analyst at IDC.

The case for cloud storage as a service at Partners

Partners HealthCare relies on its enterprise research infrastructure and services group, or ERIS, to provide an essential service: storing, securing and enabling access to the data files that researchers need to do their work.

To do that, ERIS stood up a large networked file storage service providing up to 50 TB of storage, so the research departments could consolidate their network drives, while also managing access to those files through a permission system.

But researchers were contending with growing demands to better secure data and track access, said Brent Richter, director of ERIS at the nonprofit Boston-based healthcare system. Federal regulations and state laws, as well as standards and requirements imposed by the companies and institutions working with Partners, required increasing amounts of access controls, auditing capabilities and security layers.

That put pressure on ERIS to devise a system that could better meet those heightened healthcare privacy and security requirements.

“We were thinking about how do we get audit controls, full backup and high availability built into a file storage system that can be used at the endpoint and that still carries the nested permissions that can be shared across the workgroups within our firewall,” he explained.
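
The nested-permission requirement Richter describes can be illustrated with a toy model in which users inherit access through groups, and groups can contain other groups. The `user_groups` and `can_read` helpers, group names and share paths below are all made up for this sketch; real directory services and ACL systems are considerably more involved.

```python
# A simplified sketch of nested permissions: a user inherits access
# through group membership, and groups may contain other groups.
# All names here are hypothetical.
def user_groups(user, members):
    """members: {group: set of users/groups}. Return every group the
    user belongs to, directly or through nesting."""
    found, changed = set(), True
    while changed:
        changed = False
        for group, entries in members.items():
            if group not in found and (user in entries or entries & found):
                found.add(group)      # direct member, or nested via a group
                changed = True
    return found

members = {"cardiology": {"alice"}, "research": {"cardiology"}}
shares = {"/data/trials": {"research"}}

def can_read(user, path):
    return bool(shares[path] & user_groups(user, members))

print(can_read("alice", "/data/trials"))   # → True
```

Here alice reaches the share only transitively, through cardiology's membership in research, which is the kind of nesting a file service must preserve when workgroups share data.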

Hybrid cloud storage as a service

At the time, ERIS was devising security plans based on the various requirements established by the different contracts and research projects, filling out paperwork to document those plans and performing time-intensive audits.

It was then that ERIS explored ClearSky Data. The cloud-storage-as-a-service provider was already being used by another IT unit within Partners for block storage; ERIS decided six months ago to pilot the ClearSky Data platform.

“They’re delivering a network service in our data center that’s relatively small; it has very fast storage inside of it that provides that cache, or staging area, for files that our users are mapping to their endpoints,” Richter explained.

From there, automation and software systems from ClearSky Data take those files and move them to its local data center, which is in Boston. “It replicates the data there, and it also keeps the server in our data center light. [ClearSky Data] has all the files on it, but not all the data in the files on it; it keeps what our users need when they’re using it.”

Essentially, ClearSky Data delivers on-demand primary storage, off-site backup and disaster recovery as a single service, he said.
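
The “light” local server Richter describes behaves much like a cache in front of a larger off-site tier. A toy LRU cache makes the idea concrete; the `EdgeCache` class, capacities and file names are invented for illustration and bear no relation to ClearSky Data's actual implementation.

```python
# A toy illustration of the caching idea: a small, fast local tier holds
# only recently used files, while the full data set lives off-site.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity, backend):
        self.capacity, self.backend = capacity, backend
        self.local = OrderedDict()          # file -> bytes, LRU order

    def read(self, name):
        if name in self.local:              # cache hit: serve locally
            self.local.move_to_end(name)
            return self.local[name]
        data = self.backend[name]           # miss: fetch from off-site tier
        self.local[name] = data
        if len(self.local) > self.capacity:
            self.local.popitem(last=False)  # evict least recently used
        return data

cloud = {"a.dat": b"A", "b.dat": b"B", "c.dat": b"C"}
cache = EdgeCache(capacity=2, backend=cloud)
cache.read("a.dat"); cache.read("b.dat"); cache.read("c.dat")
print(list(cache.local))   # → ['b.dat', 'c.dat']
```

The local tier stays small because it holds only what users are actively touching, while every file remains durable in the off-site copy.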

All this, however, is invisible to the end users, he added. The researchers accessing data stored on the ClearSky Data platform, as well as the one built by ERIS, do not notice the differences in the technologies as they go about their usual work.

ClearSky benefits for Partners

ERIS’ decision to move to ClearSky Data’s fully managed service delivered several specific benefits, Richter said.

He said the new approach reduced the system’s on-premises storage footprint, while accelerating a hybrid cloud strategy. It delivered high performance, as well as more automated security and privacy controls. And it offered more data protection and disaster recovery capabilities, as well as more agility and elasticity.

Richter said buying the capabilities also helped ERIS to stay focused on its mission of delivering the technologies that enable the researchers.

“We could design and engineer something ourselves, but at the end of the day, we’re service providers. We want to provide our service with all the needed security so our users would just be able to leverage it, so they wouldn’t have to figure out whether it met the requirements on this contract or another,” Richter said.

He noted, too, that the decision to go with a hybrid cloud storage-as-a-service approach allowed ERIS to focus on activities that differentiate the Partners research community, such as supporting its data science efforts.

“It allows us to focus on our mission, which is providing IT products and services that enable discovery and research,” he added.

Pros and cons of IaaS platforms

Partners’ storage-as-a-service strategy fits into the broader IaaS market, which has traditionally been broken into two parts: compute and storage, said Naveen Chhabra, a senior analyst serving infrastructure and operations professionals at Forrester Research Inc.

[Cloud storage as a service] allows us to focus on our mission, which is providing IT products and services that enable discovery and research.
Brent Richter, director of ERIS at Partners HealthCare

In that light, ClearSky Data is one of many providers offering not just cloud storage, but the other infrastructure layers — and, indeed, the whole ecosystem — needed by enterprise IT departments, with AWS, IBM and Google being among the biggest vendors in the space, Chhabra said.

As for the cloud-storage-as-a-service approach adopted by Partners, Chhabra said it can offer enterprise IT departments flexibility, scalability and faster time to market — the benefits that traditionally come with cloud. Additionally, it can help enterprise IT move more of their workloads to the cloud.

There are potential drawbacks to a hybrid cloud storage-as-a-service setup, however, Chhabra said. Applying and enforcing access management policies across both on-premises and IaaS platforms can be challenging for IT, especially as deployment size grows. And while implementing cloud-storage-as-a-service platforms, and IaaS in general, isn’t particularly challenging from a technology standpoint, moving applications onto the new platform may not be as seamless or frictionless as promoted.

“The storage may not be as easily consumable by on-prem applications. [For example,] if you have an application running on-prem and it tries to consume the storage, there could be an integration challenge because of different standards,” he said.

IaaS may also be more expensive than keeping everything on premises, he said, adding that the higher costs aren’t usually significant enough to outweigh the benefits. “It may be fractionally costlier, and the customer may care about it, but not that much,” he said.

Competitive advantage

ERIS’ pilot phase with ClearSky Data involves standing up a Linux-based file service, as well as a Windows-based file service.

Because ERIS uses a chargeback system, Richter said the research groups his team serves can opt to use the older internal system — slightly less expensive — or they can opt to use ClearSky Data’s infrastructure.

“For those groups that have these contracts with much higher data and security controls than our system can provide, they now have an option that fulfills that need,” Richter said.

That itself provides Partners a boost in the competitive research market, he added.

“For our internal customers who have these contracts, they then won’t have to spend a month auditing their own systems to comply with an external auditor that these companies bring as part of the sponsored research before you even get the contract,” Richter said. “A lot of these departments are audited to make sure they have a base level [of security and compliance], which is quite high. So, if you have that in place already, that gives you a competitive advantage.”

HPE aims new SimpliVity HCI at edge computing

Hewlett Packard Enterprise has introduced a compact hyper-converged infrastructure system designed to run IoT applications at the network edge.

HPE unveiled the SimpliVity 2600 this week, calling the device the “first software-optimized offering” in the SimpliVity HCI line. The 2U system is initially built to run a virtual desktop system, but its size and computing power make it “ideal for edge computing applications,” said Lee Doyle, the principal analyst at Doyle Research, based in Wellesley, Mass.

Thomas Goepel, the director of product management at HPE, said the company would eventually market the SimpliVity 2600 for IoT and general-purpose applications that require a smallish system with a dense virtualized environment.

As virtual desktop infrastructure, the SimpliVity 2600 provides a scale-out architecture that lets companies increase compute, memory and storage as needed. The system also has built-in backup and disaster recovery for desktop operations.

Intel Xeon processors with 22 cores each power the SimpliVity 2600, which supports up to 768 GB of memory. Hardware features include a redundant power supply, hot-pluggable solid-state drives, cluster expansion without downtime and an integrated storage controller with a battery-backed cache. The system also has a 10 GbE network interface card.

HPE’s planned Plexxi integration

HPE’s SimpliVity HCI portfolio stems from last year’s $650 million purchase of HCI vendor SimpliVity Corp. The acquired company’s technology for data deduplication and compression was a significant attraction for HPE, analysts said.

HPE has said it will eventually incorporate the hyper-converged networking (HCN) technology of Plexxi into its SimpliVity HCI systems. HPE announced its acquisition of Plexxi in May but did not disclose financial details.

“[An] HPE SimpliVity with Plexxi solution is on the roadmap,” Goepel said. He did not provide a timetable.

Plexxi’s HCN software enables a software-based networking fabric that runs on Broadcom-powered white box switches. Companies can use VMware’s vCenter dashboard to orchestrate virtual machines in a Plexxi HCI system. Plexxi software can also detect and monitor VMware NSX components attached to the fabric.

IT infrastructure management software learns analytics tricks

IT infrastructure management software has taken on a distinctly analytical flavor, as enterprise IT pros struggle to keep up with the rapid pace of DevOps and technology change.

Enterprise IT vendors that weren’t founded with AIOps pedigrees have added data-driven capabilities to their software in 2018, while startups focused on AI features have turned heads, even among traditional enterprise companies. IT pros disagree on the ultimate extent of AI’s IT ops automation takeover. But IT infrastructure management software that taps data analytics for decision-making has replaced tribal knowledge and manual intervention at most companies.

For example, Dolby Laboratories, a sound system manufacturer based in San Francisco, replaced IT monitoring tools from multiple vendors with OpsRamp’s data-driven IT ops automation software, even though Dolby is wary of the industry’s AIOps buzz. OpsRamp monitors servers and network devices under one interface, and it can automatically discover network configuration information, such as subnets and devices attached to the network.

“You can very easily get a system into the monitoring workflow, whereas a technician with his own separate monitoring system might not take the last step to monitor something, and you have a problem when something goes down,” said Thomas Wong, Dolby’s senior director of enterprise applications. OpsRamp’s monitoring software alerts are based on thresholds, but they also suggest remediation actions.
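The threshold-plus-remediation pattern described above can be sketched in a few lines. This is an illustrative model only, not OpsRamp's actual API or rule format; the metric names and suggested actions are invented:

```python
# Illustrative sketch of threshold-based alerting that pairs each breach
# with a suggested remediation action (hypothetical metrics/actions).

THRESHOLDS = {
    "cpu_percent": (90.0, "Identify the top process and consider scaling out"),
    "disk_used_percent": (85.0, "Rotate logs or expand the volume"),
}

def evaluate(metrics: dict) -> list[dict]:
    """Return one alert, with a remediation hint, per breached threshold."""
    alerts = []
    for name, value in metrics.items():
        limit, remediation = THRESHOLDS.get(name, (None, None))
        if limit is not None and value >= limit:
            alerts.append({
                "metric": name,
                "value": value,
                "threshold": limit,
                "suggested_action": remediation,
            })
    return alerts

alerts = evaluate({"cpu_percent": 97.2, "disk_used_percent": 40.0})
```

The point of the design is that the remediation hint travels with the alert itself, so the technician who receives it doesn't need a separate runbook lookup.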

Dolby’s “killer app” for OpsRamp’s IT ops automation is to patch servers and network devices, replacing manual procedures that required patches to be downloaded separately and identified by a human as critical, Wong said.

Still, Wong said Dolby will avoid OpsRamp version 5.0 for now, which introduced new AIOps capabilities in June 2018.

“We’re staying away from all of that,” he said. “I think it’s just the buzz right now.”

Data infiltrates IT infrastructure management software

While some users remain cautious or even skeptical of AIOps, IT infrastructure management software of every description — from container orchestration tools to IT monitoring and incident response utilities — now offers some form of analytics-driven automation. That ubiquity indicates at least some user demand, and IT pros everywhere must grapple with AIOps, as tools they already use add AI and analytics features.

PagerDuty, for example, has concentrated on data analytics and AI additions to its IT incident response software in 2017 and 2018. A new AI feature added in June 2018, Event Intelligence, identifies patterns in human incident remediation behavior and uses those patterns to understand service dependencies and communicate incident response suggestions to operators when new incidents occur.

“The best predictor of what someone will do in the future is what they actually do, not what they think they will do,” said Rachel Obstler, vice president of products at PagerDuty, based in San Francisco. “If a person sees five alerts and an hour later selects them together and says, ‘Resolve all,’ that tells us those things are all related better than looking at the alert payloads or the times they were delivered.”
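The idea Obstler describes, that alerts resolved together by the same person are probably related, can be illustrated with a simple co-resolution counter. This is a hedged sketch of the general technique, not PagerDuty's implementation; the alert names and history are invented:

```python
# Illustrative co-resolution learning: pairs of alerts that a responder
# resolves in one action are counted as related, regardless of their
# payloads or delivery times.

from collections import defaultdict
from itertools import combinations

def learn_related(resolution_events):
    """resolution_events: list of (responder, [alert_ids resolved together]).
    Returns a count of how often each alert pair was co-resolved."""
    related = defaultdict(int)
    for _responder, alert_ids in resolution_events:
        for a, b in combinations(sorted(alert_ids), 2):
            related[(a, b)] += 1
    return related

history = [
    ("alice", ["db-latency", "api-errors", "queue-depth"]),
    ("bob",   ["db-latency", "api-errors"]),
]
pairs = learn_related(history)
```

Pairs with the highest counts become the candidate groupings suggested to operators when similar alerts recur.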

PagerDuty users are intrigued by the new feature, but skeptical about IT ops automation tools’ reach into automated incident remediation based on such data.

“I can better understand the impact [of incidents] on our organization, where I need to make investments and why, and I like that it’s much more data-driven than it used to be,” said Andy Domeier, director of technology operations at SPS Commerce, a communications network for supply chain and logistics businesses based in Minneapolis.

SPS has built webhook integrations between PagerDuty alerts and AWS Lambda functions to attach documentation to each alert, which saves teams the time of searching a company wiki for information on how to resolve an alert. This integration also facilitates delivery of recent change information.
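The integration pattern described above can be sketched as a Lambda handler that receives a PagerDuty webhook and attaches a runbook link to the incident as a note. This is a hypothetical sketch, not SPS Commerce's code; the service names, runbook URLs, token and email are placeholders, and the webhook/note payload shapes are assumptions about PagerDuty's v2 APIs:

```python
# Hypothetical Lambda handler: on a PagerDuty webhook, attach a runbook
# link as an incident note. All names/URLs below are illustrative.

import json
import urllib.request

PAGERDUTY_TOKEN = "REPLACE_ME"  # in practice, read from env vars or Secrets Manager
RUNBOOKS = {"checkout-service": "https://wiki.example.com/runbooks/checkout"}

def handler(event, context):
    body = json.loads(event["body"])
    for message in body.get("messages", []):
        incident = message["incident"]
        service = incident["service"]["summary"]
        runbook = RUNBOOKS.get(service)
        if runbook:
            _add_note(incident["id"], f"Runbook: {runbook}")
    return {"statusCode": 200}

def _add_note(incident_id, content):
    # Assumed PagerDuty REST endpoint for incident notes
    req = urllib.request.Request(
        f"https://api.pagerduty.com/incidents/{incident_id}/notes",
        data=json.dumps({"note": {"content": content}}).encode(),
        headers={
            "Authorization": f"Token token={PAGERDUTY_TOKEN}",
            "Content-Type": "application/json",
            "From": "ops@example.com",
        },
        method="POST",
    )
    urllib.request.urlopen(req)
```

Because the lookup happens at alert time, the responder sees the relevant documentation in the incident itself rather than hunting for it mid-outage.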

“But if I want to do something meaningful in response to alerts, I have to be inside my network,” Domeier said. “I don’t think PagerDuty would be able to do that kind of thing at scale, because everyone’s environment is different.”

From IT ops automation to AIOps

AIOps is far from mainstream, but more companies aspire to full data-driven IT ops automation. In TechTarget’s 2018 IT Priorities Survey, nearly as many people said they would adopt some form of AI (13.7%) as would embrace DevOps (14.5%). And IT infrastructure management software vendors have wasted no time serving up AIOps features, as AI and machine learning buzz crests in the market.

Dynatrace’s IT monitoring tool performs predictive analytics and issues warnings to IT operators in shops such as Barbri, which offers legal bar review courses in Dallas.

“We just had critical performance issues surface recently that Dynatrace warned us about,” said Mark Kaplan, IT director at Barbri, which has used Dynatrace for four years. “We were able to react before our site went down.”
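The kind of predictive warning Kaplan describes can be approximated with a simple trend projection: fit a line to recent samples and warn if the metric is on track to cross its limit. This is a minimal sketch of the general idea, not Dynatrace's algorithm; the sample data is invented:

```python
# Minimal predictive-warning sketch: warn when a linear fit over recent
# samples projects the metric to breach its limit within the horizon.

def projected_breach(samples, limit, horizon):
    """samples: list of (time, value) pairs. Returns True if the fitted
    trend crosses `limit` within `horizon` time units of the last sample."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    cov = sum((t - mean_t) * (v - mean_v) for t, v in samples)
    var = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = cov / var if var else 0.0
    if slope <= 0:
        return False  # flat or improving trend: no warning
    _last_t, last_v = samples[-1]
    return last_v + slope * horizon >= limit

# Memory use climbing ~2 points per interval toward an 80% limit
rising = [(0, 70.0), (1, 72.0), (2, 74.0), (3, 76.0)]
```

Production systems layer seasonality, baselining and anomaly scoring on top of this, but the core value is the same: the warning arrives before the outage, as in Barbri's case.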

[AI and neural networks are] just an evolution of the standard statistics we’ve always used, and that evolution is much more human than most people believe.
Dennis Curry, executive director and deputy CTO, Konica Minolta

The monitoring vendor released Dynatrace Artificial Virtual Intelligence System, or DAVIS, an AI-powered digital virtual assistant for IT operators, in early 2017. And Barbri now uses it frequently for IT root-cause analysis and incident response. Barbri will also evaluate Dynatrace log analytics features to possibly replace Splunk.

Kaplan has already grown accustomed to daily reports from DAVIS and wants it to do more, such as add a voice interface similar to Amazon Echo’s Alexa and automated incident response.

“We can already get to the point of self-remediation if we make the proper scripts in a convoluted setup,” he said. “But we see something smoother coming in the future.”

Since Barbri rolled out DAVIS, IT ops pros have embraced a more strategic role as infrastructure architects, rather than firefighters. Nevertheless, enterprises still insist on control. Even as AIOps tools push the boundaries of machine control over other machines, unattended AI remains a distant concept for IT infrastructure management software, if it ever becomes reality.

“No one’s talking about letting AI take over completely,” Kaplan said. “Then, you end up in a HAL 9000 situation.”

The future of AI looks very human

Konica Minolta Inc., a Japanese digital systems manufacturing company, teamed up with AIOps startup ScienceLogic for a new office printer product, called Workplace Hub, which can also deliver IT management services for SMB customers. ScienceLogic’s AIOps software will be embedded inside Workplace Hub and used on the back end at Konica Minolta to manage services for customers.

But AI will only be as valuable as the human decisions it enables, said Dennis Curry, executive director and deputy CTO at Konica Minolta. He, too, is skeptical about the idea of AI that functions unattended by humans and instead sees that AI will augment human intelligence both inside and outside of IT environments.

“AI is not a sudden invention — I worked in the mid-1990s for NATO on AI and neural networks, but there wasn’t a digital environment then where it could really flourish, and we have that now,” Curry said. “It’s just an evolution of the standard statistics we’ve always used, and that evolution is much more human than most people believe.”

Cisco hyper-converged HyperFlex adds NVMe-enabled model

Cisco is bumping up the performance of its HyperFlex hyper-converged infrastructure platform with nonvolatile memory express flash storage.

The networking specialist in July plans to broaden its hyper-converged infrastructure (HCI) options with Cisco HyperFlex All NVMe. The new Cisco hyper-converged system is an NVMe-enabled 1U HX220c M5 Unified Computing System (UCS) server that’s integrated with dual Intel Xeon Skylake processors, Nvidia GPUs and Intel Optane DC SSDs.

The HX220c uses Intel Optane drives on the front end for caching. Four Intel 3D NAND NVMe SSDs of 8 TB each provide 32 TB of raw storage capacity per node. Optane SSDs are designed on Intel 3D XPoint memory technology.

HyperFlex 3.5 software extends Cisco Intersight compute and network analytics to storage. Version 3.5 supports virtual desktop infrastructure with Citrix Cloud services and hyper-convergence for SAP database applications.

“This is clearly a performance play for Cisco, with the addition of NVMe and support for Nvidia GPUs,” said Eric Slack, a senior analyst at Evaluator Group, a storage and IT research firm in Boulder, Colo. “They’re talking about SAP modernization. Cisco is going to try and sell hyper-converged to a lot of folks, but initially the targeting will be their UCS customer base. And that makes sense.”

Cisco: Hyper-converged use cases are expanding

More than 2,600 customers have installed HyperFlex, many of them existing Cisco UCS users, said Eugene Kim, a Cisco HyperFlex product marketing manager. He said customers are “pushing the limits” of HyperFlex for production storage.

“The all-flash HyperFlex we introduced [in 2017] comprises about 60% of our HCI sales. We see a lot of customers running mission-critical applications, and some customers are running 100% on HyperFlex,” Kim said.

Hyper-convergence relies on software-defined storage to eliminate the need for dedicated storage. An HCI system packages all the necessary computing resources — CPUs, networking, storage and virtualization tools — as a single integrated appliance. That’s different from converged infrastructure, in which customers can buy different components by the rack and bundle them together with a software stack.

We see a lot of customers running mission-critical applications, and some customers are running 100% on HyperFlex.
Eugene Kim, HyperFlex product marketing manager, Cisco

Cisco hyper-converged products were late to market compared with other HCI vendors. Cisco introduced HyperFlex in 2016 in partnership with Springpath, bundling the startup’s log-structured distributed file system with integrated Cisco networking. Cisco was an early investor in Springpath and eventually acquired it in 2017.

Cisco’s HCI market share jumped from 2.5% in the fourth quarter of 2016 to 4.5% in the fourth quarter of 2017, according to IDC. HyperFlex sales generated more than $56 million — a 200% increase year over year. Still, Cisco was in fourth place behind Dell, Nutanix and Hewlett Packard Enterprise in HCI hardware share, according to IDC.

As part of its partnership with Intel, Cisco added Intel Volume Management Device to HyperFlex 3.5. Intel VMD allows NVMe devices to be swapped out of the PCIe bus, avoiding a system shutdown.

Much of the heavy lifting for Cisco hyper-converged infrastructure was handled with the HyperFlex 3.0 release in January. It added Microsoft Hyper-V support alongside the existing support for VMware hypervisors, as well as the Cisco container volume driver for launching persistent storage containers with Kubernetes.

Owning the compute, network and storage software gives Cisco hyper-converged systems an advantage over traditional hardware-software HCI bundles, said Vikas Ratna, a product manager at Cisco.

“We believe being able to optimize the stack up and down provides the best on-ramp for customers [to adopt HCI]. We don’t have to overengineer, as we would if we just owned the software layer,” Ratna said.

Customers can scale Cisco HyperFlex to 64 nodes per cluster. Ratna said Cisco plans to release a 2U HyperFlex that scales to 64 TB of raw storage per node when larger NVMe SSDs are generally available.