Enterprise IT weighs pros and cons of multi-cloud management

Multi-cloud management among enterprise IT shops is real, but the vision of routine container portability between clouds has yet to be realized for most.

Multi-cloud management is more common as enterprises embrace public clouds and deploy standardized infrastructure automation platforms, such as Kubernetes, within them. Most commonly, IT teams look to multi-cloud deployments for workload resiliency and disaster recovery, or as the most reasonable approach to combining companies with loyalty to different public cloud vendors through acquisition.

“Customers absolutely want and need multi-cloud, but it’s not the old naïve idea about porting stuff to arbitrage a few pennies in spot instance pricing,” said Charles Betz, analyst at Forrester Research. “It’s typically driven more by governance and regulatory compliance concerns, and pragmatic considerations around mergers and acquisitions.”

IT vendors have responded to this trend with a barrage of marketing around tools that can be used to deploy and manage workloads across multiple clouds. Most notably, IBM’s $34 billion bet on Red Hat revolves around multi-cloud management as a core business strategy for the combined companies, and Red Hat’s OpenShift Container Platform version 4.2 updated its Kubernetes cluster installer to support more clouds, including Azure and Google Cloud Platform. VMware and Rancher also use Kubernetes to anchor multi-cloud management strategies, and even cloud providers such as Google offer products such as Anthos with the goal of managing workloads across multiple clouds.

For some IT shops, easier multi-cloud management is a key factor in Kubernetes platform purchasing decisions.

“Every cloud provider has hosted Kubernetes, but we went with Rancher because we want to stay cloud-agnostic,” said David Sanftenberg, DevOps engineer at Cardano Risk Management Ltd, an investment consultancy firm in the U.K. “Cloud outages are rare, but it’s nice to know that on a whim we can spin up a cluster in another cloud.”

Multi-cloud management requires a deliberate approach

With Kubernetes and VMware virtual machines as common infrastructure templates, some companies use multiple cloud providers to meet specific business requirements.

Unified communications-as-a-service provider 8×8, in San Jose, Calif., maintains IT environments spread across 15 self-managed data centers, plus AWS, Google Cloud Platform, Tencent and Alibaba clouds. Since the company’s business is based on connecting clients through voice and video chat globally, placing workloads as close to customers’ locations as possible is imperative, and this makes managing multiple cloud service providers worthwhile. The company’s IT ops team keeps an eye on all its workloads with VMware’s Wavefront cloud monitoring tool.

Dejan Deklich, chief product officer, 8×8

 “It’s all the same [infrastructure] templates, and all the monitoring and dashboards stay exactly the same, and it doesn’t really matter where [resources] are deployed,” said Dejan Deklich, chief product officer at 8×8. “Engineers don’t have to care where workloads are.”

Multiple times a year, Deklich estimated, the company uses container portability to move workloads between clouds when it gets a good deal on infrastructure costs, although it doesn’t move them in real time or spread apps among multiple clouds. Multi-cloud migration also only applies to a select number of 8×8’s workloads, Deklich said.

We made a conscious decision that we want to be able to move from cloud to cloud. It depends on how deep you go into integration with a given cloud provider.
Dejan Deklich, chief product officer, 8×8

“If you’re in [AWS] and using RDS, you’re not going to be able to move to Oracle Cloud, or you’re going to suffer connectivity issues; you can make it work, but why would you?” he said. “There are workloads that can elegantly be moved, such as real-time voice or video distribution around the world, or analytics, as long as you have data associated with your processing, but moving large databases around is not a good idea.”

Maintaining multi-cloud portability also requires a deliberate approach to integration with each cloud provider.

“We made a conscious decision that we want to be able to move from cloud to cloud,” Deklich said. “It depends on how deep you go into integration with a given cloud provider — moving a container from one to the other is no problem if the application inside is not dependent on a cloud-specific infrastructure.”
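For teams that keep applications free of cloud-specific dependencies, the mechanics of that portability can be as simple as pointing the same deployment manifest at clusters in different clouds. Below is a minimal sketch using the official Kubernetes Python client; the kubeconfig context names and manifest file are assumptions for illustration, not details from 8×8's environment.

```python
from kubernetes import config, utils

# Assumed kubeconfig contexts, one per managed cluster in each cloud.
CLUSTERS = ["aws-us-east", "gcp-europe-west"]

def deploy_everywhere(manifest_path: str = "app-deployment.yaml") -> None:
    """Apply the same cloud-neutral manifest to every cluster."""
    for context in CLUSTERS:
        api_client = config.new_client_from_config(context=context)
        utils.create_from_yaml(api_client, manifest_path)
        print(f"Applied {manifest_path} to {context}")

if __name__ == "__main__":
    deploy_everywhere()
```

The same manifest only works everywhere if, as Deklich notes, the containers avoid managed services specific to one provider.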

The ‘lowest common denominator’ downside of multi-cloud

Not every organization buys in to the idea that multi-cloud management’s promise of freedom from cloud lock-in is worthwhile, and the use of container portability to move apps from cloud to cloud remains rare, according to analysts.

“Generally speaking, companies care about portability from on-premises environments to public cloud, not wanting to get locked into their data center choices,” said Lauren Nelson, analyst at Forrester Research. “They are far less cautious when it comes to getting locked into public cloud services, especially if that lock in comes with great value.”

Generally speaking, companies care about portability from on-premises environments to public cloud, not wanting to get locked into their data center choices. They are far less cautious when it comes to getting locked into public cloud services …
Lauren Nelson, analyst, Forrester Research

In fact, some IT pros argue that lock-in is preferable to missing out on the value of cloud-specific secondary services, such as AWS Lambda.

“I am staunchly single cloud,” said Robert Alcorn, chief architect of platform and product operations at Education Advisory Board (EAB), a higher education research firm headquartered in Washington, D.C. “If you look at how AWS has accelerated its development over the last year or so, it makes multi-cloud almost a nonsensical question.”

For Alcorn, the value of integrating EAB’s GitLab pipelines with AWS Lambda outweighs the risk of lock-in to the AWS cloud. Connecting AWS Lambda and API Gateway to Amazon’s SageMaker for machine learning has also represented almost a thousandfold drop in costs compared to the company’s previous container-based hosting platform, he said.
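To illustrate the kind of integration Alcorn describes, here is a hedged sketch of a Lambda handler, fronted by API Gateway, that forwards a request to a SageMaker endpoint with boto3. The endpoint name and payload shape are hypothetical, not EAB's actual setup.

```python
import boto3

# SageMaker runtime client; the Lambda execution role must allow sagemaker:InvokeEndpoint.
runtime = boto3.client("sagemaker-runtime")

def handler(event, context):
    """API Gateway proxy integration: pass the request body to a model endpoint."""
    payload = event.get("body") or "{}"
    response = runtime.invoke_endpoint(
        EndpointName="scoring-model-prod",  # hypothetical endpoint name
        ContentType="application/json",
        Body=payload,
    )
    prediction = response["Body"].read().decode("utf-8")
    return {"statusCode": 200, "body": prediction}
```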

Even without the company’s interest in Lambda integration, the work required to keep applications fully cloud-neutral isn’t worth it for his company, Alcorn said.

“There’s a ceiling to what you can do in a truly agnostic way,” he said. “Hosted cloud services like ECS and EKS are also an order of magnitude simpler to manage. I don’t want to pay the overhead tax to be cloud-neutral.”

Some IT analysts also sound a note of caution about the value of multi-cloud management for disaster recovery or price negotiations with cloud vendors, depending on the organization. For example, some financial regulators require multi-cloud deployments for risk mitigation, but the worst case scenario of a complete cloud failure or the closure of a cloud provider’s entire business is highly unlikely, Forrester’s Nelson wrote in a March 2019 research report, “Assess the Pain-Gain Tradeoff of Multicloud Strategies.”

Splitting cloud deployments between multiple providers also may not give enterprises as much of a leg up in price negotiations as they expect, unless the customer is a very large organization, Nelson wrote in the report.

The risks of multi-cloud management are also manifold, according to Nelson’s report, from high costs for data ingress and egress between clouds to network latency and bandwidth issues, broader skills requirements for IT teams, and potentially double the resource costs to keep a second cloud deployment on standby for disaster recovery.

Of course, value is in the eye of the beholder, and each organization’s multi-cloud mileage may vary.

“I’d rather spend more for the company to be up and running, and not lose my job,” Cardano’s Sanftenberg said.

What are the Azure Stack HCI features?

IT shops that want tighter integration between the Windows Server OS and an HCI platform have a few choices in the market, including Azure Stack HCI.

Microsoft offers two similarly named but different offerings. Microsoft markets Azure Stack as a local extension to the cloud, essentially Azure in a box that runs in the data center. The company positions Azure Stack HCI, announced in March 2019, as a highly available, software-defined platform for local VM workload deployments. Organizations can also use Azure Stack HCI to connect to Azure and use its various services, including backup and site recovery.

Azure Stack HCI is fundamentally composed of four layers: hardware, software, management and cloud services.

Who sells the hardware for Azure Stack HCI?

Azure Stack HCI capitalizes on the benefits associated with other HCI offerings, such as high levels of software-driven integration, and common and consistent management. OEM vendors, including Dell, Fujitsu, HPE and Lenovo, sell the Azure Stack HCI hardware that Microsoft validates. The hardware is typically integrated and modular, combining portions of compute, memory, storage and network capacity into each unit.

What OS does Azure Stack HCI use?

The Azure Stack HCI platform runs on the Windows Server 2019 Datacenter edition. Using this server OS provides the familiar Windows environment, but also brings core components of the HCI software stack, including Hyper-V for virtualization, Storage Spaces Direct for storage, and enhanced software-defined networking features in Microsoft’s latest server OS.

How is Azure Stack HCI managed?

A critical part of an HCI platform is the ability to provision and monitor every element, which means management is a crucial component of Azure Stack HCI. Organizations have several management options such as Windows Admin Center, System Center, PowerShell and numerous third-party tools. Management in Azure Stack HCI emphasizes the use of automation and orchestration, allowing greater speed and autonomy in provisioning and reporting.
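As a rough illustration of that automation emphasis, the sketch below shells out to standard Windows Server cmdlets (Get-ClusterNode from the FailoverClusters module, Get-VM from Hyper-V) from Python. It assumes it runs on a cluster node with those modules installed; these are generic Windows Server cmdlets, not an Azure Stack HCI-specific API.

```python
import json
import subprocess

def run_ps(command: str):
    """Run a PowerShell command and return its JSON-parsed output."""
    completed = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", f"{command} | ConvertTo-Json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(completed.stdout)

# Cluster node health and the VMs hosted on this node.
nodes = run_ps("Get-ClusterNode | Select-Object Name, State")
vms = run_ps("Get-VM | Select-Object Name, State")
print(nodes)
print(vms)
```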

What role does the Azure cloud play?

Organizations that purchase Azure Stack HCI have the option to connect to a wide range of Azure services. Some of these services include Azure Site Recovery for high availability and disaster recovery, Azure Monitor for comprehensive monitoring and analytics, Azure Backup for data protection, and Azure File Sync for server synchronization with the cloud.

What’s the primary use for Azure Stack HCI?

When exploring whether to purchase Azure Stack HCI, it’s important to understand its intended purpose. Unlike Azure Stack, Azure Stack HCI is not explicitly designed for use with the Azure cloud. Rather, Azure Stack HCI is an HCI platform tailored for on-premises virtualization for organizations that want to maximize the use of the hardware.

The decision to buy Azure Stack HCI should be based primarily on the same considerations involved with any other HCI system. For example, HCI might be the route to go when replacing aging hardware, optimizing the consolidation of virtualized workloads, and building out efficient edge or remote data center deployments that take up minimal space.

IT decision-makers should view the ability to utilize Azure cloud services as a useful benefit, but not the primary motivation to use Azure Stack HCI.

BizDevOps, DevOps feedback loops guide IT transformation

For many IT shops, BizDevOps and DevOps feedback loops are the final stage of the digital transformation process, but one enterprise found them a useful place to start.

When American Fidelity Assurance Company, an insurance company in Oklahoma City that specializes in employee benefits, began its transition to DevOps in 2015, it rolled out continuous integration (CI) and deployment tools for application development, which is a customary starting point. At the same time, the company deployed Dynatrace monitoring tools to give developers fast feedback on the causes of software defects in production.

Dynatrace competes with other DevOps monitoring vendors such as New Relic and Cisco AppDynamics, which began with a focus on application performance monitoring, and have all added AI-driven automation and infrastructure monitoring features in recent years. Dynatrace was spun out of Compuware in 2014, while its main competitors were founded in 2008.

When American Fidelity first engaged with Dynatrace, the vendor’s focus was on monitoring how customers interacted with digital products, or digital experience management, based on its 2015 acquisition of Keynote. Most DevOps monitoring tools now offer customer experience management features, but at the time, that was a Dynatrace specialty, and it helped American Fidelity compose its initial to-do list for application development.

This focus on DevOps feedback loops to guide developer workflows is something IT pros typically do much later in the DevOps transformation process, analysts say. 

“Monitoring is often an afterthought,” said Nancy Gohring, analyst at 451 Research. “People adopt new cloud technologies, then DevOps, but monitoring hasn’t been baked in, and they haven’t been prescriptive about how to approach it — and then they start running into problems.”

American Fidelity’s experience has been the opposite. While it deploys applications through a CI/CD pipeline, its IT infrastructure is still mostly on-premises VMware virtual machines, though a move to public cloud is underway. Rather than focus on cloud-native infrastructure automation, the company focused first on continuously improving the applications it delivers on the infrastructure it already had.

People often don’t realize that Dynatrace provides the ability to prioritize the areas of your applications that are most used for improvement. You can tell how many times something is called and how often it is called, and what will give you the most bang for your buck.
Gary Carr, cloud infrastructure architect, American Fidelity

Even without highly complex microservices architectures, that infrastructure was becoming more complicated than IT staff could manage through manual intervention as the company deployed new network security devices and adopted microsegmentation.

“Developers did not have enough visibility to see all the connections between systems,” said Gary Carr, cloud infrastructure architect at American Fidelity. “They spent a lot of time troubleshooting the log files, exception messages, and even more time trying to reproduce issues in our development environments.”

Dynatrace sped up troubleshooting, but also helped the company prioritize which defects to fix first.

“People often don’t realize that Dynatrace provides the ability to prioritize the areas of your applications that are most used for improvement,” Carr said. “You can tell how many times something is called and how often it is called, and what will give you the most bang for your buck.”

BizDevOps feedback guides product roadmaps, software backlog

Developers and IT operations pros at American Fidelity gained visibility into applications deployed with DevOps feedback loops, but the company also gave application managers and marketing personnel access to Dynatrace dashboards, which help them make decisions on what to develop next. This practice, known as BizDevOps, is the ultimate goal for many companies that undergo digital transformation, but most enterprises still fall short of realizing that ideal.

Marketers at American Fidelity, however, already use Dynatrace dashboards and user experience monitoring tools to look at customers’ browser requirements, where website traffic is coming from, site response times and which products are used most. Dynatrace also analyzes those metrics to generate an overall customer experience rating that business managers use to determine what’s most in need of improvement.

In the meantime, American Fidelity’s DevOps teams rolled out advances in Dynatrace’s products as they emerged since 2015, such as the DAVIS data analytics system. DAVIS narrows down the root cause of IT incidents and generates ServiceNow support tickets for IT incident response. Dynatrace can also automate the response to incidents without human intervention, including rolling back problematic application deployments, but American Fidelity hasn’t yet used those features.
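Beyond the built-in ServiceNow integration, detected problems can also be pulled programmatically. The sketch below queries the Dynatrace Problems API (v2) over REST with an API token; the tenant URL and token are placeholders, and only commonly documented response fields are used. Treat it as a sketch against the v2 API, not American Fidelity's setup.

```python
import requests

DYNATRACE_URL = "https://YOUR-TENANT.live.dynatrace.com"  # placeholder tenant URL
API_TOKEN = "REDACTED"  # token needs permission to read problems

resp = requests.get(
    f"{DYNATRACE_URL}/api/v2/problems",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for problem in resp.json().get("problems", []):
    if problem.get("status") == "OPEN":
        # Hand off to ticketing (for example, ServiceNow) or chat from here.
        print(problem.get("displayId"), problem.get("title"))
```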

“While improving automation could possibly help us, there are times when the change that it’s going to make is not enough to take focus off our main projects,” Carr said. “Our goal is always to focus on our customers to make sure we can provide things like the fastest efficient claim process, the best enrollment system.”

Public cloud, containers and other advanced IT automation is on the to-do list for the company, as is exploring AIOps automation. In the meantime, Carr said he’d like to see Dynatrace offer the same kinds of troubleshooting and feedback for application security that it does for application performance.

“With Dynatrace, you’re already in the middle of transactions, you’re involved in the networking, you are involved in the data sources, and all the services,” Carr said. “It would be nice if Dynatrace actually had some implementation of security … in that same view.”

A Dynatrace spokesperson said the vendor is open to customer feedback on its roadmap, but that security is not a focus currently.

Third-party Kubernetes tools hone performance for database containerization

Enterprise IT shops that want to modernize legacy applications or embark on a database containerization project are the target users for Kubernetes tools released this month.

Robin Systems, which previously offered its own container orchestration utility, embraced Kubernetes with hyper-converged infrastructure software it claimed can optimize quality of service for database containerization, AI and machine learning applications, as well as big data applications, such as Spark and Hadoop. Turbonomic also furthered its Kubernetes optimization features with support for multi-cloud container management. Turbonomic’s Self-Managing Kubernetes tool could also help with database containerization, because it takes performance and cost optimization into account.

The products join other third-party Kubernetes management tools that must add value over features offered natively in pure upstream Kubernetes implementations — a difficult task as the container orchestration engine matures. However, the focus on database containerization and its performance challenges aligns with the enterprise market’s momentum, analysts said.

“For many vendors, performance is an afterthought, and the monitoring and management side is an afterthought,” said Milind Govekar, analyst with Gartner. “History keeps repeating itself along those lines, but now we can make mistakes faster because of automation and worse mistakes with containers, because they’re easier to spin up.”

While early adopters such as T-Mobile already use DC/OS for database containerization, most enterprises aren’t yet ready for stateful applications in containers.

“Stateless apps are still the low-hanging fruit,” said Jay Lyman, analyst with 451 Research. “It will be a slow transition for organizations pushing [containerization] into data-rich applications.”

Robin Systems claims superior database containerization approach

Now, we can make mistakes faster because of automation and worse mistakes with containers, because they’re easier to spin up.
Milind Govekar, analyst, Gartner

Robin Systems faces more of an uphill battle against both pure Kubernetes and established third-party tools with its focus on big data apps and database containerization. Mesosphere has already targeted this niche for years with DC/OS. And enterprises can also look to Red Hat OpenShift for database containerization, given the platform’s maturity and users’ familiarity with Red Hat’s products.

Robin Systems’ founders claimed better quality-of-service guarantees for individual containers and workloads than OpenShift and DC/OS, because the company designed and controls all levels of its software-defined infrastructure package, which includes network and storage management, in addition to container orchestration. It guarantees minimum and maximum application performance throughout the infrastructure — including CPU, memory, and network and storage IOPS allocations — within one policy, whereas competitors integrate with tools such as the open source Container Network Interface plug-in, OpenShift Container Storage and Portworx persistent storage.
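For comparison, stock Kubernetes expresses per-container guarantees only for CPU and memory, through resource requests and limits; the storage and network IOPS guarantees Robin describes sit outside the core API. A minimal sketch with the Kubernetes Python client, using illustrative values:

```python
from kubernetes import client

# Requests are the guaranteed minimum; limits are the hard ceiling.
db_container = client.V1Container(
    name="postgres",
    image="postgres:11",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "2", "memory": "4Gi"},
        limits={"cpu": "4", "memory": "8Gi"},
    ),
)
print(db_container.resources)
```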

Control over the design of the storage layer enables Robin’s platform to take cluster-wide snapshots of Kubernetes deployments and their associated applications, which isn’t possible natively on OpenShift or DC/OS yet.

Plenty of vendors claim a superior approach with their Kubernetes tools, and many major enterprise IT shops have already chosen a strategic Kubernetes vendor for production application development and deployment.

However, companies such as John Hancock also must modernize a massive portfolio of legacy applications, including IBM DB2 and Microsoft SQL Server databases in versions so old they’re no longer supported by the original manufacturers.

John Hancock, a Boston-based insurer and a division of financial services group Manulife Financial Corp., is conducting a proof of concept with Robin Systems as it mulls database containerization for IBM DB2. The company wants to move the mainframe-based system into the Microsoft Azure cloud for development and testing, which it sees as simpler and more affordable than its current approach, in which a separate department manages internally developed production apps with Pivotal’s PaaS offering.

“It’s not going to fly if [a database containerization platform] will take four people eight months to get working,” said Kurt Straube, systems director for the insurance company. Robin’s hyper-converged infrastructure approach, which bundles networking and storage with container and compute management, might be a shortcut to database containerization for legacy apps where ease of use and low cost are paramount.

Turbonomic’s Kubernetes tool manages MongoDB performance.

Turbonomic targets container placement, not workloads

While Robin Systems’ platform approach puts it squarely into competition with PaaS vendors such as Red Hat OpenShift, Pivotal Container Service (PKS) and Mesosphere’s DC/OS, Turbonomic’s product spans Kubernetes platforms such as Amazon Elastic Container Service for Kubernetes, Azure Kubernetes Service, Google Kubernetes Engine and PKS.

Turbonomic’s Kubernetes tool optimizes container placement across different services, but doesn’t manage individual container workloads. It fills a potential need in the market, and it keeps Turbonomic out of direct competition with established Kubernetes tools.

“There are many PaaS vendors that can manage Kubernetes clusters, but what they can’t do is tell a user how to optimize the number of containers on a cluster so that the right resources are available to each container,” Gartner’s Govekar said.

A number of tools manage VM placement between multiple cloud infrastructure services, such as the Open Service Broker API. However, “many of these tools don’t do a great job from a performance optimization standpoint specifically,” Govekar said.

Box security gets a boost with built-in Shield

SAN FRANCISCO — Box shops will have the ability to get granular with a new built-in Box security feature, but organizations will have to find a role for the tool alongside their other security platforms.

Box Shield, which was introduced at the file-sharing company’s annual conference, BoxWorks, will detect anomalies and risky user behavior within Box. Experts here discussed the potential behind Box Shield and how it might integrate with existing security and identity management tools within businesses.

“Security is such a tough problem,” said James Sinur, vice president at Aragon Research, based in Morgan Hill, Calif. “I haven’t found any security software that covers all aspects of it.”

How Box Shield works

Box Shield has three main functionalities: smart access, anomaly detection and a content firewall.

Where I think [Box] will make their contribution is by adjusting policies.
James Sinur, vice president, Aragon Research

Smart access enables end users and IT admins to classify Box files according to their level of confidentiality. Then, IT admins can apply policies based on those classifications.

Anomaly detection helps IT to discover compromised accounts and identify access abuse. For example, if an end user accesses Box from Guatemala and downloads large amounts of data, Box Shield will flag that as risky behavior.
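Box has not published how Shield scores risk, but the general pattern is familiar: correlate location with download volume and flag outliers. A toy illustration in plain Python, with made-up thresholds and no claim to match Box's actual models:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class DownloadEvent:
    user: str
    country: str
    bytes_downloaded: int

USUAL_COUNTRIES = {"US", "GB"}   # hypothetical baseline for this tenant
DAILY_LIMIT = 5 * 1024 ** 3      # 5 GiB, an arbitrary example threshold

def flag_risky_users(events):
    """Flag users who pull unusually large volumes from unusual locations."""
    totals = defaultdict(int)
    flagged = set()
    for event in events:
        totals[event.user] += event.bytes_downloaded
        if event.country not in USUAL_COUNTRIES and totals[event.user] > DAILY_LIMIT:
            flagged.add(event.user)
    return sorted(flagged)
```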

The content firewall feature can go beyond two-factor authentication to verify external users and check the security of devices.

IT can also use Box Shield to uncover historical data about a user’s activity and access analytics about their behavior.

Box Shield tries to play nice with other security

Sinur said he expects customers to use Box Shield in conjunction with other security platforms.

“Where I think [Box] will make their contribution is by adjusting policies that govern those pieces of [content],” he said.

Box is well-known for a plethora of integrations with third-party platforms — from Google and Slack to Microsoft and Okta. The company is already identifying places where Box Shield would integrate with other cloud access security broker (CASB) services, CEO Aaron Levie said in a press conference. Customers with an existing security information management tool, for example, would be able to use Box Shield in conjunction with it, he said.

An IT security analyst at a financial institution who wanted to remain anonymous was very interested in the new tool. His company already has several security technologies in place, such as Symantec and Okta, and would use Box Shield in addition to those services, he said.

“From a nonmanaged versus managed device, it would help us keep track of what’s going in and what’s going out based off of the device control,” he added.

Box Shield, however, would potentially replace the company’s current mobile device management platform, MobileIron.

“It would frequently push certificates out and start managing our CASBs,” he said. “We would use Box to help identify patterns in data movement.”

Pricing concerns

Pricing details aren’t yet released, but organizations will have to pay an additional cost for Box Shield, according to the vendor.

Pencils of Promise, a nonprofit organization in New York, is interested in Box Shield — but only at an affordable cost, said Ben Bromberg, senior manager of data systems at the nonprofit.

“It does seem like the sort of thing that an organization like mine would appreciate, but I have a suspicion that it would be at a price point that would be out of our reach,” he said.  

Box Shield will be available in private beta later this year, the company said.

HPE’s HCI system takes aim at space-constrained data centers

The latest addition to HPE’s HCI portfolio aims to give smaller IT shops a little less bang for a lot less buck.

The HPE SimpliVity 2600 configures up to four compute modules in a 2U space, and features “always-on” deduplication and compression. Those capabilities often appeal to businesses with space-constrained IT environments or with no dedicated data center at all, particularly ones that deploy VDI applications on remote desktops for complex workloads and require only moderate storage.

Examples include branch offices, such as supermarkets or retailers with no dedicated data center room, which are likely to keep a server in a manager’s office, said Thomas Goepel, director of HPE’s product management for hyper-converged systems.

Higher-end HPE HCI products, such as the SimpliVity 380, emphasize operational efficiencies, but their compute power may exceed the needs of many remote branch offices, and at a higher cost, so the 2600’s price-performance ratio may be more attractive, said Dana Gardner, principal analyst at Interarbor Solutions LLC in Gilford, N.H.

“Remote branch offices tend to look at lower-cost approaches over efficiencies,” he said. “Higher-end [HPE HCI systems] and in some cases the lower-end boxes, may not be the right fit for what we think of as a ROBO server.”

Dana Gardner, Interarbor Solutions

On the other hand, many smaller IT shops lack internal technical talent and may struggle to implement more complex VDI workloads.

“[VDI] requires a lot of operational oversight to get it up and rolling and tuned in with the rest of the environment,” Gardner said.

The market for higher compute density HCI to run complex workloads that involve VDI applications represents a rich opportunity, concurred Steve McDowell, a senior analyst at Moor Insights & Strategy. “It’s a smart play for HPE, and should compete well against Nutanix,” he said.

There has been a tremendous appetite [among users] for HCI products in general because they come packaged and ready to install.
Dana Gardner, principal analyst, Interarbor Solutions

The HPE SimpliVity 2600, based on the company’s Apollo 2000 platform, also overlaps with HPE’s Edgeline systems unveiled last month, although there are distinct differences in the software stack and target applications, McDowell said. The 2600 is more of an appliance with a fixed feature set contained in a consolidated management framework.

The Edgeline offering, meanwhile, targets infrastructure consolidation out on the edge with a more even balance of compute, storage and networking capabilities.

Higher-end HPE HCI offerings have gained traction among corporate users. Revenues for these systems surged 280% in this year’s first quarter compared with a year ago, versus 76% growth for the overall HCI market, according to IDC, the market research firm based in Framingham, Mass.

“There has been a tremendous appetite for HCI products in general because they come packaged and ready to install,” Gardner said. “HPE is hoping to take advantage of this with iterations that allow them to expand their addressable market, in this case downward.”

The 2600 will be available sometime by mid-July, according to HPE.

Database DevOps tools bring stateful apps up to modern speed

DevOps shops can say goodbye to a major roadblock in rapid application development.

At this time in 2017, cultural backlash from database administrators (DBAs) and a lack of mature database DevOps tools made stateful applications a hindrance to the rapid, iterative changes made by Agile enterprise developers. But, now, enterprises have found both application and infrastructure tools that align databases with fast-moving DevOps pipelines.

“When the marketing department would make strategy changes, our databases couldn’t keep up,” said Matthew Haigh, data architect for U.K.-based babywear retailer Mamas & Papas. “If we got a marketing initiative Thursday evening, on Monday morning, they’d want to know the results. And we struggled to make changes that fast.”

Haigh’s team, which manages a Microsoft Power BI data warehouse for the company, has realigned itself around database DevOps tools from Redgate since 2017. The DBA team now refers to itself as the “DataOps” team, and it uses Microsoft’s Visual Studio Team Services to make as many as 15 to 20 daily changes to the retailer’s data warehouse during business hours.

Redgate’s SQL Monitor was the catalyst to improve collaboration between the company’s developers and DBAs. Haigh gave developers access to the monitoring tool interface and alerts through a Slack channel, so they could immediately see the effect of application changes on the data warehouse. They also use Redgate’s SQL Clone tool to spin up test databases themselves, as needed.
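One generic way to surface such alerts in a Slack channel, independent of how SQL Monitor itself delivers them, is an incoming webhook. The sketch below assumes a webhook URL and alert fields chosen for illustration.

```python
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"  # placeholder

def post_alert(alert_name: str, severity: str, details: str) -> None:
    """Push a database monitoring alert into the shared developer/DBA channel."""
    text = f"*{alert_name}* ({severity})\n{details}"
    resp = requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)
    resp.raise_for_status()

post_alert("Long-running query", "high", "Orders table scan exceeded 30 seconds")
```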

“There’s a major question when you’re starting DevOps: Do you try to change the culture first, or put tools in and hope change happens?” Haigh said. “In our case, the tools have prompted cultural change — not just for our DataOps team and dev teams, but also IT support.”

Database DevOps tools sync schemas

Redgate’s SQL Toolbelt suite is one of several tools enterprises can use to make rapid changes to database schemas while preserving data integrity. Redgate focuses on Microsoft SQL Server, while other vendors, such as Datical and DBmaestro, support a variety of databases, such as Oracle and MySQL. All of these tools track changes to database schemas from application updates and apply those changes more rapidly than traditional database management tools. They also integrate with CI/CD pipelines for automated database updates.
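Under the hood, these tools share the same basic pattern: record which schema changes have already been applied and apply only the new ones, in order. The following is a stripped-down illustration of that pattern using SQLite, not any vendor's implementation.

```python
import sqlite3

# Ordered list of (id, SQL) pairs; real tools read these from changelog files.
CHANGES = [
    ("001_create_customers", "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002_add_email", "ALTER TABLE customers ADD COLUMN email TEXT"),
]

def migrate(db_path: str = "app.db") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS schema_changes (id TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT id FROM schema_changes")}
    for change_id, sql in CHANGES:
        if change_id in applied:
            continue  # already applied; skipping keeps deployments repeatable
        conn.execute(sql)
        conn.execute("INSERT INTO schema_changes (id) VALUES (?)", (change_id,))
        conn.commit()
    conn.close()

if __name__ == "__main__":
    migrate()
```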

Radial Inc., an e-commerce company based in King of Prussia, Pa., and spun out of eBay in 2016, took a little more than two years to establish database DevOps processes with tools from Datical. In that time, the company has trimmed its app development processes that involve Oracle, SQL Server, MySQL and Sybase databases from days down to two or three hours.

“Our legacy apps, at one point, were deployed every two to three months, but we now have 30 to 40 microservices deployed in two-week sprints,” said Devon Siegfried, database architect for Radial. “Each of our microservices has a single purpose and its own data store with its own schema.”

That means Radial, a 7,000-employee multinational company, manages about 300 Oracle databases and about 130 instances of SQL Server. The largest database change log it’s processed through Datical’s tool involved more than 1,300 discrete changes.

“We liked Datical’s support for managing at the discrete-change level and forecasting the impact of changes before deployment,” Siegfried said. “It also has a good rules engine to enforce security and compliance standards.”

Datical’s tool is integrated with the company’s GoCD DevOps pipeline, but DBAs still manually kick off changes to databases in production. Siegfried said he hopes that will change in the next two months, when an update to Datical will allow it to detect finer-grained attributes of objects from legacy databases.

ING Bank Turkey looks to Datical competitor DBmaestro to link .NET developers who check in changes through Microsoft’s Team Foundation Server 2018 to its 20 TB Oracle core banking database. Before its DBmaestro rollout in November 2017, those developers manually tracked schema and script changes through the development and test stages and ensured the right ones deployed to production. DBmaestro now handles those tasks automatically.

“Developers no longer have to create deployment scripts or understand changes preproduction, which was not a safe practice and required more effort,” said Onder Altinkurt, IT product manager for ING Bank Turkey, based in Istanbul. “Now, we’re able to make database changes roughly weekly, with 60 developers in 15 teams and 70 application development pipelines.”

Database DevOps tools abstract away infrastructure headaches

Keeping database schemas and deployment scripts consistent through rapid application changes is an important part of DevOps practices with stateful applications, but there’s another side to that coin — infrastructure provisioning.

Stateful application management through containers and container orchestration tools such as Kubernetes is still in its early stages, but persistent container storage tools from Portworx Inc. and data management tools from Delphix have begun to help ease this burden, as well.

GE Digital put Portworx container storage into production to support its Predix platform in 2017, and GE Ventures later invested in the company.

Now, [developers] make database changes roughly weekly, with 60 developers in 15 teams and 70 application development pipelines.
Onder Altinkurt, IT product manager, ING Bank Turkey

“Previously, we had a DevOps process outlined. But if it ended at making a call to GE IT for a VM and storage provisioning, you give up the progress you made in reducing time to market,” said Abhishek Shukla, managing director at GE Ventures, based in Menlo Park, Calif. “Our DevOps engineering team also didn’t have enough time to call people in IT and do the infrastructure testing — all that had to go on in parallel with application development.”

Portworx allows developers to describe storage requirements such as capacity in code, and then triggers the provisioning at the infrastructure layer through container orchestration tools, such as Mesosphere and Kubernetes. The developer doesn’t have to open a ticket, wait for a storage administrator or understand the physical infrastructure. Portworx can arbitrate and facilitate data management between multiple container clusters, or between VMs and containers. As applications change and state is torn down, there is no clutter to clean up afterward, and Portworx can create snapshots and clone databases quickly for realistic test data sets.
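In Kubernetes terms, describing storage requirements in code typically means a PersistentVolumeClaim against a StorageClass backed by the storage provider. A minimal sketch with the Kubernetes Python client follows; the StorageClass name is a hypothetical Portworx-backed class, not a value from GE's environment.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the target cluster

claim = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="portworx-db",  # hypothetical StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=claim
)
```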

Portworx doesn’t necessarily offer the same high-octane performance for databases as bare-metal servers, said a Portworx partner, Kris Watson, co-founder of ComputeStacks, which packages Portworx storage into its Docker-based container orchestration software for service-provider clients.

“You may take a minimal performance hit with software abstraction layers, but rapid iteration and reproducible copies of data are much more important these days than bare-metal performance,” Watson said.

Adding software-based orchestration to database testing processes can drastically speed up app development, as Choice Hotels International discovered when it rolled out Delphix’s test data management software a little more than two years ago.

“Before that, we had never refreshed our test databases. And in the first year with Delphix, we refreshed them four or five times,” said Nick Suwyn, IT leader at the company, based in Rockville, Md. “That has cut down data-related errors in code and allowed for faster testing, because we can spin up a test environment in minutes versus taking all weekend.”

The company hasn’t introduced Delphix to all of its development teams, as it prioritizes a project to rewrite the company’s core reservation system on AWS. But most of the company’s developers have access to self-service test databases whenever they are needed, and Suwyn’s team will link Delphix test databases with the company’s Jenkins CI/CD pipelines, so developers can spin up test databases automatically through the Jenkins interface.

IT infrastructure automation boosts digital initiatives

With businesses becoming more digitally dependent and IT responsibilities outpacing budgets, IT shops are being forced to evolve. This transformation requires not just a change in infrastructure technology, but in the organization of IT personnel as well — an organizational makeover that often determines the success of digital business.

As firms drive new digital initiatives, such as developing digital products and services, using analytics and investing in application development, IT services have started to have a more direct effect on revenue opportunities. As a result, IT must become more responsive in order to speed up the delivery of those new services.

To improve responsiveness, IT shops often shift personnel to work directly with the line-of-business teams to understand their demands better. Companies add budget and headcount to address this increase in IT demands and support each new initiative, while simultaneously adding budget to support the increased infrastructure needed to handle the new initiatives. Or they can find a new way to get the same results.

The new way

Ultimately, it’s the desire to find innovative ways to dramatically reduce the cost of routine IT maintenance and management that drives demand for infrastructure transformation. The end result is an as-a-service infrastructure that frees existing personnel to cover the added responsibilities and speed delivery of IT services. Multiple emergent technologies, such as flash storage, can help, delivering transformational benefits in performance, efficiency and TCO. Technologies like flash are only part of the story, however. Another approach that’s just as beneficial is IT infrastructure automation.

Manual tasks inhibit digital business. Every hour a highly trained IT resource spends on a manual — and likely routine — task is an hour that could have been spent helping to drive a potential revenue-generating digital initiative. As businesses increase their IT infrastructure automation efforts, an emerging concept called composable infrastructure has gained interest.

With composable infrastructure, IT infrastructure is virtualized to dynamically and efficiently allocate resources to individual applications. Composable infrastructure also provides the necessary analytics to fine-tune infrastructure. Ideally, software ensures the right resources are available at the right time, new resources can be added on demand, and capacity or performance can be contracted when demand changes. Cisco, Hewlett Packard Enterprise, Kaminario and other vendors promote the composable infrastructure concept.

There are several factors to consider as composable infrastructure gains traction:

  • The intelligence to drive IT infrastructure automation: Arguably the first step in any effort to automate IT is knowing what to automate, along with when and how to do it efficiently. How much performance and capacity does each application need? How much can the infrastructure provide? How will these demands change over time? Providing this information requires the right level of intelligence and predictive analytics to understand the nature of each application’s demand. Done right, this results in more efficient infrastructure design and a reduction in capital investment. An even more valuable likely benefit is in personnel resource savings, as this intelligence enables automatic tuning of the infrastructure.
  • Granularity of control: Intelligence is important, but the ability to use that intelligence offers the most tangible benefits. Composable infrastructure products typically provide controls, such as APIs, to enable programmatic management. In some cases, this lets the application automatically demand resources when it identifies increasing demand. The more likely near-term scenario is that these controls will be used to automate planned manual tasks, such as standing up infrastructure for the deployment of a new application. Or, for example, you could use the controls to automate the expansion of a virtual machine environment. As IT infrastructure automation efforts expand and the number of infrastructure elements — e.g., performance and capacity — that can be automatically controlled increases, the value of composable infrastructure increases.
  • Architectural scale: Every IT infrastructure option seems to be scalable these days. For composable infrastructure, capacity and even performance scalability are just part of the story. Necessary data services and data management must scale as well. In addition, for the infrastructure to support IT automation, a time element is added to that scale. So when a request for scale is made, the infrastructure must react in a timely and predictable manner. For this, composable infrastructure requires high-performing components and latency reduction across data interconnects.

    Nonvolatile memory express (NVMe) plays a role here. While some view NVMe as just faster flash, the low-latency interconnect is critical to a scalable IT infrastructure effort. Data services add latency, and reducing the latency of the data path lets these data services extend to a broader infrastructure. Additionally, flexible scale isn’t just about adding resources; it’s also about freeing up resources that can be better used elsewhere.

The end goal is to deliver an infrastructure that can respond effectively to automation and reduce the number of manual tasks that must be handled by IT. Composable infrastructure isn’t the only way to achieve IT infrastructure automation, however. Software-defined storage and converged infrastructure can also help automate IT and go a long way toward eliminating the enemy of digital business, manual IT tasks.
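What that programmatic control looks like varies by vendor, but the shape is usually a request to a controller API that carves out compute, memory and storage for an application. The endpoint, payload and field names below are entirely hypothetical, sketched only to show the pattern rather than any vendor's actual API.

```python
import requests

COMPOSER_API = "https://composer.example.internal/api/v1"  # hypothetical controller
TOKEN = "REDACTED"

def request_resources(app: str, cpu_cores: int, memory_gib: int, storage_tib: int) -> str:
    """Ask the (hypothetical) composable-infrastructure controller to stand up
    resources for a new application and return the request ID."""
    resp = requests.post(
        f"{COMPOSER_API}/allocations",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "application": app,
            "cpu_cores": cpu_cores,
            "memory_gib": memory_gib,
            "storage_tib": storage_tib,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["request_id"]

print(request_resources("new-analytics-app", cpu_cores=16, memory_gib=64, storage_tib=2))
```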

And the more manual your IT processes are, the less competitive you’ll be as a digital business. As businesses seek to build an as-a-service infrastructure, composable infrastructure is another innovative step toward creating and automating an on-demand data center.

Time-series monitoring tools give high-resolution view of IT

DevOps shops use time-series monitoring tools to glean a nuanced, historical view of IT infrastructure that improves troubleshooting, autoscaling and capacity forecasting.

Time-series monitoring tools are based on time-series databases, which are optimized for time-stamped data collected continuously or at fine-grained intervals. Because they store fine-grained data for longer than many traditional metrics-based monitoring tools, they can be used to compare long-term trends in DevOps monitoring data. They also bring together data from more diverse sources than the IT infrastructure alone, linking developer and business activity with the behavior of the infrastructure.

Time-series monitoring tools include the open source project Prometheus, which is popular among Kubernetes shops, as well as commercial offerings from InfluxData and Wavefront, the latter of which VMware acquired last year.

DevOps monitoring with these tools gives enterprise IT shops such as Houghton Mifflin Harcourt, an educational book and software publisher based in Boston, a unified view of both business and IT infrastructure metrics. It does so over a longer period of time than the Datadog monitoring product the company used previously, which retains data for only up to 15 months in its Enterprise edition.

“Our business is very cyclical as an education company,” said Robert Allen, director of engineering at Houghton Mifflin Harcourt. “Right before the beginning of the school year, our usage goes way up, and we needed to be able to observe that [trend] year over year, going back several years.”

Allen’s engineering team got its first taste of InfluxData as a long-term storage back end for Prometheus, which at the time was limited in how much data could be held in its storage subsystem — Prometheus has since overhauled its storage system in version 2.0. Eventually, Allen and his team decided to work with InfluxData directly.

Houghton Mifflin Harcourt uses InfluxData to monitor traditional IT metrics, such as network performance, disk space, and CPU and memory utilization, in its Amazon Web Services (AWS) infrastructure, as well as developer activity in GitHub, such as pull requests and number of users. The company developed its own load-balancing system using Linkerd and Finagle; InfluxData also collects data on network latencies in that system and ties in with the Zipkin tracing tool to troubleshoot network performance issues.
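A minimal sketch of that kind of collection and year-over-year comparison with the InfluxDB 1.x Python client; the host, database and measurement names are illustrative, not Houghton Mifflin Harcourt's.

```python
from influxdb import InfluxDBClient  # InfluxDB 1.x client library

client = InfluxDBClient(host="influx.example.com", port=8086, database="telemetry")

# Write one time-stamped point (the timestamp defaults to "now" on the server).
client.write_points([{
    "measurement": "cpu_load",
    "tags": {"host": "web-01", "region": "us-east-1"},
    "fields": {"value": 0.64},
}])

# Weekly averages over the past year, grouped by host, for trend comparison.
result = client.query(
    "SELECT mean(value) FROM cpu_load WHERE time > now() - 52w GROUP BY time(1w), host"
)
for point in result.get_points():
    print(point["time"], point["mean"])
```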

Multiple years of highly granular infrastructure data empowers Allen’s team of just five people to support nearly 500 engineers who deliver applications to the company’s massive Apache Mesos data center infrastructure.

Time-series monitoring tools boost DevOps automation

Time-series data also allows DevOps teams to ask more nuanced questions about the infrastructure to inform troubleshooting decisions.

“It allows you to apply higher-level statistics to your data,” said Louis McCormack, lead DevOps engineer for Space Ape Games, a mobile video game developer based in London and an early adopter of Wavefront’s time-series monitoring tool. “Instead of something just being OK or not OK, you can ask, ‘How bad is it?’ Or, ‘Will it become very problematic before I need to wake up tomorrow morning?'”

Instead of something just being OK or not OK, you can ask, ‘How bad is it?’ Or, ‘Will it become very problematic before I need to wake up tomorrow morning?’
Louis McCormack, lead DevOps engineer, Space Ape Games

Space Ape has a smaller infrastructure to manage than Houghton Mifflin Harcourt, at about 600 AWS instances compared to about 64,000. But Space Ape also has highly seasonal business cycles, and time-series monitoring with Wavefront helps it not only collect granular historical data, but also scale the IT infrastructure in response to seasonal fluctuations in demand.

“A service in AWS consumes Wavefront data to make the decision about when to scale DynamoDB tables,” said Nic Walker, head of technical operations for Space Ape Games. “Auto scaling DynamoDB is something Amazon has only just released as a feature, and our version is still faster.”

The company’s apps use the Wavefront API to trigger the DynamoDB autoscaling, which makes the tool much more powerful, but also requires DevOps engineers to learn how to interact with the Wavefront query language, which isn’t always intuitive, Walker said. In Wavefront’s case, this learning curve is balanced by the software’s various prebuilt data visualization dashboards. This was the primary reason Walker’s team chose Wavefront over open source alternatives, such as Prometheus. Wavefront is also offered as a service, which takes the burden of data management out of Space Ape’s hands.
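A hedged sketch of the scaling half of that loop with boto3 follows; the metric lookup is a stand-in for the Wavefront query (whose details the article doesn't cover), and the table name and headroom factor are illustrative.

```python
import boto3

dynamodb = boto3.client("dynamodb")

def get_request_rate(table_name: str) -> float:
    # Stand-in for the Wavefront time-series query that drives the decision.
    return 1200.0  # requests per second, hypothetical value

def scale_table(table_name: str = "player-state") -> None:
    """Raise provisioned throughput ahead of the observed request rate."""
    rate = get_request_rate(table_name)
    target = max(5, int(rate * 1.5))  # illustrative 50% headroom
    dynamodb.update_table(
        TableName=table_name,
        ProvisionedThroughput={
            "ReadCapacityUnits": target,
            "WriteCapacityUnits": target,
        },
    )

if __name__ == "__main__":
    scale_table()
```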

Houghton Mifflin Harcourt chose a different set of tradeoffs with InfluxData, which uses a SQL-like query language that was easy for developers to learn, but the DevOps team must work with outside consultants to build custom dashboards. Because that work isn’t finished, InfluxData has yet to completely replace Datadog at Houghton Mifflin Harcourt, though Allen said he hopes to make the switch this quarter.

Time-series monitoring tools scale up beyond the capacity of traditional metrics monitoring tools, but both companies said there’s room to improve performance when crunching large volumes of data in response to broad queries. Houghton Mifflin Harcourt, for example, queries millions of data points at the end of each month to calculate Amazon billing trends for each of its Elastic Compute Cloud instances.

“It still takes a little bit of a hit sometimes when you look at those tags, but [InfluxEnterprise version] 1.3 was a real improvement,” Allen said.

Allen added that he hopes to use InfluxData’s time-series monitoring tool to inform decisions about multi-cloud workload placement based on cost. Space Ape Games, meanwhile, will explore AI and machine learning capabilities available for Wavefront, though the jury’s still out for Walker and McCormack whether AIOps will be worth the time it takes to implement. In particular, Walker said he’s concerned about false positives from AI analysis against time-series data.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at [email protected] or follow @PariseauTT on Twitter.

Docker with Kubernetes forges new container standard

The comingling of the two main competitors in container orchestration should bring IT shops greater stability and consistency in container infrastructures over time.

Docker with Kubernetes will appear in the next versions of Docker Enterprise Edition and Community Edition, expected to be generally available in 1Q18, according to the company. This comes on the heels of support for Kubernetes in recent products from Mesosphere, Rancher and Cloud Foundry — an industry embrace that affirms Kubernetes as the standard for container orchestration, and expands choices available to enterprise IT organizations as containers go into production.

Kubernetes and Docker rose to popularity simultaneously and were always closely associated. However, they emerged independently, and changes to one would sometimes break the other. With Docker and Kubernetes formally aligned under the Cloud Native Computing Foundation, developers can more closely coordinate alterations and therefore likely eliminate such hitches.

“It has not always been a given that Kubernetes was going to work with Docker,” said Gary Chen, an analyst at IDC. “People who want Docker from the source and Kubernetes along with that can now get that integration from a single vendor.”

Docker’s embrace of Kubernetes is a declaration of victory for Kubernetes, but it’s also a big change for the IT industry: a standard for orchestration now sits alongside the standard OCI runtime and image format.

Gary Chen, analyst, IDC

“It’s not something we ever had with servers or virtual machines,” Chen said. “This brings industry standardization to a whole new level.”

Container management vendors will seek new differentiations outside of raw orchestration, and enterprise IT users can evaluate new tools and consider new possibilities for multicloud interoperability.

Docker brings support for modernizing traditional enterprise apps, while Kubernetes is still favored for newer, stateless distributed applications. Their convergence will strengthen orchestration that spans enterprise IT operating systems and different types of cloud infrastructure, said E.T. Cook, chief advocate at Dallas-based consulting firm Etc.io.

“Unified tooling that can orchestrate across all of the different platforms offers enterprises a massive advantage,” he said.

Being able to bridge private data centers, public clouds, and Docker Swarm and Kubernetes orchestrators will make deploying the software that runs on those things easier.
Peter Nealon, solutions architect, Runkeeper

Container portability will also take on new flexibility and depth with increased compatibility between Docker and Kubernetes, said Peter Nealon, a solutions architect at Runkeeper, a mobile running app owned by ASICS, the Japanese athletic equipment retailer.

“Being able to bridge private data centers, public clouds, and Docker Swarm and Kubernetes orchestrators will make deploying the software that runs on those things easier,” Nealon said. “It will also be easier to provide the security and performance that apps need.”

The rich get richer with Docker and Kubernetes

Docker remains committed to its Swarm container orchestrator. But with heavy momentum on the Kubernetes side, some IT pros are concerned whether the market will sustain a healthy, long-term competition.

“I’m sure some folks will not like to see Kubernetes get another win, wanting choices,” said Michael Bishop, CTO at Alpha Vertex, a New York-based fintech startup, which uses Kubernetes. “But I’ll be happy to see even more developers [from Docker] working away at making it even more powerful.”

Meanwhile, enterprise IT consultants said their clients at large companies rarely mention Swarm.

“I personally have never seen anyone run Swarm in a production cluster,” said Enrico Bartz, system engineer at SVA in Hamburg, Germany.

Some SVA clients will consider Docker Enterprise Edition support for Kubernetes as it may offer a more streamlined and familiar developer interface and be easier to install and configure than Kubernetes alone, Bartz said. But Docker still faces stiff competition from other products, such as Red Hat OpenShift, which already makes Kubernetes easier to use for enterprise IT.

Some industry watchers also wonder if Docker with Kubernetes might be too late to preserve Docker Inc., and Swarm with it, in the long run.

“Two years ago or even a year ago there was more differentiation for Docker in terms of the security and networking features it could offer beyond Kubernetes,” said Chris Riley, director of solutions architecture at cPrime Inc., a consulting firm in Foster City, Calif., that focuses on Agile software development. “But the recent releases of Kubernetes have made up those gaps, and it’s closing the gaps in stateful application management.”

Amazon also waits in the wings with its own forthcoming Kubernetes as a service alternative, which users hope to see unveiled at the AWS Re:Invent conference next month. Some enterprise shops won’t evaluate Docker with Kubernetes until they see what Amazon can offer as a managed public cloud service.

“If there’s no AWS announcement that hugely expands the feature set around [the EC2 Container Service], it will open up a whole set of discussions around whether we deploy Kubernetes or Docker Swarm in the cloud, or consider other cloud providers,” Runkeeper’s Nealon said. “Our discussion has been focused on what container orchestration platform we will consume as a cloud service.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at [email protected] or follow @PariseauTT on Twitter.