HPE’s HCI system takes aim at space-constrained data centers

The latest addition to HPE’s HCI portfolio aims to give smaller IT shops a little less bang for a lot less buck.

The HPE SimpliVity 2600 configures up to four compute modules in a 2U space, and features “always-on” deduplication and compression. Those capabilities often appeal to businesses with space-constrained IT environments or with no dedicated data center at all, particularly ones that deploy VDI applications on remote desktops for complex workloads and require only moderate storage.

Examples include branch offices, such as supermarkets or retailers with no dedicated data center room, which might keep a server in a manager's office, said Thomas Goepel, HPE's director of product management for hyper-converged systems.

Higher-end HPE HCI products, such as the SimpliVity 380, emphasize operational efficiencies, but their compute power, and their cost, may exceed what many remote branch offices need, so the 2600's price-performance ratio may be more attractive, said Dana Gardner, principal analyst at Interarbor Solutions LLC in Gilford, N.H.

“Remote branch offices tend to look at lower-cost approaches over efficiencies,” he said. “Higher-end [HPE HCI systems] and in some cases the lower-end boxes, may not be the right fit for what we think of as a ROBO server.”

On the other hand, many smaller IT shops lack internal technical talent and may struggle to implement more complex VDI workloads.

“[VDI] requires a lot of operational oversight to get it up and rolling and tuned in with the rest of the environment,” Gardner said.

The market for higher compute density HCI to run complex workloads that involve VDI applications represents a rich opportunity, concurred Steve McDowell, a senior analyst at Moor Insights & Strategy. “It’s a smart play for HPE, and should compete well against Nutanix,” he said.

The HPE SimpliVity 2600, based on the company’s Apollo 2000 platform, also overlaps with HPE’s Edgeline systems unveiled last month, although there are distinct differences in the software stack and target applications, McDowell said. The 2600 is more of an appliance with a fixed feature set contained in a consolidated management framework.

The Edgeline offering, meanwhile, targets infrastructure consolidation out on the edge with a more even balance of compute, storage and networking capabilities.

Higher-end HPE HCI offerings have gained traction among corporate users. Revenues for these systems surged 280% in this year’s first quarter compared with a year ago, versus 76% growth for the overall HCI market, according to IDC, the market research firm based in Framingham, Mass.

“There has been a tremendous appetite for HCI products in general because they come packaged and ready to install,” Gardner said. “HPE is hoping to take advantage of this with iterations that allow them to expand their addressable market, in this case downward.”

The 2600 will be available by mid-July, according to HPE.

Database DevOps tools bring stateful apps up to modern speed

DevOps shops can say goodbye to a major roadblock in rapid application development.

At this time in 2017, cultural backlash from database administrators (DBAs) and a lack of mature database DevOps tools made stateful applications a hindrance to the rapid, iterative changes made by Agile enterprise developers. But now, enterprises have found both application and infrastructure tools that align databases with fast-moving DevOps pipelines.

“When the marketing department would make strategy changes, our databases couldn’t keep up,” said Matthew Haigh, data architect for U.K.-based babywear retailer Mamas & Papas. “If we got a marketing initiative Thursday evening, on Monday morning, they’d want to know the results. And we struggled to make changes that fast.”

Haigh’s team, which manages a Microsoft Power BI data warehouse for the company, has realigned itself around database DevOps tools from Redgate since 2017. The DBA team now refers to itself as the “DataOps” team, and it uses Microsoft’s Visual Studio Team Services to make as many as 15 to 20 daily changes to the retailer’s data warehouse during business hours.

Redgate’s SQL Monitor was the catalyst to improve collaboration between the company’s developers and DBAs. Haigh gave developers access to the monitoring tool interface and alerts through a Slack channel, so they could immediately see the effect of application changes on the data warehouse. They also use Redgate’s SQL Clone tool to spin up test databases themselves, as needed.
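
To make the alert relay concrete, here is a minimal, generic sketch of pushing a database monitoring alert into a Slack channel through an incoming webhook, so developers see the effect of a change right away. The webhook URL and alert fields are placeholders, and this is an illustration of the pattern, not SQL Monitor's own built-in integration.

```python
# Generic illustration of relaying a database alert into Slack via an
# incoming webhook. The webhook URL and alert fields are placeholders;
# this is not Redgate SQL Monitor's own integration.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def post_alert_to_slack(server: str, metric: str, value: float, threshold: float) -> None:
    message = (
        f":warning: {server}: {metric} at {value:.1f} "
        f"(threshold {threshold:.1f}) after the latest deployment"
    )
    # Slack incoming webhooks accept a simple JSON payload with a "text" field.
    requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10).raise_for_status()

# Example: post_alert_to_slack("dw-sql01", "avg query duration (ms)", 950.0, 500.0)
```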

“There’s a major question when you’re starting DevOps: Do you try to change the culture first, or put tools in and hope change happens?” Haigh said. “In our case, the tools have prompted cultural change — not just for our DataOps team and dev teams, but also IT support.”

Database DevOps tools sync schemas

Redgate’s SQL Toolbelt suite is one of several tools enterprises can use to make rapid changes to database schemas while preserving data integrity. Redgate focuses on Microsoft SQL Server, while other vendors, such as Datical and DBmaestro, support a variety of databases, such as Oracle and MySQL. All of these tools track changes to database schemas from application updates and apply those changes more rapidly than traditional database management tools. They also integrate with CI/CD pipelines for automated database updates.
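
The core mechanism these tools automate is a tracked changelog of ordered schema migrations. The sketch below shows that pattern in a stripped-down form; it is illustrative only, not Redgate's or Datical's implementation, and the file layout, table name and SQLite stand-in engine are assumptions.

```python
# Minimal sketch of the changelog pattern database DevOps tools automate:
# ordered schema migrations, tracked in the database itself so each
# environment only applies what it hasn't seen yet. Illustrative only.
import pathlib
import sqlite3  # stand-in engine; a real pipeline would target SQL Server, Oracle, etc.

MIGRATIONS_DIR = pathlib.Path("migrations")  # e.g. 001_add_orders.sql, 002_add_index.sql

def apply_pending_migrations(conn: sqlite3.Connection) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_changelog (filename TEXT PRIMARY KEY, applied_at TEXT)"
    )
    applied = {row[0] for row in conn.execute("SELECT filename FROM schema_changelog")}
    for path in sorted(MIGRATIONS_DIR.glob("*.sql")):
        if path.name in applied:
            continue  # already deployed to this environment
        conn.executescript(path.read_text())  # apply the schema change
        conn.execute(
            "INSERT INTO schema_changelog VALUES (?, datetime('now'))", (path.name,)
        )
        conn.commit()
        print(f"applied {path.name}")

if __name__ == "__main__":
    apply_pending_migrations(sqlite3.connect("app.db"))
```

Run from a CI/CD stage, a step like this makes schema changes repeatable across dev, test and production rather than hand-applied by a DBA.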

Radial Inc., an e-commerce company based in King of Prussia, Pa., and spun out of eBay in 2016, took a little more than two years to establish database DevOps processes with tools from Datical. In that time, the company has trimmed its app development processes that involve Oracle, SQL Server, MySQL and Sybase databases from days down to two or three hours.

“Our legacy apps, at one point, were deployed every two to three months, but we now have 30 to 40 microservices deployed in two-week sprints,” said Devon Siegfried, database architect for Radial. “Each of our microservices has a single purpose and its own data store with its own schema.”

That means Radial, a 7,000-employee multinational company, manages about 300 Oracle databases and about 130 instances of SQL Server. The largest database change log it’s processed through Datical’s tool involved more than 1,300 discrete changes.

“We liked Datical’s support for managing at the discrete-change level and forecasting the impact of changes before deployment,” Siegfried said. “It also has a good rules engine to enforce security and compliance standards.”

Datical’s tool is integrated with the company’s GoCD DevOps pipeline, but DBAs still manually kick off changes to databases in production. Siegfried said he hopes that will change in the next two months, when an update to Datical will allow it to detect finer-grained attributes of objects from legacy databases.

ING Bank Turkey looks to Datical competitor DBmaestro to link .NET developers who check in changes through Microsoft’s Team Foundation Server 2018 to its 20 TB Oracle core banking database. Before its DBmaestro rollout in November 2017, those developers manually tracked schema and script changes through the development and test stages and ensured the right ones deployed to production. DBmaestro now handles those tasks automatically.

“Developers no longer have to create deployment scripts or understand changes preproduction, which was not a safe practice and required more effort,” said Onder Altinkurt, IT product manager for ING Bank Turkey, based in Istanbul. “Now, we’re able to make database changes roughly weekly, with 60 developers in 15 teams and 70 application development pipelines.”

Database DevOps tools abstract away infrastructure headaches

Keeping database schemas and deployment scripts consistent through rapid application changes is an important part of DevOps practices with stateful applications, but there's another side to that coin: infrastructure provisioning.

Stateful application management through containers and container orchestration tools such as Kubernetes is still in its early stages, but persistent container storage tools from Portworx Inc. and data management tools from Delphix have begun to help ease this burden, as well.

GE Digital put Portworx container storage into production to support its Predix platform in 2017, and GE Ventures later invested in the company.

“Previously, we had a DevOps process outlined. But if it ended at making a call to GE IT for a VM and storage provisioning, you give up the progress you made in reducing time to market,” said Abhishek Shukla, managing director at GE Ventures, based in Menlo Park, Calif. “Our DevOps engineering team also didn’t have enough time to call people in IT and do the infrastructure testing — all that had to go on in parallel with application development.”

Portworx allows developers to describe storage requirements such as capacity in code, and then triggers the provisioning at the infrastructure layer through container orchestration tools, such as Mesosphere and Kubernetes. The developer doesn’t have to open a ticket, wait for a storage administrator or understand the physical infrastructure. Portworx can arbitrate and facilitate data management between multiple container clusters, or between VMs and containers. As applications change and state is torn down, there is no clutter to clean up afterward, and Portworx can create snapshots and clone databases quickly for realistic test data sets.
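
As a rough sketch of that declarative flow, the snippet below uses the official `kubernetes` Python client to request a persistent volume claim, which the orchestrator then provisions against a container-storage backend. The storage class name "portworx-sc" and the sizes are assumptions a cluster admin would define, not Portworx defaults.

```python
# Rough sketch of declarative storage: a developer describes capacity in code
# and the orchestrator provisions it, no ticket required. Uses the official
# `kubernetes` Python client; "portworx-sc" is a placeholder storage class.
from kubernetes import client, config

def request_database_volume(namespace: str = "default") -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="orders-db-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="portworx-sc",          # assumed Portworx-backed class
            resources=client.V1ResourceRequirements(
                requests={"storage": "20Gi"}            # capacity described in code
            ),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace, pvc)

if __name__ == "__main__":
    request_database_volume()
```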

Portworx doesn’t necessarily offer the same high-octane performance for databases as bare-metal servers, said a Portworx partner, Kris Watson, co-founder of ComputeStacks, which packages Portworx storage into its Docker-based container orchestration software for service-provider clients.

“You may take a minimal performance hit with software abstraction layers, but rapid iteration and reproducible copies of data are much more important these days than bare-metal performance,” Watson said.

Adding software-based orchestration to database testing processes can drastically speed up app development, as Choice Hotels International discovered when it rolled out Delphix's test data management software a little more than two years ago.

“Before that, we had never refreshed our test databases. And in the first year with Delphix, we refreshed them four or five times,” said Nick Suwyn, IT leader at the company, based in Rockville, Md. “That has cut down data-related errors in code and allowed for faster testing, because we can spin up a test environment in minutes versus taking all weekend.”

The company hasn’t introduced Delphix to all of its development teams, as it prioritizes a project to rewrite the company’s core reservation system on AWS. But most of the company’s developers have access to self-service test databases whenever they are needed, and Suwyn’s team will link Delphix test databases with the company’s Jenkins CI/CD pipelines, so developers can spin up test databases automatically through the Jenkins interface.

IT infrastructure automation boosts digital initiatives

With businesses becoming more digitally dependent and IT responsibilities outpacing budgets, IT shops are being forced to evolve. This transformation requires not just a change in infrastructure technology, but in the organization of IT personnel as well — an organizational makeover that often determines the success of digital business.

As firms drive new digital initiatives, such as developing digital products and services, using analytics and investing in application development, IT services have started to have a more direct effect on revenue opportunities. As a result, IT must become more responsive in order to speed up the delivery of those new services.

To improve responsiveness, IT shops often shift personnel to work directly with line-of-business teams to understand their demands better. Companies can add budget and headcount to meet the increased IT demands of each new initiative, and add still more budget for the infrastructure required to support it. Or they can find a new way to get the same results.

The new way

Ultimately, it's the desire to find innovative ways to dramatically reduce the cost of routine IT maintenance and management that drives demand for infrastructure transformation. The end result is an as-a-service infrastructure that frees existing personnel to cover added responsibilities and speed the delivery of IT services. Emerging technologies, such as flash storage, deliver transformational benefits in performance, efficiency and TCO that can help. Technologies like flash are only part of the story, however. Another possibility that's just as beneficial is IT infrastructure automation.

Manual tasks inhibit digital business. Every hour a highly trained IT resource spends on a manual — and likely routine — task is an hour that could have been spent helping to drive a potential revenue-generating digital initiative. As businesses increase their IT infrastructure automation efforts, an emerging concept called composable infrastructure has gained interest.

With composable infrastructure, IT infrastructure is virtualized to dynamically and efficiently allocate resources to individual applications. Composable infrastructure also provides the necessary analytics to fine-tune infrastructure. Ideally, software ensures the right resources are available at the right time, new resources can be added on demand, and capacity or performance can be contracted when demand changes. Cisco, Hewlett Packard Enterprise, Kaminario and other vendors promote the composable infrastructure concept.

There are several factors to consider as composable infrastructure gains traction:

  • The intelligence to drive IT infrastructure automation: Arguably the first step in any effort to automate IT is knowing what to automate, along with when and how to do it efficiently. How much performance and capacity does each application need? How much can the infrastructure provide? How will these demands change over time? Providing this information requires the right level of intelligence and predictive analytics to understand the nature of each application’s demand. Done right, this results in more efficient infrastructure design and a reduction in capital investment. An even more valuable likely benefit is in personnel resource savings, as this intelligence enables automatic tuning of the infrastructure.
  • Granularity of control: Intelligence is important, but the ability to act on that intelligence offers the most tangible benefits. Composable infrastructure products typically provide controls, such as APIs, to enable programmatic management; in some cases, this lets an application request resources automatically when it identifies increasing demand. The more likely near-term scenario is that these controls will be used to automate planned manual tasks, such as standing up infrastructure for a new application or expanding a virtual machine environment (see the sketch after this list). As IT infrastructure automation efforts expand and more infrastructure elements, such as performance and capacity, can be controlled automatically, the value of composable infrastructure grows.
  • Architectural scale: Every IT infrastructure option seems to be scalable these days. For composable infrastructure, capacity and even performance scalability are just part of the story. Necessary data services and data management must scale as well. In addition, for the infrastructure to support IT automation, a time element is added to that scale. So when a request for scale is made, the infrastructure must react in a timely and predictable manner. For this, composable infrastructure requires high-performing components and latency reduction across data interconnects.

    Nonvolatile memory express (NVMe) plays a role here. While some view NVMe as just faster flash, the low-latency interconnect is critical to a scalable IT infrastructure effort. Data services add latency, and reducing the latency of the data path lets these data services extend to a broader infrastructure. Additionally, flexible scale isn’t just about adding resources; it’s also about freeing up resources that can be better used elsewhere.
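
To illustrate the "granularity of control" point above, here is a purely hypothetical sketch of programmatic composition. Every endpoint, field and credential below is invented for illustration; it is not Cisco's, HPE's or Kaminario's actual API. The point is the shape of the workflow: compose resources for a new workload in code, then release them back to the pool when demand drops.

```python
# Hypothetical composable-infrastructure controller API, shown only to
# illustrate programmatic composition and release of resources.
import requests

COMPOSER_API = "https://composer.example.internal/api/v1"   # hypothetical controller
HEADERS = {"Authorization": "Bearer <token>"}                # placeholder credential

def compose_node_for_app(app_name: str, cpu_cores: int, memory_gib: int, storage_gib: int) -> str:
    """Ask the (hypothetical) composer to assemble a logical server for an app."""
    spec = {
        "name": f"{app_name}-node",
        "compute": {"cores": cpu_cores, "memoryGiB": memory_gib},
        "storage": {"capacityGiB": storage_gib, "tier": "nvme"},
    }
    resp = requests.post(f"{COMPOSER_API}/nodes", json=spec, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["nodeId"]

def release_node(node_id: str) -> None:
    """Return the composed resources to the shared pool when demand contracts."""
    requests.delete(f"{COMPOSER_API}/nodes/{node_id}", headers=HEADERS, timeout=30).raise_for_status()
```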

The end goal is to deliver an infrastructure that can respond effectively to automation and reduce the number of manual tasks that must be handled by IT. Composable infrastructure isn’t the only way to achieve IT infrastructure automation, however. Software-defined storage and converged infrastructure can also help automate IT and go a long way toward eliminating the enemy of digital business, manual IT tasks.

And the more manual your IT processes are, the less competitive you'll be as a digital business. As businesses seek to build an as-a-service infrastructure, composable infrastructure is another innovative step toward creating and automating an on-demand data center.

Time-series monitoring tools give high-resolution view of IT

DevOps shops use time-series monitoring tools to glean a nuanced, historical view of IT infrastructure that improves troubleshooting, autoscaling and capacity forecasting.

Time-series monitoring tools are based on time-series databases, which are optimized for time-stamped data collected continuously or at fine-grained intervals. Because they store fine-grained data for longer than many traditional metrics-based monitoring tools, they can be used to compare long-term trends in DevOps monitoring data. They can also bring together data from sources beyond the IT infrastructure alone, linking developer and business activity with the behavior of the infrastructure.

Time-series monitoring tools include the open source project Prometheus, which is popular among Kubernetes shops, as well as commercial offerings from InfluxData and Wavefront, the latter of which VMware acquired last year.

DevOps monitoring with these tools gives enterprise IT shops such as Houghton Mifflin Harcourt, an educational book and software publisher based in Boston, a unified view of both business and IT infrastructure metrics. It does so over a longer period of time than the Datadog monitoring product the company used previously, which retains data for only up to 15 months in its Enterprise edition.

“Our business is very cyclical as an education company,” said Robert Allen, director of engineering at Houghton Mifflin Harcourt. “Right before the beginning of the school year, our usage goes way up, and we needed to be able to observe that [trend] year over year, going back several years.”

Allen’s engineering team got its first taste of InfluxData as a long-term storage back end for Prometheus, which at the time was limited in how much data could be held in its storage subsystem — Prometheus has since overhauled its storage system in version 2.0. Eventually, Allen and his team decided to work with InfluxData directly.

Houghton Mifflin Harcourt uses InfluxData to monitor traditional IT metrics, such as network performance, disk space, and CPU and memory utilization, in its Amazon Web Services (AWS) infrastructure, as well as developer activity in GitHub, such as pull requests and number of users. The company developed its own load-balancing system using Linkerd and Finagle; InfluxData also collects data on network latencies in that system and ties in with Zipkin's tracing tool to troubleshoot network performance issues.

Multiple years of highly granular infrastructure data empowers Allen’s team of just five people to support nearly 500 engineers who deliver applications to the company’s massive Apache Mesos data center infrastructure.

Time-series monitoring tools boost DevOps automation

Time-series data also allows DevOps teams to ask more nuanced questions about the infrastructure to inform troubleshooting decisions.

“It allows you to apply higher-level statistics to your data,” said Louis McCormack, lead DevOps engineer for Space Ape Games, a mobile video game developer based in London and an early adopter of Wavefront’s time-series monitoring tool. “Instead of something just being OK or not OK, you can ask, ‘How bad is it?’ Or, ‘Will it become very problematic before I need to wake up tomorrow morning?'”

Space Ape manages a smaller infrastructure than Houghton Mifflin Harcourt, at about 600 AWS instances compared to about 64,000. But Space Ape also has highly seasonal business cycles, and time-series monitoring with Wavefront helps it not only collect granular historical data, but also scale the IT infrastructure in response to seasonal fluctuations in demand.

“A service in AWS consumes Wavefront data to make the decision about when to scale DynamoDB tables,” said Nic Walker, head of technical operations for Space Ape Games. “Auto scaling DynamoDB is something Amazon has only just released as a feature, and our version is still faster.”

The company’s apps use the Wavefront API to trigger the DynamoDB autoscaling, which makes the tool much more powerful, but also requires DevOps engineers to learn how to interact with the Wavefront query language, which isn’t always intuitive, Walker said. In Wavefront’s case, this learning curve is balanced by the software’s various prebuilt data visualization dashboards. This was the primary reason Walker’s team chose Wavefront over open source alternatives, such as Prometheus. Wavefront is also offered as a service, which takes the burden of data management out of Space Ape’s hands.
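
A hedged sketch of that metric-driven scaling pattern appears below. The metric lookup is left as a placeholder (the Wavefront query details are omitted), while the table update uses the standard boto3 DynamoDB UpdateTable call; the table name, thresholds and scaling factor are hypothetical, not Space Ape's actual service.

```python
# Sketch of metric-driven DynamoDB scaling: read a time-series metric, then
# adjust provisioned throughput. The metric lookup is a placeholder; the
# boto3 call is the standard DynamoDB UpdateTable API.
import boto3

dynamodb = boto3.client("dynamodb")

def consumed_read_ratio(table_name: str) -> float:
    """Placeholder: in a real service this would query the time-series store
    (e.g., via the Wavefront API) for consumed vs. provisioned capacity."""
    raise NotImplementedError

def scale_reads_if_needed(table_name: str, provisioned_reads: int, writes: int) -> None:
    ratio = consumed_read_ratio(table_name)
    if ratio > 0.8:                               # nearing provisioned capacity
        new_reads = int(provisioned_reads * 1.5)  # scale up ahead of demand
        dynamodb.update_table(
            TableName=table_name,
            ProvisionedThroughput={
                "ReadCapacityUnits": new_reads,
                "WriteCapacityUnits": writes,
            },
        )
```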

Houghton Mifflin Harcourt chose a different set of tradeoffs with InfluxData, which uses a SQL-like query language that was easy for developers to learn, but the DevOps team must work with outside consultants to build custom dashboards. Because that work isn’t finished, InfluxData has yet to completely replace Datadog at Houghton Mifflin Harcourt, though Allen said he hopes to make the switch this quarter.
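
As a small example of those SQL-like queries, the sketch below uses the InfluxDB 1.x Python client to pull a month of hourly CPU averages grouped by instance tag, the kind of long-range, tag-grouped question a traditional metrics tool struggles to answer. The database, measurement and tag names are hypothetical, not Houghton Mifflin Harcourt's actual schema.

```python
# Example InfluxQL query via the InfluxDB 1.x Python client. Database,
# measurement and tag names are placeholders.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="influx.example.internal", port=8086, database="infrastructure")

# Hourly mean CPU per EC2 instance over the last 30 days.
result = client.query(
    "SELECT MEAN(usage_percent) FROM cpu "
    "WHERE time > now() - 30d "
    "GROUP BY time(1h), instance_id"
)

points = list(result.get_points())
print(f"retrieved {len(points)} hourly averages")
```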

Time-series monitoring tools scale up beyond the capacity of traditional metrics monitoring tools, but both companies said there’s room to improve performance when crunching large volumes of data in response to broad queries. Houghton Mifflin Harcourt, for example, queries millions of data points at the end of each month to calculate Amazon billing trends for each of its Elastic Compute Cloud instances.

“It still takes a little bit of a hit sometimes when you look at those tags, but [InfluxEnterprise version] 1.3 was a real improvement,” Allen said.

Allen added that he hopes to use InfluxData's time-series monitoring tool to inform decisions about multi-cloud workload placement based on cost. Space Ape Games, meanwhile, will explore AI and machine learning capabilities available for Wavefront, though the jury's still out for Walker and McCormack on whether AIOps will be worth the time it takes to implement. In particular, Walker said he's concerned about false positives from AI analysis of time-series data.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Docker with Kubernetes forges new container standard

The commingling of the two main competitors in container orchestration should bring IT shops greater stability and consistency in container infrastructures over time.

Docker with Kubernetes will appear in the next versions of Docker Enterprise Edition and Community Edition, expected to be generally available in 1Q18, according to the company. This comes on the heels of support for Kubernetes in recent products from Mesosphere, Rancher and Cloud Foundry — an industry embrace that affirms Kubernetes as the standard for container orchestration, and expands choices available to enterprise IT organizations as containers go into production.

Kubernetes and Docker rose to popularity simultaneously and were always closely associated. However, they emerged independently, and changes to one would sometimes break the other. With Docker and Kubernetes formally aligned under the Cloud Native Computing Foundation, developers can more closely coordinate alterations and therefore likely eliminate such hitches.

“It has not always been a given that Kubernetes was going to work with Docker,” said Gary Chen, an analyst at IDC. “People who want Docker from the source and Kubernetes along with that can now get that integration from a single vendor.”

Docker with Kubernetes is a declaration of victory for Kubernetes, but it's also a big change for the IT industry, which now has a standard for orchestration in addition to the OCI standard for container runtime and image format.

“It’s not something we ever had with servers or virtual machines,” Chen said. “This brings industry standardization to a whole new level.”

Container management vendors will seek new differentiations outside of raw orchestration, and enterprise IT users can evaluate new tools and consider new possibilities for multicloud interoperability.

Docker brings support for modernizing traditional enterprise apps, while Kubernetes is still favored for newer, stateless distributed applications. Their convergence will strengthen orchestration that spans enterprise IT operating systems and different types of cloud infrastructure, said E.T. Cook, chief advocate at Dallas-based consulting firm Etc.io.

“Unified tooling that can orchestrate across all of the different platforms offers enterprises a massive advantage,” he said.

Container portability will also take on new flexibility and depth with increased compatibility between Docker and Kubernetes, said Peter Nealon, a solutions architect at Runkeeper, a mobile running app owned by ASICS, the Japanese athletic equipment retailer.

“Being able to bridge private data centers, public clouds, and Docker Swarm and Kubernetes orchestrators will make deploying the software that runs on those things easier,” Nealon said. “It will also be easier to provide the security and performance that apps need.”

The rich get richer with Docker and Kubernetes

Docker remains committed to its Swarm container orchestrator. But with heavy momentum on the Kubernetes side, some IT pros question whether the market will sustain healthy, long-term competition.

“I’m sure some folks will not like to see Kubernetes get another win, wanting choices,” said Michael Bishop, CTO at Alpha Vertex, a New York-based fintech startup, which uses Kubernetes. “But I’ll be happy to see even more developers [from Docker] working away at making it even more powerful.”

Meanwhile, enterprise IT consultants said their clients at large companies rarely mention Swarm.

“I personally have never seen anyone run Swarm in a production cluster,” said Enrico Bartz, system engineer at SVA in Hamburg, Germany.

Some SVA clients will consider Docker Enterprise Edition support for Kubernetes as it may offer a more streamlined and familiar developer interface and be easier to install and configure than Kubernetes alone, Bartz said. But Docker still faces stiff competition from other products, such as Red Hat OpenShift, which already makes Kubernetes easier to use for enterprise IT.

Some industry watchers also wonder if Docker with Kubernetes might be too late to preserve Docker Inc., and Swarm with it, in the long run.

“Two years ago or even a year ago there was more differentiation for Docker in terms of the security and networking features it could offer beyond Kubernetes,” said Chris Riley, director of solutions architecture at cPrime Inc., a consulting firm in Foster City, Calif., that focuses on Agile software development. “But the recent releases of Kubernetes have made up those gaps, and it’s closing the gaps in stateful application management.”

Amazon also waits in the wings with its own forthcoming Kubernetes-as-a-service alternative, which users hope to see unveiled at the AWS re:Invent conference next month. Some enterprise shops won't evaluate Docker with Kubernetes until they see what Amazon can offer as a managed public cloud service.

“If there’s no AWS announcement that hugely expands the feature set around [the EC2 Container Service], it will open up a whole set of discussions around whether we deploy Kubernetes or Docker Swarm in the cloud, or consider other cloud providers,” Runkeeper’s Nealon said. “Our discussion has been focused on what container orchestration platform we will consume as a cloud service.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Box Skills, machine learning technology pique IT interest

SAN FRANCISCO — Box shops will be able to help users gain more intelligent insight into their content with new machine learning technology in the content management tool.

Box Skills, introduced here at the company’s annual BoxWorks conference, makes it easier to search for visual and audio content and view information about it. Box Feed uses machine learning to curate content for specific users. Plus, new features in Box Relay aim to improve employee workflows. These capabilities caught the interest of attendees at the show.

“It was kind of nice to see Box incorporating [AI] to start relaying things to certain people at the right time in the right place,” said Ryan Foltz, business systems engineer at Barnhardt Manufacturing Company in Charlotte, N.C.

How Box Skills works

Box Skills is a framework that serves as a layer of abstraction between the content organizations upload to Box and the machine learning services that analyze it. It focuses on three areas: Image Intelligence, Audio Intelligence and Video Intelligence.

With the Image Intelligence component, based on Google Cloud Platform technology, Box automatically tags aspects of an image, such as the subject, colors and logos, and extracts any text it contains. Users can click the tags to access other images with similar contents.

Video Intelligence uses Microsoft Cognitive Services to provide facial recognition to identify people in a video. It also can show users where repeated phrases come up, and extracts a transcript of the video that users can apply as closed captioning. Audio Intelligence functions similarly, without the visual aspect, and is based on IBM Watson technology.

Using the new Box Skills Kit for developers, organizations can also customize what information within a file the machine learning technology tracks. The tool can track tone of voice in a phone conversation, for example, or pull out specific words a company is interested in and show within the Box content when those words were said. Developers can also customize information extracted from documents such as invoices or contracts, and have Box pull out details such as dates, signatures, payment amounts and vendor names. That not only captures the data, but also lets users fill in that information automatically moving forward.
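
The write-back step might look roughly like the sketch below: once an ML service has extracted fields from an uploaded invoice, they are attached to the file as Box metadata through Box's REST API. The template key "invoiceData" and its fields are hypothetical, and this is an illustration of the concept, not the Box Skills Kit itself.

```python
# Illustrative write-back of ML-extracted invoice fields as Box metadata.
# Template key and field names are hypothetical.
import requests

BOX_API = "https://api.box.com/2.0"

def attach_invoice_metadata(file_id: str, access_token: str, extracted: dict) -> None:
    """Create a metadata instance on the file from ML-extracted values."""
    resp = requests.post(
        f"{BOX_API}/files/{file_id}/metadata/enterprise/invoiceData",  # hypothetical template
        headers={"Authorization": f"Bearer {access_token}"},
        json={
            "vendorName": extracted.get("vendor"),
            "invoiceDate": extracted.get("date"),
            "paymentAmount": extracted.get("amount"),
        },
        timeout=30,
    )
    resp.raise_for_status()

# Example usage with values an ML service might have pulled from the document:
# attach_invoice_metadata("1234567890", token, {"vendor": "Acme", "date": "2017-10-01", "amount": "1200.00"})
```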

Image Intelligence is currently in beta, and Video Intelligence and Audio Intelligence will come to beta in 2018, Box said.

Box Feed puts relevant information in front of users

Box Feed, powered by Box Graph machine learning technology, was also previewed at the conference and will be available next year. This feature can help users find the content most relevant to them. It shows users active content — files they have been working on or are mentioned in — as well as other relevant content, which appears in a feed based on who is working on the file and what the content is. If a user generally collaborates with another user who is working on a document, for example, it will likely show up in the relevant section. It also shows trending files, or ones that many users throughout the organization are accessing. 

As interesting as these new features are, some companies might need some time to apply them. Barnhardt Manufacturing Company, for instance, is an old organization, but its leaders are getting more and more interested in business data intelligence, said Pete Chantry, application systems manager at the company.

 “We’ve got to allow a little bit of time for them to get accustomed to the basic [enterprise content management] features of Box,” Chantry said.

Updates to Box Relay

Box Relay for workflow automation, announced last year and generally available next month, will get some enhancements as well.

First, the add-on will allow workflows to launch automatically; if a user uploads a prospective employee's resume, for example, the workflow associated with that kind of document will start on its own. Box also plans to release APIs so IT can integrate Relay with existing third-party applications and automated processes. In addition, users will be able to e-sign documents directly in Box. Finally, a new dashboard will let users manage multiple workflows at the same time by showing every active workflow and what step it is on.

“I like the way that all ties together,” said Will Sheppard, technical support specialist at The Enthusiast Network based in Los Angeles. “The whole workflow looks really nice.”

Other new features in Box Relay include the ability to invite other users to edit a document and assign them tasks with due dates within the document. There is also a new annotation tool that allows users to write a comment on a specific aspect of a document and tag other users to look at that exact area.

In addition, users no longer have to download previous versions of a document; they can preview them with a single click. Plus, when a user accesses a document, Box will highlight any changes that other users have made since the last time the user opened it, and show who made the edits. Finally, users can thread comments and mark them as resolved.

Like Box Skills, Relay presents some enticing features for IT, but those at Barnhardt Manufacturing Company are unsure of how to apply Relay immediately.

“I don’t know how often we’d use it, but if we had it, it’d certainly be a nice feature for us,” Foltz said.

Windows DevOps tools rehab legacy enterprise applications

As Microsoft shops struggle to modernize legacy apps that weren’t designed for distributed cloud environments, they must also rethink the infrastructure where these apps are deployed.

Most enterprises have at least one application that's so old, no one on the current IT team recalls how it was written or understands the finer intricacies of its management. Now, these companies must weigh the risks and costs of refactoring these apps for a cloud-first, continuously developed world.

“It’s always an investment to replace something which does exactly what you need, but it’s just old software,” said Thomas Maurer, cloud architect for Switzerland-based itnetX, a consulting firm that works with large enterprise clients in Europe. “Traditional, classic enterprise apps cannot just be migrated into the DevOps world in a nice way — they may have dependencies on legacy code, or they’re not designed to scale out.”

Windows DevOps tools have improved, and IT shops are finding ways to link them together. But many client-server apps in the Windows world, particularly rich-client apps, don’t lend themselves well to continuous development or rapid provisioning, said Chris Riley, DevOps analyst at Fixate IO, a content strategy consulting firm based in Livermore, Calif., and a TechTarget contributor. Riley has developed Windows applications, such as SharePoint.

Some standard client-server applications must be compiled before they are tested. Dependencies and prerequisites also bog down legacy Windows apps; installing older versions of SQL Server or SharePoint takes days. Some legacy Windows environments also function best when apps are installed locally on the developer's machine, whereas web and mobile applications typically integrate with REST APIs and avoid local binaries, Riley said.

Without the ability to spin up development and test environments easily, organizations tend to reuse one test bed.

“This severely limits when you can do your testing, because you don’t want to pollute that environment, or make a mistake and rebuild it,” Riley said. “Whereas in DevOps, it should be easy to make a mistake — you actually want to do that and move forward.”

Windows DevOps tools give legacy apps a makeover

If organizations decide to refactor legacy apps to run in a more cloud-native fashion, they can first use tools and services from Microsoft partners to help make those apps more efficient to test and deploy.

“Skytap and CloudShare provide on-demand environments for these tools,” Riley said. “So, you can spin up a new database environment in 15 minutes instead of days, and then delete it, then spin it up again.”

The two companies take different approaches to hosting legacy apps on flexible cloud infrastructure. Skytap Cloud, for example, supports older versions of Windows than Microsoft itself does, so customers can modernize apps at their own pace. CloudShare, meanwhile, hosts on-demand versions of Windows apps that are "somewhere between hard [and] impossible to run on the commodity clouds like Amazon [Web Services]," said Muly Gottlieb, the company's vice president of R&D.

CloudShare, a 10-year-old privately funded Israeli SaaS company, lets users set policies and spin up and down dev and test environments with complex traditional apps, such as SharePoint and SQL Server. The service can accommodate customers that aren’t a good fit for Azure Cloud services, such as VMware shops that support legacy Microsoft apps.

Legacy apps set up in CloudShare’s environment can circumvent problems around fast and ephemeral provisioning and provide workable dev and test services in Windows DevOps shops.

“In the past, developers would all share five or 10 master labs, which is bad for velocity, and lab scarcity is a productivity-killer,” Gottlieb said. With this approach, code is not always reproducible, and environments spun up from snapshots aren’t always consistent.

Electric Cloud has a similar offering in the Windows DevOps tools arena, called ElectricAccelerator, which automatically parses legacy apps into distributed form and speeds up dev and test. Startups such as IncrediBuild and Thriftly also look to optimize dev and test for legacy Windows apps. Third-party services, such as Zapier, attach REST APIs to legacy applications and bring them a step closer to the Windows DevOps world.

Good ol’ trusty VMs can give Windows apps a leg up

For on-premises IT organizations, advanced automation features within virtual machines also provide a steppingstone to modernize with containers and microservices.

“There are ways to build this agility, and it’s going back to taking another look at how we use virtual machines,” Riley said. “Companies can treat VMs exactly how they were supposed to be treated, which is more like containers.”

Enterprises can use a VM template with heavy applications to spin up and delete environments for virtualized legacy apps. It’s not as fast as containers, but there’s much more agility than what users may have had previously, Riley said. VM templates can call for Microsoft Visual Studio to be automatically installed at startup and linked to a source repository, so developers could pull down a branch, write code, test it, commit and destroy the environment — and then do it all over again in a new VM.

VM-based automation works well with rich-client apps, where heavy dependencies and prerequisites make it tricky to test functions with Windows DevOps tools, said Anthony Terra III, manager of software architecture and development at a law firm in the Philadelphia area.

“The only difference is that you need to run that rich-client application in a shell or a separate VM,” Terra said. “Normally, we have that VM already built, deploy the code to the VM and run it that way.”

Terra’s company also uses a Microsoft database tool called a Data-tier Application Component Package (DACPAC) to smooth the delivery of updates to SQL Server VMs.

“You have the ability to create, change and delete tables, but it never actually interacts with the database,” Terra said of DACPAC. “It creates a change set file, which can be run against any database that has the same structure.”

When code is deployed to dev, test or quality-assurance infrastructure, Microsoft's Team Foundation Server calls on DACPAC's change set file and applies the changes to the database environment. Terra's firm has added some safeguards: If a change could cause data loss, for example, the build fails.
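
A hedged sketch of that deployment step follows: a DACPAC published with SqlPackage.exe, with the data-loss guard left on so a destructive change fails the build. The install path, server, database and file names are assumptions; this is the standard SqlPackage publish flow, not the firm's actual pipeline.

```python
# Publish a DACPAC with SqlPackage.exe, failing the build on possible data
# loss. Paths and names are placeholders.
import subprocess
import sys

def publish_dacpac(dacpac_path: str, server: str, database: str) -> None:
    cmd = [
        r"C:\Program Files\Microsoft SQL Server\150\DAC\bin\SqlPackage.exe",  # typical install path
        "/Action:Publish",
        f"/SourceFile:{dacpac_path}",
        f"/TargetServerName:{server}",
        f"/TargetDatabaseName:{database}",
        "/p:BlockOnPossibleDataLoss=True",  # refuse changes that would drop data
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout, result.stderr, sep="\n")
        sys.exit(1)  # surfaces the failure to the build, mirroring the safeguard described

if __name__ == "__main__":
    publish_dacpac(r"build\AppDb.dacpac", "sql-test01", "AppDb_Test")
```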

The firm plans a move to containers in the coming year, but for now, VMs can slot in with Windows DevOps pipeline tools for a more consistent process.

“You’re not having people build VMs anymore because a tool is building them,” Terra said. “There is some fear in adopting something like that, but I think it’s misplaced, because it’s not like there’s less work because of it — the work you’re doing is just more focused on what’s around it.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Windows DevOps shops quickly gain on Linux counterparts

Almost overnight, Windows DevOps has gained ground on the open source world.

Windows shops have a well-earned reputation for conservatism, and a deeply entrenched set of legacy enterprise applications that often hinder automated application development. However, Microsoft products have recently focused on Windows DevOps support. There’s still work to do to underpin Windows container orchestration, but IT pros in Windows DevOps shops are determined to break free of stodgy stereotypes.

Those stereotypes are based in reality. Microsoft shops have been reluctant to deploy early versions of products, and some service providers and consultants that work with Windows-focused enterprises still encounter that history as they contend with DevOps.

In the last three years, Microsoft has lagged behind its open source counterparts in offering DevOps products, particularly for continuous deployment and application release automation, critics said. That lag, plus being locked in to Microsoft tools, is what holds back Windows DevOps.

“Microsoft is making some inroads,” said Brandon Cipes, managing director of DevOps at cPrime, an Agile consulting firm in Foster City, Calif. “They’re finally starting to open up compatibility with other things, but they’re years and years behind the ecosystem that’s developed in open source.”

Third-party tools bridge Windows DevOps gaps

Windows DevOps shops have cobbled together automation pipelines with inefficient multi-hop handoffs between Microsoft apps and third-party tools, Cipes said. For many companies, switching over to a Linux-based stack is easier said than done.

“People get on Microsoft and they never leave,” he said. “We have clients that do a lot of Linux, but everyone has at least one department or one corner of the office that’s still on Microsoft and they will openly comment that they’ll never completely remove themselves from it.”

Tools from vendors such as TeamCity, Octopus Deploy, Electric Cloud and CA’s Automic have helped early adopters. One such firm, Urban Science, a data analysis company that specializes in the automotive industry, uses Electric Cloud’s ElectricFlow continuous integration and continuous delivery (CI/CD) tool to automate software delivery in a heavily Windows-based environment.

“Having the orchestration of ElectricFlow allows us to keep one perspective in mind when we’re creating a workflow,” said Marc Priolo, configuration manager at Urban Science, based in Detroit.

ElectricFlow manages DevOps on Windows for about 80% of the company’s IT environment — “we try to use that as one tool to rule them all,” Priolo said. The other 20% of the work mostly involves handoffs from other tools such as Microsoft’s Team Foundation Server (TFS) to ElectricFlow — and here organizational inertia has held back Urban Science, he said.

“The other 20% would mean that our developers would have to change the way they interact with TFS, and it’s just not been a priority for us to change that,” Priolo said.

Occasionally, cPrime’s Windows clients are left with islands of automation when they must integrate third-party DevOps tools with older versions of Microsoft software, Cipes said.

“If you can’t integrate one little bit of automation, it gets you just such a short bit of the way,” he said. “A lot of people are trying to figure out how to deal with getting past that.”

Windows DevOps shops have succeeded in automating infrastructure with tools such as ElectricFlow. NetEnt, an online gaming systems service provider in Sweden, has rolled out ElectricFlow to manage deployments to its production infrastructure even before it automates the rest of the process.

“We’ve tied in all components that are needed to create servers, deploying and upgrading our applications, to give us some more deployment speed and free us up to find other bottlenecks,” said Aloisio Rocha, operations specialist at NetEnt. “We are looking to shift that left now, since the developers and testers have seen what we’ve done in production and they want the same kind of automation.”

Next, NetEnt will use ElectricFlow's API integration with VMware virtual machines to automate the creation of and updates to SQL Server databases. Such database-backed apps are a common DevOps challenge regardless of operating system.

“What we’re using right now is PowerShell scripts, so we have a middle hand from ElectricFlow to VMware’s API,” Rocha said. “We would like to skip those PowerShell scripts and write directly to VMware’s API.”
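
Calling the VMware API directly from code, rather than through intermediate PowerShell scripts, might start with something like the minimal pyVmomi sketch below, which connects to vCenter and lists VMs. The hostname and credentials are placeholders, and this is only a starting point, not NetEnt's ElectricFlow integration.

```python
# Minimal pyVmomi sketch: connect to vCenter and list VMs, as a first step
# toward driving the VMware API directly instead of via PowerShell glue.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_vms(host: str, user: str, password: str) -> None:
    ctx = ssl._create_unverified_context()          # lab-only: skip cert validation
    si = SmartConnect(host=host, user=user, pwd=password, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True
        )
        for vm in view.view:
            print(vm.name, vm.runtime.powerState)
        view.Destroy()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_vms("vcenter.example.internal", "automation@vsphere.local", "<password>")
```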

Microsoft products recast the Windows DevOps equation

For other Windows DevOps shops that struggle with islands of automation, the good news is that the most recent versions of Microsoft software are tightly integrated with third-party tools through REST APIs, and also offer more native features.

This year, Windows DevOps products, such as TFS, have improved support for continuous application deployments to production, and some enterprise IT shops have put them to use.

TFS 2015, for example, fell short in that it didn’t have a release pipeline until update 3, but TFS 2017 changed that, said Anthony Terra III, manager of software architecture and development for a law firm in the Philadelphia area.

“We have a full release pipeline now, and we can set it up so that business analysts are the ones that dictate when things get pushed to production,” Terra said. “We do hands-off deployments, and run three or four production deployments a day if we want to, without any issue.”

DevOps shops in the Azure cloud also have new options in the latest versions of Visual Studio Team Services (VSTS), a SaaS version of TFS with which a majority of VSTS users deploy multiple times a day, said Sam Guckenheimer, product owner for VSTS at Microsoft.

“There has been a lot of work in the most recent releases of Windows to make it leaner for server apps so that you could have a small footprint on your VM for Windows, and containerization is another step in that process,” he said.

Microsoft has added features to VSTS in the last six to 12 months to make it the best tool for CI/CD in Azure’s PaaS and IaaS platforms, Guckenheimer said. It has also shored up a workflow that uses Git for code review, testing and quality assurance, and added support for DevOps techniques such as kanban in VSTS. Further updates will facilitate coordination across teams and higher-level views of development teams’ status and assets.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.