Return of Bleichenbacher: ROBOT attack means trouble for TLS

A team of security researchers discovered eight leading vendors and open source projects whose implementations of the Transport Layer Security protocol are vulnerable to the Bleichenbacher oracle attack, a well-known flaw that was first described in 1998.

The Bleichenbacher attack has been referenced in all IETF specifications for the Transport Layer Security (TLS) protocol since version 1.0 in 1999, and implementers of TLS versions through 1.2 were warned to take steps to avoid the Bleichenbacher attack. However, the researchers noted that, based on the ease with which they were able to exploit the vulnerability, it appears that many implementers ignored the warnings.

The attack is named after its discoverer, Daniel Bleichenbacher, a Swiss cryptographer who was working for Bell Laboratories in 1998 when his research on the vulnerability was first published. The TLS protocol, which was meant to replace the Secure Sockets Layer, is widely used for encryption and the authentication of web servers.

The research team included Hanno Böck, an information security researcher; Juraj Somorovsky, a research associate at the Horst Görtz Institute for IT Security at the Ruhr-Universität Bochum in Germany; and Craig Young, a computer security researcher with Tripwire’s Vulnerability and Exposure Research Team (VERT). “Perhaps the most surprising fact about our research is that it was very straightforward,” the researchers wrote. “We took a very old and widely known attack and were able to perform it with very minor modifications on current implementations. One might assume that vendors test their TLS stacks for known vulnerabilities. However, as our research shows in the case of Bleichenbacher attacks, several vendors have not done this.”

The researchers said many web hosts are still vulnerable to the ROBOT attack and that nearly a third of the top 100 sites in the Alexa Top 1 Million list are vulnerable. The team identified vulnerable products from F5, Citrix, Radware, Cisco, Erlang, and others, and “demonstrated practical exploitation by signing a message with the private key of facebook.com’s HTTPS certificate.”

The researchers described their work as the “Return Of Bleichenbacher’s Oracle Threat” (ROBOT) and published it in a paper of the same title, as well as on a branded vulnerability website. The team also launched a capture-the-flag contest, posting an encrypted message and challenging the public to decrypt it using the strategies described in the paper.

TLS protocol designers at fault

The researchers placed the blame for the ease of their exploits squarely on the shoulders of TLS protocol designers. The ROBOT attack is made possible by the behavior of servers implementing TLS using the RSA Public-Key Cryptography Standards (PKCS) #1 v1.5 specification; the issues that enable the Bleichenbacher attack are fixed in later versions of PKCS. TLS 1.3, which is expected to be finalized soon, deprecates the use of PKCS #1 v1.5 and specifies use of PKCS #1 v2.2.
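To see how a padding format turns into an oracle, consider a simplified sketch, illustrative only and not taken from any vendor's code, of the structural check a TLS server performs on a decrypted RSA block under PKCS #1 v1.5:

```python
# Illustrative sketch only: the structural check a TLS server performs on a
# decrypted RSA block under PKCS #1 v1.5. Any server whose observable behavior
# (alert type, timing, connection handling) differs depending on the result of
# this check can become a Bleichenbacher oracle.

def pkcs1_v15_unpad(decrypted: bytes, expected_len: int = 48):
    """Return the premaster secret if the block is well formed, else None."""
    # Expected format: 0x00 0x02 | >= 8 nonzero random padding bytes | 0x00 | data
    if len(decrypted) < 11:
        return None
    if decrypted[0] != 0x00 or decrypted[1] != 0x02:
        return None
    try:
        separator = decrypted.index(0x00, 2)   # first zero byte after the padding
    except ValueError:
        return None
    if separator < 10:                         # padding must be at least 8 bytes
        return None
    secret = decrypted[separator + 1:]
    if len(secret) != expected_len:            # TLS RSA premaster secret is 48 bytes
        return None
    return secret
```

In a Bleichenbacher attack, the attacker never needs the recovered secret itself; being able to tell, over many adaptively chosen ciphertexts, whether this check passed or failed is enough to eventually decrypt traffic or forge signatures with the server's private key.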

The TLS protocol designers absolutely should have been more proactive about replacing PKCS#1 v1.5.
Craig Young, computer security researcher, Tripwire VERT

“The TLS protocol designers absolutely should have been more proactive about replacing PKCS#1 v1.5. There is an unfortunate trend in TLS protocol design to continue using technology after it should have been deprecated,” Young told SearchSecurity by email. He added that vendors also “should have been having their code audited by firms who specialize in breaking cryptography since most software companies do not have in-house expertise for doing so.”

A properly implemented TLS server gives no outward sign when it receives improperly formatted data, a requirement described as far back as 1999 in RFC 2246, “The TLS Protocol Version 1.0,” the original specification for TLS 1.0. The ROBOT attack “takes advantage of the fact that by failing in different ways, a TLS server can be coerced into revealing whether a particular message, when decrypted, is properly PKCS #1 formatted or not,” the RFC 2246 document states.

The solution proposed in that specification for avoiding “vulnerability to this attack is to treat incorrectly formatted messages in a manner indistinguishable from correctly formatted RSA blocks. Thus, when it receives an incorrectly formatted RSA block, a server should generate a random 48-byte value and proceed using it as the premaster secret. Thus, the server will act identically whether the received RSA block is correctly encoded or not.”
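A minimal sketch of that countermeasure, reusing the hypothetical pkcs1_v15_unpad() helper from the earlier example, could look like this:

```python
# Sketch of the RFC 2246 countermeasure (reuses the hypothetical
# pkcs1_v15_unpad() helper from the previous sketch): never branch visibly on
# the padding check. The handshake then fails later, at the Finished message,
# in the same way whether the RSA block was well formed or not.

import os

def premaster_secret_from_rsa_block(decrypted: bytes) -> bytes:
    fallback = os.urandom(48)            # random 48-byte value, generated unconditionally
    secret = pkcs1_v15_unpad(decrypted)
    return secret if secret is not None else fallback
```

Even with the random fallback, implementations must also avoid timing differences and divergent alert or connection behavior, since the attack only needs some observable difference between the two cases.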

Potential for attacks, detection and remediation

The researchers noted in the paper that the ROBOT flaw could lead to very serious attacks. “For hosts that are vulnerable and only support RSA encryption key exchanges it’s pretty bad. It means an attacker can passively record traffic and later decrypt it,” the team wrote on the ROBOT website, adding that “For hosts that usually use forward secrecy, but still support a vulnerable RSA encryption key exchange the risk depends on how fast an attacker is able to perform the attack. We believe that a server impersonation or man in the middle attack is possible, but it is more challenging.”

Young said that it might be possible to detect attempts to abuse the Bleichenbacher vulnerability, but it would not be easy. “This attack definitely triggers identifiable traffic patterns. Servers would observe a high volume of failed connections as well as a smaller number of connections with successful handshakes and then little to no data on the connection,” he told SearchSecurity. “Unfortunately, I am unaware of anybody actually doing this. Logging the information needed to detect this can be cumbersome and for a site receiving a billion connections a second, it could be quite difficult to notice 10-100 thousand failed connections.”
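Based on Young's description, one crude heuristic would be to count failed handshakes per source over time. The sketch below assumes a hypothetical per-connection log with a time bucket, client address and handshake-success flag; producing such records at scale is the hard part Young alludes to.

```python
# Toy detection heuristic based on Young's description. Assumes a hypothetical
# per-connection log yielding (hour_bucket, client_ip, handshake_ok) records;
# real servers and load balancers would first have to emit this data.

from collections import Counter, defaultdict

ALERT_THRESHOLD = 10_000   # failed handshakes per source per hour (assumed value)

def suspicious_sources(records):
    """Return (hour, ip) pairs whose failed-handshake count exceeds the threshold."""
    failures = defaultdict(Counter)
    for hour, ip, ok in records:
        if not ok:
            failures[hour][ip] += 1
    return {
        (hour, ip): count
        for hour, counts in failures.items()
        for ip, count in counts.items()
        if count >= ALERT_THRESHOLD
    }
```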

As for other, ongoing risks, Young said that while “PKCS#1 v1.5 is not being used in TLS 1.3,” it “is still used in other systems like XML encryption. Whether or not it can be disabled through configuration is highly application specific.”

AU combines talent analytics with HR management

The use of talent analytics may be creating a need for HR staff with specialized training. One source for these skills is programs that offer master’s degrees in analytics. Another may be a new program at American University that combines analytics with HR management.

American University, or AU, is making talent analytics, which is also called people analytics, a core part of the HR management training in a new master’s degree program, said Robert Stokes, the director of the Master of Science in human resource analytics and management at AU.

Stokes said he believes AU’s master’s degree program is unique, “because metrics and analytics run through all the courses.” He said metrics are a part of that training in talent management, compliance and risk reduction, to name a few HR focus areas.

Programs that offer a master’s degree in analytics are relatively new. The first school to offer this degree was North Carolina State University in 2007. Now, more than two dozen schools offer similar programs. There are colleges that offer talent analytics training, but usually as a course in an HR program.

These master’s programs produce graduates who can meet a broad range of business analytics needs, including talent analytics.

“We definitely have interest from companies in hiring our students for their HR departments,” said Joel Sokol, the director of the Master of Science in analytics program at the Georgia Institute of Technology.  “It’s not the highest-demand business function that our students go into, of course, but it’s certainly on the list,” he said in an email.

Sokol also pointed out that one of the program’s advisory board members is a vice president of HR at AT&T.

Analytics runs through all of HR

The demand for analytics-trained graduates is high. North Carolina State, for instance, said 93% of its master’s students were employed at graduation and earned an average base salary of just over $95,000.

Interest in master’s degree analytics training follows the rise of business analytics. The interest in employing people with quantitative talent analytics skills is part of this trend.

What HR organizations are trying to do is discover “how to drive value from people data,” said David Mallon, the head of research for Bersin by Deloitte, headquartered in New York.

“It wouldn’t shock anybody” if a person from supply chain, IT or marketing “brought a lot of data to the table; it’s just how they get things done,” Mallon said. “But in most organizations, it would be somewhat shocking if the HR person brought data to the conversation,” he said.

Mallon said he is seeing clear traction by HR departments to deliver better analysis, a trend backed up by Bersin’s just-released research on people analytics maturity. But he said only about 20% are doing new and different things with analytics. “They have data scientists, they have analytics teams, [and] they’re using new kinds of technologies to capture data, to model data,” he said.

The march to people analytics

“Conservatively, our data shows that at least 44% of HR organizations have an HR [or] people analytics team of some kind,” Mallon said. The percentage of departments with at least someone responsible for it — even part time — may be as high as 67%, he said.

The AU program’s first class this fall has about 10 students, and Stokes said he expects it to grow as word about the program spreads. Most HR programs that provide analytics training do so under separate courses that may not be integrated with the broader HR training, he said.

The intent is to use analytics and metrics to measure and make better decisions, Stokes said. An organization, for instance, should be able to quantify how much fiscal value is delivered by a training program. This type of people analytics may still be new to many HR organizations, which may rely on surveys to assess the effectiveness of a training program.
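As a simple illustration of the kind of metric Stokes describes, and with invented numbers, the fiscal value of a training program is often summarized as a return-on-investment figure:

```python
# Invented numbers, purely to illustrate the kind of metric Stokes describes:
# express a training program's fiscal value as return on investment.

program_cost = 120_000        # delivery, materials and participants' time
estimated_benefit = 310_000   # e.g., productivity gains and reduced turnover, estimated from HR data

roi = (estimated_benefit - program_cost) / program_cost
print(f"Training ROI: {roi:.0%}")   # -> Training ROI: 158%
```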

Organizations that are more mature aren’t just using surveys to try to determine employee engagement, Mallon said. They may be analyzing what’s going on in internal and external social media.

“They’re mining — they’re watching the interactions of employees in collaboration platforms and on your intranet,” Mallon said. “They’re bringing in performance data from existing business systems like ERPs and CRMs,” he said.

The best-performing organizations are using automation and machine learning to handle the routine reporting to free up time for higher-value research, Mallon said. But they are also using these tools “to spot trends that they didn’t even know were there,” he said.

Microsoft Azure cloud database activity takes off at Connect();

Microsoft plunged deeper into the open source milieu last week, as it expanded support for non-Microsoft software in its Azure cloud database lineup.

Among a host of developer-oriented updates discussed at the Microsoft Connect(); 2017 conference were new ties to the Apache Spark processing engine and Apache Cassandra, one of the top NoSQL databases. The company also added the MariaDB database to open source relational database services available on Azure that already include MySQL and PostgreSQL.

Taken together, the moves are part of an ongoing effort to fill in Microsoft’s cloud data management portfolio on the Azure platform, and to keep up with cloud computing market leader Amazon Web Services (AWS).

A database named MariaDB

Azure cloud database inclusion of MariaDB shows Microsoft’s “deep commitment to supporting data stores that might not necessarily be from Microsoft,” said consultant Ike Ellis, a Microsoft MVP and a partner at independent development house Crafting Bytes in San Diego, Calif.

Ali Ghodsi, CEO of Databricks, speaks at last week’s Microsoft Connect conference. Microsoft and Databricks have announced Azure Databricks, new services to expand the use of Spark on Azure.

Such support is important because MariaDB has gained attention in recent years, very much as an alternative to MySQL, which was the original poster child for open source relational databases.

MariaDB is a fork of MySQL, with development overseen primarily by Michael “Monty” Widenius, the MySQL creator who was vocally critical of Oracle’s stewardship of MySQL once it became a part of that company’s database lineup. In recent years, under the direction of Widenius and others, MariaDB has added thread pooling, parallel replication and various query optimizations. Widenius appeared via video at the Connect(); event, which took place in New York and was streamed online, to welcome Microsoft into the MariaDB fold.

Microsoft said it was readying a controlled beta of Azure Database for MariaDB. The company also said it was joining the MariaDB Foundation, the group that formally directs the database’s development.

“MariaDB has a lot of traction,” Ellis said. “Microsoft jumping into MariaDB is going to help its traction even more.”

Cassandra on the cloud

While MariaDB support expands SQL-style data development for Azure, newly announced Cassandra support broadens the NoSQL part of the Azure portfolio, which already included a Gremlin graph database API and a MongoDB API.

In the cloud world, you aren’t selling software; you are selling services.
David Chappell, independent consultant

Unlike MongoDB, which is document-oriented, Apache Cassandra is a wide-column store.

Like MongoDB, Cassandra has found considerable use in web and cloud data operations that must quickly shuttle fast-arriving data for processing.

Now in preview, Microsoft’s Cassandra API works with Azure Cosmos DB. This is a Swiss army knife-style database — sometimes described as a multimodel database — that the company spawned earlier this year from an offering known as DocumentDB. The Cassandra update fills in an important part of the Azure cloud database picture, according to Ellis.

“With the Cassandra API, Microsoft has hit everything you would want to hit in NoSQL stores,” he said.
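Microsoft positions the new API as Cassandra-compatible, so a reasonable expectation is that existing CQL clients can be pointed at a Cosmos DB account with little change. The sketch below uses the DataStax Python driver; the host, port and credentials are placeholders for the values the Azure portal would supply, and details of the preview may differ.

```python
# Minimal sketch, not Microsoft documentation: connect a standard CQL client
# (the DataStax Python driver) to a Cosmos DB account that exposes the
# Cassandra API. Host, port and credentials below are placeholders.

import ssl
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

ssl_opts = {"ssl_version": ssl.PROTOCOL_TLSv1_2}   # Cosmos DB requires TLS
auth = PlainTextAuthProvider(username="<account-name>", password="<account-key>")

cluster = Cluster(["<account-name>.cassandra.cosmosdb.azure.com"],
                  port=10350, auth_provider=auth, ssl_options=ssl_opts)
session = cluster.connect()

# Ordinary CQL from here on
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("CREATE TABLE IF NOT EXISTS demo.events (id uuid PRIMARY KEY, payload text)")
```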

Self-service Spark

Microsoft’s latest Spark move sees it working with Databricks, the startup formed by members of the original team that conceived the Spark data processing framework in the computer science labs at the University of California, Berkeley.

These new Spark services stand as an alternative to Apache Spark software already offered as part of Microsoft’s HDInsight product line, which was created together with Hadoop distribution provider Hortonworks.

Known as Azure Databricks, the new services were jointly developed by Databricks and Microsoft and are being offered by Microsoft as a “first-party Azure service,” according to Ali Ghodsi, CEO of San Francisco-based Databricks. Central to the offering is native integration with Azure SQL Data Warehouse, Azure Storage, Azure Cosmos DB and Power BI, he said.

Azure Databricks joins a host of recent cloud-based services appearing across a variety of clouds, mostly intended to simplify self-service big data analytics and machine learning over both structured and unstructured data.

Ghodsi said Databricks’ Spark software has found use at credit card companies doing real-time fraud analytics and at life sciences firms combining large data sets, IoT data and other applications.

Taking machine learning mainstream

The Microsoft-Databricks deal described at Connect(); is part of a continuing effort to broaden Azure’s use for machine learning and analytics. Earlier, at its Microsoft Ignite 2017 event, the company showed an Azure Machine Learning Workbench, an Azure Machine Learning Experimentation Service and an Azure Machine Learning Model Management service.

Observers generally cede overall cloud leadership to AWS, but cloud-based machine learning has become a more contested area, and one where Microsoft may have passed Amazon, according to David Chappell, principal at Chappell and Associates in San Francisco, Calif.

“AWS has a simple environment that is for use by developers. But it is so simple that it is quite constrained,” he said. “It gives you few options.”

The audience for Microsoft’s Azure machine learning efforts, Chappell maintained, will be broader. It spans developers, data scientists and others. “Microsoft is really trying to take machine learning mainstream,” he said.

Economics in the cloud

Microsoft’s broadened open source support is led by this year’s launch of SQL Server on Linux. But that is only part of Microsoft’s newfound open source fervor.

“Some people are skeptical of Microsoft and its commitment to open source, that it is like lip service,” Chappell said. “What they don’t always understand is that cloud computing and its business models change the economics of open source software.

“In the cloud world, you aren’t selling software; you are selling services,” Chappell continued. “Whether it is open source or not, whether it is MariaDB, MySQL or SQL Server — that doesn’t matter, because you are charging customers based on usage of services.”

Azure data services updates are not necessarily based on any newfound altruism or open source evangelism, Chappell cautioned. It’s just, he said, the way things are done in the cloud.

AWS and Microsoft announce Gluon, making deep learning accessible to all developers

New open source deep learning interface allows developers to more easily and quickly build machine learning models without compromising training performance. Jointly developed reference specification makes it possible for Gluon to work with any deep learning engine; support for Apache MXNet available today and support for Microsoft Cognitive Toolkit coming soon.

SEATTLE and REDMOND, Wash. — Oct. 12, 2017 — On Thursday, Amazon Web Services Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), and Microsoft Corp. (NASDAQ: MSFT) announced a new deep learning library, called Gluon, that allows developers of all skill levels to prototype, build, train and deploy sophisticated machine learning models for the cloud, devices at the edge and mobile apps. The Gluon interface currently works with Apache MXNet and will support Microsoft Cognitive Toolkit (CNTK) in an upcoming release. With the Gluon interface, developers can build machine learning models using a simple Python API and a range of prebuilt, optimized neural network components. This makes it easier for developers of all skill levels to build neural networks using simple, concise code, without sacrificing performance. AWS and Microsoft published Gluon’s reference specification so other deep learning engines can be integrated with the interface. To get started with the Gluon interface, visit https://github.com/gluon-api/gluon-api/.

Developers build neural networks using three components: training data, a model and an algorithm. The algorithm trains the model to understand patterns in the data. Because the volume of data is large and the models and algorithms are complex, training a model often takes days or even weeks. Deep learning engines like Apache MXNet, Microsoft Cognitive Toolkit and TensorFlow have emerged to help optimize and speed the training process. However, these engines require developers to define the models and algorithms up front using lengthy, complex code that is difficult to change. Other deep learning tools make model-building easier, but this simplicity can come at the cost of slower training performance.

The Gluon interface gives developers the best of both worlds — a concise, easy-to-understand programming interface that enables developers to quickly prototype and experiment with neural network models, and a training method that has minimal impact on the speed of the underlying engine. Developers can use the Gluon interface to create neural networks on the fly, and to change their size and shape dynamically. In addition, because the Gluon interface brings together the training algorithm and the neural network model, developers can perform model training one step at a time. This means it is much easier to debug, update and reuse neural networks.
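The release describes the programming model only in prose, but a minimal sketch, assuming Apache MXNet 0.11 or later where the gluon package ships, shows the style it refers to: the network is assembled from prebuilt blocks and trained one explicit, debuggable step at a time.

```python
# Minimal Gluon sketch (assumes Apache MXNet 0.11+): define a small network
# from prebuilt blocks, then run one explicit training step.

import mxnet as mx
from mxnet import autograd, gluon, nd

net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(64, activation="relu"))
    net.add(gluon.nn.Dense(10))                      # e.g., a 10-class classifier head
net.collect_params().initialize(mx.init.Xavier())

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.1})

# Dummy batch, just to show the mechanics of a single step
data = nd.random_normal(shape=(32, 784))
label = nd.zeros((32,))

with autograd.record():                              # record the forward pass
    loss = loss_fn(net(data), label)
loss.backward()                                      # compute gradients
trainer.step(32)                                     # one explicit parameter update
```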

“The potential of machine learning can only be realized if it is accessible to all developers. Today’s reality is that building and training machine learning models require a great deal of heavy lifting and specialized expertise,” said Swami Sivasubramanian, VP of Amazon AI. “We created the Gluon interface so building neural networks and training models can be as easy as building an app. We look forward to our collaboration with Microsoft on continuing to evolve the Gluon interface for developers interested in making machine learning easier to use.”

“We believe it is important for the industry to work together and pool resources to build technology that benefits the broader community,” said Eric Boyd, corporate vice president of Microsoft AI and Research. “This is why Microsoft has collaborated with AWS to create the Gluon interface and enable an open AI ecosystem where developers have freedom of choice. Machine learning has the ability to transform the way we work, interact and communicate. To make this happen we need to put the right tools in the right hands, and the Gluon interface is a step in this direction.”

“FINRA is using deep learning tools to process the vast amount of data we collect in our data lake,” said Saman Michael Far, senior vice president and CTO, FINRA. “We are excited about the new Gluon interface, which makes it easier to leverage the capabilities of Apache MXNet, an open source framework that aligns with FINRA’s strategy of embracing open source and cloud for machine learning on big data.”

“I rarely see software engineering abstraction principles and numerical machine learning playing well together — and something that may look good in a tutorial could be hundreds of lines of code,” said Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University. “I really appreciate how the Gluon interface is able to keep the code complexity at the same level as the concept; it’s a welcome addition to the machine learning community.”

“The Gluon interface solves the age old problem of having to choose between ease of use and performance, and I know it will resonate with my students,” said Nikolaos Vasiloglou, adjunct professor of Electrical Engineering and Computer Science at Georgia Institute of Technology. “The Gluon interface dramatically accelerates the pace at which students can pick up, apply and innovate on new applications of machine learning. The documentation is great, and I’m looking forward to teaching it as part of my computer science course and in seminars that focus on teaching cutting-edge machine learning concepts across different cities in the U.S.”

“We think the Gluon interface will be an important addition to our machine learning toolkit because it makes it easy to prototype machine learning models,” said Takero Ibuki, senior research engineer at DOCOMO Innovations. “The efficiency and flexibility this interface provides will enable our teams to be more agile and experiment in ways that would have required a prohibitive time investment in the past.”

The Gluon interface is open source and available today in Apache MXNet 0.11, with support for CNTK in an upcoming release. Developers can learn how to get started using Gluon with MXNet by viewing tutorials for both beginners and experts available by visiting https://mxnet.incubator.apache.org/gluon/.

About Amazon Web Services

For 11 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 90 fully featured services for compute, storage, networking, database, analytics, application services, deployment, management, developer, mobile, Internet of Things (IoT), Artificial Intelligence (AI), security, hybrid and enterprise applications, from 44 Availability Zones (AZs) across 16 geographic regions in the U.S., Australia, Brazil, Canada, China, Germany, India, Ireland, Japan, Korea, Singapore, and the UK. AWS services are trusted by millions of active customers around the world — including the fastest-growing startups, largest enterprises, and leading government agencies — to power their infrastructure, make them more agile, and lower costs. To learn more about AWS, visit https://aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit www.amazon.com/about and follow @AmazonNews.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) is the leading platform and productivity company for the mobile-first, cloud-first world, and its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, rrt@we-worldwide.com

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

Windows DevOps shops quickly gain on Linux counterparts

Almost overnight, Windows DevOps has gained ground on the open source world.

Windows shops have a well-earned reputation for conservatism, and a deeply entrenched set of legacy enterprise applications that often hinder automated application development. However, Microsoft products have recently focused on Windows DevOps support. There’s still work to do to underpin Windows container orchestration, but IT pros in Windows DevOps shops are determined to break free of stodgy stereotypes.

Those stereotypes are based in reality. Microsoft shops have been reluctant to deploy early versions of products, and some service providers and consultants that work with Windows-focused enterprises still encounter that history as they contend with DevOps.

In the last three years, Microsoft has lagged behind its open source counterparts in offering DevOps products, particularly for continuous deployment and application release automation, critics said. That lag, plus being locked in to Microsoft tools, is what holds back Windows DevOps.

“Microsoft is making some inroads,” said Brandon Cipes, managing director of DevOps at cPrime, an Agile consulting firm in Foster City, Calif. “They’re finally starting to open up compatibility with other things, but they’re years and years behind the ecosystem that’s developed in open source.”

Third-party tools bridge Windows DevOps gaps

Windows DevOps shops have cobbled together automation pipelines with inefficient multi-hop handoffs between Microsoft apps and third-party tools, Cipes said. For many companies, switching over to a Linux-based stack is easier said than done.

“People get on Microsoft and they never leave,” he said. “We have clients that do a lot of Linux, but everyone has at least one department or one corner of the office that’s still on Microsoft and they will openly comment that they’ll never completely remove themselves from it.”

Tools such as JetBrains’ TeamCity, Octopus Deploy, Electric Cloud’s ElectricFlow and CA’s Automic have helped early adopters. One such firm, Urban Science, a data analysis company that specializes in the automotive industry, uses the ElectricFlow continuous integration and continuous delivery (CI/CD) tool to automate software delivery in a heavily Windows-based environment.

“Having the orchestration of ElectricFlow allows us to keep one perspective in mind when we’re creating a workflow,” said Marc Priolo, configuration manager at Urban Science, based in Detroit.

Developers and testers have seen what we’ve done in production and they want the same kind of automation.
Aloisio Rocha, operations specialist, NetEnt

ElectricFlow manages DevOps on Windows for about 80% of the company’s IT environment — “we try to use that as one tool to rule them all,” Priolo said. The other 20% of the work mostly involves handoffs from other tools such as Microsoft’s Team Foundation Server (TFS) to ElectricFlow — and here organizational inertia has held back Urban Science, he said.

“The other 20% would mean that our developers would have to change the way they interact with TFS, and it’s just not been a priority for us to change that,” Priolo said.

Occasionally, cPrime’s Windows clients are left with islands of automation when they must integrate third-party DevOps tools with older versions of Microsoft software, Cipes said.

“If you can’t integrate one little bit of automation, it gets you just such a short bit of the way,” he said. “A lot of people are trying to figure out how to deal with getting past that.”

Windows DevOps shops have succeeded in automating infrastructure with tools such as ElectricFlow. NetEnt, an online gaming systems service provider in Sweden, has rolled out ElectricFlow to manage deployments to its production infrastructure even before it automates the rest of the process.

“We’ve tied in all components that are needed to create servers, deploying and upgrading our applications, to give us some more deployment speed and free us up to find other bottlenecks,” said Aloisio Rocha, operations specialist at NetEnt. “We are looking to shift that left now, since the developers and testers have seen what we’ve done in production and they want the same kind of automation.”

Next, NetEnt will use ElectricFlow’s API integration with VMware virtual machines to automate the creation of and updates to SQL Server databases. Such structured apps are a common DevOps challenge regardless of operating system.

“What we’re using right now is PowerShell scripts, so we have a middle hand from ElectricFlow to VMware’s API,” Rocha said. “We would like to skip those PowerShell scripts and write directly to VMware’s API.”

Microsoft products recast the Windows DevOps equation

For other Windows DevOps shops that struggle with islands of automation, the good news is that the most recent versions of Microsoft software are tightly integrated with third-party tools through REST APIs, and also offer more native features.

This year, Windows DevOps products, such as TFS, have improved support for continuous application deployments to production, and some enterprise IT shops have put them to use.

TFS 2015, for example, fell short in that it didn’t have a release pipeline until update 3, but TFS 2017 changed that, said Anthony Terra III, manager of software architecture and development for a law firm in the Philadelphia area.

“We have a full release pipeline now, and we can set it up so that business analysts are the ones that dictate when things get pushed to production,” Terra said. “We do hands-off deployments, and run three or four production deployments a day if we want to, without any issue.”

DevOps shops in the Azure cloud also have new options in the latest versions of Visual Studio Team Services (VSTS), a SaaS version of TFS with which a majority of users deploy multiple times a day, said Sam Guckenheimer, product owner for VSTS at Microsoft.

“There has been a lot of work in the most recent releases of Windows to make it leaner for server apps so that you could have a small footprint on your VM for Windows, and containerization is another step in that process,” he said.

Microsoft has added features to VSTS in the last six to 12 months to make it the best tool for CI/CD in Azure’s PaaS and IaaS platforms, Guckenheimer said. It has also shored up a workflow that uses Git for code review, testing and quality assurance, and added support for DevOps techniques such as kanban in VSTS. Further updates will facilitate coordination across teams and higher-level views of development teams’ status and assets.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.