
Windows IIS server hardening checklist

Default configurations for most OSes are not designed with security as the primary focus. Rather, they concentrate on ease of setup, use and communications. Therefore, web servers running default configurations are obvious targets for automated attacks and can be quickly compromised.

Device hardening is the process of enhancing web server security through a variety of measures to minimize its attack surface and eliminate as many security risks as possible in order to achieve a much more secure OS environment.

Because web servers are constantly attached to the internet and often act as gateways to an organization’s critical data and services, it is essential to ensure they are hardened before being put into production.

Consult this server hardening checklist to ensure server hardening policies are correctly implemented for your organization’s Windows Internet Information Services (IIS) server.

General

  • Never connect an IIS server to the internet until it is fully hardened.
  • Place the server in a physically secure location.
  • Do not install the IIS server on a domain controller.
  • Do not install a printer.
  • Use two network interfaces in the server: one for admin and one for the network.
  • Install service packs, patches and hot fixes.
  • Run Microsoft Security Compliance Toolkit.
  • Run IIS Lockdown on the server.
  • Install and configure URLScan.
  • Secure remote administration of the server, and configure for encryption, low session timeouts and account lockouts.
  • Disable unnecessary Windows services. (A PowerShell sketch of this follows the list.)
  • Ensure services are running with least-privileged accounts.
  • Disable FTP, Simple Mail Transfer Protocol and Network News Transfer Protocol services if they are not required.
  • Disable Telnet service.
  • Disable ASP.NET state service if not used by your applications.
  • Disable Web Distributed Authoring and Versioning if not used by the application, or secure it if it is required.
  • Do not install Microsoft Data Access Components (MDAC) unless specifically needed.
  • Do not install the HTML version of Internet Services Manager.
  • Do not install Microsoft Index Server unless required.
  • Do not install Microsoft FrontPage Server Extensions (FPSE) unless required.
  • Harden the TCP/IP stack.
  • Disable NetBIOS and Server Message Block — closing ports 137, 138, 139 and 445.
  • Reconfigure recycle bin and page file system data policies.
  • Secure CMOS (complementary metal-oxide semiconductor) settings.
  • Secure physical media — CD-ROM drive and so on.
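
As one hedged illustration of the service-related items above, the following PowerShell sketch stops and disables a few services that a hardened IIS host typically does not need. The service names are assumptions; confirm what is installed and actually required on your server before disabling anything.

# Hypothetical list of services not needed on a hardened IIS host.
$unneededServices = @(
    'TlntSvr',   # Telnet
    'FTPSVC',    # FTP Publishing Service
    'SMTPSVC',   # Simple Mail Transfer Protocol
    'NntpSvc'    # Network News Transfer Protocol
)

foreach ($name in $unneededServices) {
    $svc = Get-Service -Name $name -ErrorAction SilentlyContinue
    if ($svc) {
        Stop-Service -Name $name -Force
        Set-Service -Name $name -StartupType Disabled
    }
}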

Accounts

  • Remove unused accounts from the server.
  • Disable Windows Guest account.
  • Rename Administrator account, and set a strong password. (A sketch using the local accounts cmdlets follows this list.)
  • Disable IUSR_Machine account if it is not used by the application.
  • Create a custom least-privileged anonymous account if applications require anonymous access.
  • Do not give the anonymous account write access to web content directories or allow it to execute command-line tools.
  • If you host multiple web applications, configure a separate anonymous user account for each one.
  • Configure ASP.NET process account for least privilege. This only applies if you are not using the default ASP.NET account, which is a least-privileged account.
  • Enforce strong account and password policies for the server.
  • Enforce two-factor authentication where possible.
  • Restrict remote logons. (The “access this computer from the network” user right is removed from the Everyone group.)
  • Do not share accounts among administrators.
  • Disable null sessions (anonymous logons).
  • Require approval for account delegation.
  • Do not allow users and administrators to share accounts.
  • Do not create more than two accounts in the administrator group.
  • Require administrators to log on locally, or secure the remote administration system.
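
As a sketch of the Guest and Administrator items above, the LocalAccounts cmdlets (available in Windows PowerShell 5.1 and later) can be used as follows. The new administrator name is a placeholder; choose your own.

# Disable the built-in Guest account.
Disable-LocalUser -Name 'Guest'

# Rename the built-in Administrator account and set a strong password.
Rename-LocalUser -Name 'Administrator' -NewName 'SrvAdmin01'
Set-LocalUser -Name 'SrvAdmin01' -Password (Read-Host -AsSecureString 'New administrator password')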

Files and directories

  • Use multiple disks or partition volumes, and do not install the web server home directory on the same volume as the OS folders.
  • Contain files and directories on NT file system (NTFS) volumes.
  • Put website content on a nonsystem NTFS volume.
  • Create a new site, and disable the default site.
  • Put log files on a nonsystem NTFS volume but not on the same volume where the website content resides.
  • Restrict the Everyone group — no access to \WINNT\system32 or web directories.
  • Ensure website root directory has deny write access control entry (ACE) for anonymous internet accounts.
  • Ensure content directories have deny write ACE for anonymous internet accounts. (One way to apply such an ACE is sketched after this list.)
  • Remove resource kit tools, utilities and SDKs.
  • Remove any sample applications or code.
  • Remove IP address in header for Content-Location.
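
One hedged way to apply the deny-write entries described above is with icacls. The path and the anonymous account name below are assumptions; substitute your site's content root and the anonymous account your application actually uses.

# Hypothetical content path; adjust to your website root.
$contentRoot = 'D:\WebSites\MySite'

# Add a deny-write ACE for the anonymous account on the directory tree.
# (OI)(CI) makes the entry inherit to subfolders and files; W denies write.
icacls $contentRoot /deny 'IUSR:(OI)(CI)W'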

Shares

  • Remove all unnecessary shares, including default administration shares.
  • Restrict access to required shares — the Everyone group does not have access.
  • Remove administrative shares — C$ and Admin$ — if they are not required. (Microsoft System Center Operations Manager — formerly Microsoft Systems Management Server and Microsoft Operations Manager — requires these shares.)

Ports

  • Restrict internet-facing interfaces to port 443 (SSL).
  • Run IIS Lockdown Wizard on the server.

Registry

  • Restrict remote registry access.
  • Secure the local Security Account Manager (SAM) database by implementing the NoLMHash policy. (A registry sketch follows this list.)
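
A minimal sketch of the NoLMHash item above, set directly in the registry; the equivalent Group Policy setting is "Network security: Do not store LAN Manager hash value on next password change".

# Prevent Windows from storing LM hashes of account passwords in the SAM database.
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' -Name 'NoLMHash' -Value 1 -Type DWord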

Auditing and logging

  • Audit failed logon attempts. (An auditpol example follows this list.)
  • Relocate and secure IIS log files.
  • Configure log files with an appropriate file size depending on the application security requirement.
  • Regularly archive and analyze log files.
  • Audit access to the MetaBase.xml and MBSchema.xml files.
  • Configure IIS for World Wide Web Consortium extended log file format auditing.
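
For the failed-logon item above, a minimal example using the built-in auditpol tool from an elevated PowerShell session (the subcategory names shown are the English defaults and can differ on localized systems):

# Enable failure auditing for logon events.
auditpol /set /subcategory:"Logon" /failure:enable

# Verify the resulting audit policy.
auditpol /get /category:"Logon/Logoff"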

Sites and virtual directories

  • Put websites on a nonsystem partition.
  • Disable Parent Paths setting.
  • Remove any unnecessary virtual directories.
  • Remove or secure MDAC Remote Data Services virtual directory.
  • Do not grant included directories read web permission.
  • Restrict write and execute web permissions for anonymous accounts in virtual directories.
  • Ensure there is script source access only on folders that support content authoring.
  • Ensure there is write access only on folders that support content authoring and these folders are configured for authentication and SSL encryption.
  • Remove FPSE if not used. If FPSE are used, update and restrict access to them.
  • Remove the IIS Internet Printing virtual directory.

Script mappings

  • Map extensions not used by the application to 404.dll — .idq, .htw, .ida, .shtml, .shtm, .stm, .idc, .htr, .printer.
  • Map unnecessary ASP.NET file type extensions to HttpForbiddenHandler in Machine.config.

ISAPI filters

  • Remove unnecessary or unused ISAPI filters from the server.

IIS Metabase

  • Restrict access to the metabase by using NTFS permissions (%systemroot%\system32\inetsrv\metabase.bin).
  • Restrict IIS banner information (disable IP address in content location).

Server certificates

  • Ensure certificate date ranges are valid.
  • Only use certificates for their intended purpose. For example, the server certificate is not used for email.
  • Ensure the certificate’s public key is valid, all the way to a trusted root authority.
  • Confirm that the certificate has not been revoked.

Machine.config

  • Map protected resources to HttpForbiddenHandler.
  • Remove unused HttpModules.
  • Disable tracing: <trace enabled="false"/>.
  • Turn off debug compiles: <compilation debug="false" explicit="true" defaultLanguage="vb">.


HCI storage adoption rises as array sales slip

The value and volume of data keep growing, yet in 2019 most primary storage vendors reported a drop in sales.

Part of that has to do with companies moving data to the cloud. It is also being redistributed on premises, moving from traditional storage arrays to hyper-converged infrastructure (HCI) and data protection products that have expanded into data management.

That helps explain why Dell Technologies bucked the trend of storage revenue declines last quarter. A close look at Dell’s results shows its gains came from areas outside of traditional primary storage arrays that have been flat or down from its rivals.

Dell’s storage revenue of $4.15 billion for the quarter grew 7% over last year, but much of Dell’s storage growth came from HCI and data protection. According to Dell COO Jeff Clarke, orders of VxRail HCI storage appliances increased 82% over the same quarter in 2018. Clarke said new Data Domain products also grew significantly, although Dell provided no revenue figures for backup.

Hyper-converged products combine storage, servers and virtualization in one box. VxRail, which relies on vSAN software from Dell-owned VMware running on Dell PowerEdge, appears to be cutting in on sales of both independent servers and storage. Dell server revenue declined around 10% year-over-year, around the same as rival Hewlett Packard Enterprise’s (HPE) server decline.

“We’re in this data era,” Clarke said on Dell’s earnings call last week. “The amount of data created is not slowing. It’s got to be stored, which is probably why we are seeing a slightly different trend from the compute side to the storage side. But I would point to VxRail hyper-convergence, where we’ll bring computing and storage together, helping customers build on-prem private clouds.”


Dell is counting on a new midrange storage array platform to push storage revenue in 2020. Clarke said he expected those systems to start shipping by the end of January.

Dell’s largest storage rivals have reported a pause in spending, partially because of global conditions such as trade wars and tariffs. NetApp revenues have fallen year-over-year each of the last three quarters, including a 9.6% dip to $1.38 billion last quarter. HPE said its storage revenue of $848 million dropped 12% from last year. HPE’s Nimble Storage midrange array platform grew 2% and Simplivity HCI increased 14% year-over-year, a sign that 3PAR enterprise arrays fell and the vendor’s new Primera flagship arrays have not yet generated meaningful sales.

Dell Technologies COO Jeff Clarke

IBM storage has also declined throughout the year, dropping 4% year-over-year to $434 million last quarter. Pure Storage’s revenue of $428 million last quarter increased 16% from last year, but Pure had consistently grown revenue at significantly higher rates throughout its history.

Meanwhile, HCI storage revenue is picking up. Nutanix last week reported a leveling of revenue following a rocky start to 2019. Related to VxRail’s increase, VMware said its vSAN license bookings had increased 35%. HPE’s HCI sales grew, while overall storage dropped. Cisco did not disclose revenue for its HyperFlex HCI platform, but CEO Chuck Robbins called it out for significant growth last quarter.

Dell/VMware and Nutanix still combine for most of the HCI storage market. Nutanix’s revenue ($314.8 million) and subscription ($380.0 million) results were better than expected last quarter, although both numbers were around the same as a year ago. It’s hard to accurately measure Nutanix’s growth from 2018 because the vendor switched to subscription billing. But Nutanix added 780 customers and its 66 deals of more than $1 million were its most ever. And the total value of its customer contracts came to $305 million, up 9% from a year ago.

Nutanix’s revenue shift came after the company switched to a software-centric model. It no longer records revenue from the servers it ships its software on. Nutanix and VMware are the dominant HCI software vendors.

“It’s just the two of us, us and VMware,” Nutanix CEO Dheeraj Pandey said in an interview after his company’s earnings call. “Hyper-convergence now is really driven by software as opposed to hardware. I think it was a battle that we had to win over the last three or four years, and the dust has finally settled and people see it’s really an operating system play. We’re making it all darn simple to operate.”


How to Create and Manage Hot/Cold Tiered Storage

When I was working in Microsoft’s File Services team around 2010, one of the primary goals of the organization was to commoditize storage and make it more affordable to enterprises. Legacy storage vendors offered expensive products that often consumed a majority of the IT department’s budget, and they were slow to make improvements because customers were locked in. Since then, every release of Windows Server has included storage management features which were previously only provided by storage vendors, such as deduplication, replication, and mirroring. These features could be used to manage commodity storage arrays and disks, reducing costs and eliminating vendor lock-in. Windows Server now offers a much-requested feature: the ability to move files between different tiers of “hot” (fast) storage and “cold” (slow) storage.

Managing hot/cold storage is conceptually similar to computer memory cache but at an enterprise scale. Files which are frequently accessed can be optimized to run on the hot storage, such as faster SSDs. Meanwhile, files which are infrequently accessed will be pushed to cold storage, such as older or cheaper disks. These lower priority files will also take advantage of file compression techniques like data deduplication to maximize storage capacity and minimize cost. Identical or varying disk types can be used because the storage is managed as a pool using Windows Server’s storage spaces, so you do not need to worry about managing individual drives. The file placement is controlled by the Resilient File System (ReFS), a file system which is used to optimize and rotate data between the “hot” and “cold” storage tiers in real-time based on their usage. However, using tiered storage is only recommended for workloads that are not regularly accessed. If you have permanently running VMs or you are using all the files on a given disk, there would be little benefit in allocating some of the disk to cold storage. This blog post will review the key components required to deploy tiered storage in your datacenter.

Overview of Resilient File System (ReFS) with Storage Tiering

The Resilient File System was first introduced in Windows Server 2012 with support for limited scenarios, but it has been greatly enhanced through the Windows Server 2019 release. It was designed to be efficient, support multiple workloads, avoid corruption and maximize data availability. More specifically to tiering though, ReFS divides the pool of storage into two tiers automatically, one for high-speed performance and one of maximizing storage capacity. The performance tier receives all the writes on the faster disk for better performance. If those new blocks of data are not frequently accessed, the files will gradually be moved to the capacity tier. Reads will usually happen from the capacity tier, but can also happen from the performance tier as needed.

Storage Spaces Direct and Mirror-Accelerated Parity

Storage Spaces Direct (S2D) is one of Microsoft’s enhancements designed to reduce costs by allowing servers with Direct Attached Storage (DAS) drives to support Windows Server Failover Clustering. Previously, highly-available file server clusters required some type of shared storage on a SAN or used an SMB file share, but S2D allows for small local clusters which can mirror the data between nodes. Check out Altaro’s blog on Storage Spaces Direct for in-depth coverage on this technology.

With Windows Server 2016 and 2019, S2D offers mirror-accelerated parity, which is used for tiered storage, but it is generally recommended for backups and less frequently accessed files rather than heavy production workloads such as VMs. In order to use tiered storage with ReFS, you will use mirror-accelerated parity. This provides decent storage capacity by using both mirroring and a parity drive to help prevent and recover from data loss. In the past, mirroring and parity would conflict, and you would usually have to select one or the other. Mirror-accelerated parity works with ReFS by taking writes and mirroring them (hot storage), then using parity to optimize their storage on disk (cold storage). By switching between these storage optimization techniques, ReFS provides admins with the best of both worlds.

Creating Hot and Cold Tiered Storage

When configuring hot and cold storage, you define the ratio between the two tiers. For most workloads, Microsoft recommends allocating 20% to hot and 80% to cold. If you are running high-performance workloads, consider allocating more hot storage to support more writes. On the flip side, if you have a lot of archival files, then allocate more cold storage. Remember that with a storage pool you can combine multiple disk types under the same abstracted storage space. The following PowerShell cmdlets show you how to configure a 1,000 GB disk to use 20% (200 GB) for performance (hot storage) and 80% (800 GB) for capacity (cold storage).
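
A minimal sketch, assuming a Storage Spaces Direct pool whose name starts with "S2D" and the default Performance and Capacity tier templates (the pool and tier names are assumptions; check Get-StoragePool and Get-StorageTier in your environment):

# Create a 1,000 GB tiered ReFS volume: 200 GB on the mirrored performance
# tier and 800 GB on the parity capacity tier.
New-Volume -FriendlyName 'TieredVolume' `
           -FileSystem CSVFS_ReFS `
           -StoragePoolFriendlyName 'S2D*' `
           -StorageTierFriendlyNames Performance, Capacity `
           -StorageTierSizes 200GB, 800GB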

Managing Hot and Cold Tiered Storage

If you want to increase the performance of your disk, then you will allocate a greater percentage of the disk to the performance (hot) tier. In the following example we use the PowerShell cmdlets to create a 30:70 ratio between the tiers:
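
A sketch under the same assumptions as above; the per-volume tier friendly names vary, so list them with Get-StorageTier before resizing:

# Confirm the friendly names of the tiers attached to the volume.
Get-StorageTier | Select-Object FriendlyName, Size

# Shift the 1,000 GB volume to a 30:70 performance/capacity split.
# The names below are placeholders based on the volume created earlier.
Resize-StorageTier -FriendlyName 'TieredVolume-Performance' -Size 300GB
Resize-StorageTier -FriendlyName 'TieredVolume-Capacity' -Size 700GB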

Unfortunately, this resizing only changes the ratio between the tiers and does not change the size of the partition or volume, so you will likely also want to resize those, for example with the Resize-Partition cmdlet.

Optimizing Hot and Cold Storage

Based on the types of workloads you are using, you may wish to further optimize when data is moved between hot and cold storage, which is known as the “aggressiveness” of the rotation. By default, the hot storage will wait until 85% of its capacity is full before it begins to send data to the cold storage. If you have a lot of write traffic going to the hot storage, then you want to reduce this value so that performance-tier data gets pushed to the cold storage sooner. If you have fewer write requests and want to keep data in hot storage longer, then you can increase this value. Since this is an advanced configuration option, it must be configured via the registry on every node in the S2D cluster, and it also requires a restart. Here is a sample script to run on each node if you want to change the aggressiveness so that it swaps files when the performance tier reaches 70% capacity:
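
A sketch using the registry value documented for ReFS mirror-accelerated parity rotation; verify the value name against current Microsoft documentation for your Windows Server build:

# Rotate data out of the performance tier once it reaches 70% full
# (the default threshold is 85%). A restart is required to take effect.
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Policies' `
                 -Name 'DataDestageSsdFillRatioThreshold' `
                 -Value 70 -Type DWord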

You can apply this setting cluster-wide by using the following cmdlet:
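
A sketch that pushes the same registry value to every node, assuming the failover clustering cmdlets are installed and PowerShell remoting is enabled between the nodes:

# Apply the threshold cluster-wide, then reboot one node at a time.
Get-ClusterNode | ForEach-Object {
    Invoke-Command -ComputerName $_.Name -ScriptBlock {
        Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Policies' `
                         -Name 'DataDestageSsdFillRatioThreshold' `
                         -Value 70 -Type DWord
    }
}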

NOTE: If this is applied to an active cluster, make sure that you reboot one node at a time to maintain service availability.

Wrap-Up

Now you should be fully equipped with the knowledge to optimize your commodity storage using the latest Windows Server storage management features. You can pool your disks with Storage Spaces, use Storage Spaces Direct (S2D) to eliminate the need for a SAN, and use ReFS to optimize the performance and capacity of these drives. By understanding the tradeoffs between performance and capacity, your organization can significantly save on storage management and hardware costs. Windows Server has made it easy to centralize and optimize your storage so you can reallocate your budget to a new project – or to your wages!

What about you? Have you tried any of the features listed in the article? Have they worked well for you? Have they not worked well? Why or why not? Let us know in the comments section below!


Author: Symon Perriman

Spectra Logic launches stand-alone storage management software

Spectra Logic’s new storage management software can search through high-cost primary storage for inactive data and move it to a lower-cost tier, regardless of who makes the hardware.

StorCycle was released today, and unlike the operational software for Spectra Logic’s tape, object and disk storage appliances, it is completely stand-alone. It is installed on a virtual machine or dedicated server that sits between the primary storage tier and a perpetual storage tier, where it migrates inactive data from the former to the latter.

Spectra Logic separates storage into two tiers. The primary tier consists of fast, high-cost storage like flash and high-performance disk, while the perpetual storage tier consists of slower, low-cost storage like tape, object storage, network-attached storage and the public cloud. Moving infrequently used data out of the primary tier saves money, and StorCycle is designed to streamline that migration.

“StorCycle is the glue that ties these two tiers together,” said David Feller, vice president of product management and solutions engineering at Spectra Logic.

Automated data tiering is not new. Cloud file system startup Elastifile, which was bought by Google earlier this year, supports tiering to on-premises bare-metal servers, AWS and Google Cloud Platform (GCP). NetApp Cloud Volumes OnTap and Hitachi Vantara have similar storage optimization capabilities for hybrid environments, and Druva recently introduced a capability to optimize storage among different AWS tiers.


Spectra Logic’s offering stands out in that it is a very simple and complete product, said Mark Peters, principal analyst and practice director at IT analyst, research and validation firm Enterprise Strategy Group. Peters said while StorCycle is standalone, the fact that a customer can buy it with compatible secondary storage hardware spanning disk, object store and tape is a distinct advantage.

StorCycle’s most notable feature, though, is how it places an HTML link in the file’s original location pointing to where the archived file has been moved to. Peters said this helps prevent systems from timing out when trying to retrieve data from a high-latency source, such as tape or public cloud. More importantly, he said this simplifies the recall and recovery process for users, as they can access their archived data from where it was originally.

Peters said organizations need to be careful when evaluating storage management software, as it’s possible they could increase their overall costs rather than saving money. StorCycle needs to be installed on a VM or dedicated server, which is a relatively minimal expense, but some products may call for additional primary storage. He encouraged buyers to “do their homework,” and ensure that their storage management software costs don’t exceed their savings.

Peters also said some organizations develop automated data tiering in-house, which is potentially cheaper than buying third-party storage management software. However, this introduces a layer of complexity and could create a legacy problem down the line.

“They can potentially run into trouble if the person who developed the application leaves,” Peters said.

StorCycle moves inactive data to colder storage tiers and replaces the files with an HTML link for easy access and restore.

StorCycle can automatically detect inactive data and migrate it, but it also includes a feature for “project-based” migrations. Users can tag data sets to be moved to the perpetual tier, and StorCycle will perpetually move new data with that tag out of primary. When data generation for the project is complete, all of its related data is in the same place in the perpetual tier, ready for further analysis. Feller said ideal use cases for this include sensor-based and machine-gathered data, such as seismology studies or autonomous car research.

StorCycle enters beta today, with general availability slated for November 2019. The initial release will support AWS and Wasabi cloud, with support for Microsoft Azure and GCP in the works. The first release will need to be installed on VMs or servers running Windows systems, but support for Linux is being planned.

Spectra Logic did not have price details for StorCycle but stated it will be available both as a perpetual license and as an annual subscription license.

Go to Original Article
Author:

Cloudera open source route seeks to keep big data alive

Cloudera has had a busy 2019. The vendor started off the year by merging with its primary rival Hortonworks to create a new Hadoop big data juggernaut. However, in the ensuing months, the newly merged company has faced challenges as revenue has come under pressure and the Hadoop market overall has shown signs of weakness.

Against that backdrop, Cloudera said July 10 that it would be changing its licensing model, taking a fully open source approach. The Cloudera open source route is a new strategy for the vendor. In the past, Cloudera had supported and contributed to open source projects as part of the larger Hadoop ecosystem but had kept its high-end product portfolio under commercial licenses.

The new open source approach is an attempt to emulate the success that enterprise Linux vendor Red Hat has achieved with its open source model. Red Hat was acquired by IBM for $34 billion in a deal that closed in July. In the Red Hat model, the code is all free and organizations pay a subscription fee for support services.

New subscription model

Under the new model, starting in September 2019 Cloudera will require users to buy a subscription agreement to access binaries and the Cloudera-hosted source they are built from, for all new versions and maintenance releases of supported products. Not all of the vendor’s products are open source today, but they all will be by February 2020, if everything goes according to the Cloudera open source plan.

Over the past several weeks, Cloudera has briefed customers, partners and analysts on the new licensing model and the feedback thus far has been largely positive, said David Moxey, VP of product marketing at Cloudera.

“We believe that reflects market understanding and acceptance of the Red Hat model which we have chosen to emulate,” Moxey said. “Influential analysts have been positive and have helped clients and colleagues accurately understand the change and rationale.”

Open source path a departure

The shift to a completely open source model is a departure from earlier comments the company made about maintaining both open source and some proprietary tooling, said James Curtis, analyst at 451 Research. Overall, Curtis said he sees the Cloudera open source move settling a number of things for the vendor.

“Trying to drive both a hybrid and OS [open source] strategy would have had its challenges,” Curtis said. “Now the company can move forward and concentrate on its products and services.”


Doug Henschen, an analyst at Constellation Research, said he wasn’t entirely surprised by Cloudera’s all open source commitment as it’s consistent with Hortonworks’ strategy before the merger with Cloudera.

“It makes sense to maintain that strategy not only from the standpoint of keeping faith with former Hortonworks customers, but also to trade on the momentum that open source software is seeing in the enterprise,” Henschen said.

Also, now that it’s apparent that cloud services — in particular AWS EMR (Amazon Elastic MapReduce) — are cutting into Cloudera’s business, it makes sense to back the hybrid- and multi-cloud messaging and differentiation from cloud services with a purely open source approach, Henschen added. Looking forward, Henschen said he sees Cloudera continuing to emphasize its hybrid- and multi-cloud story by continuing to develop and deliver its Altus Cloud services.

“Cloud services are clearly winning in the market because they offer both elasticity and minimal administration, two traits that save customers money,” Henschen said. “What wouldn’t be surprising is seeing new Altus services options, perhaps including pared down, lower-cost options featuring Spark, object storage and fewer, if any, Hadoop components.”

Hadoop’s prospects uncertain

At the core of Cloudera’s business is Hadoop, a technology and a market that is arguably in retreat as organizations choose different options for handling large data sets and big data.

While the financial fortunes of Cloudera and other Hadoop vendors, notably MapR, have been under pressure, Curtis noted that there is still demand for Hadoop and related distributed data processing frameworks. The market is shifting and settling, however.

Cloudera Palo Alto headquarters (on screen is the Workload XM)

“What is happening is that where enterprises are doing their analytics is changing and specifically to the cloud,” he said. “Cloudera was late to fully adopt the cloud, but it wasn’t completely behind either.”

Curtis added that while the Cloudera open source route still leaves the vendor with catchup to do, he doesn’t expect Hadoop to go away completely as there is still a demand for that type of processing capability.

But big data market is vibrant

As for Henschen, he said he doesn’t think of it as ‘the Hadoop market’ so much as ‘the big data market,’ and that’s not going away.

“Companies are continuing to harness big data to understand their businesses and to spot new opportunities,” he said. “What is clear is that companies are moving away from complexity, if they can avoid it.”

The demand to reduce complexity has forced changes in Hadoop software and vendors associated with Hadoop. For example, Henschen noted that Cloudera embraced Apache Spark as early as 2017, with Cloudera executives making the point that they were behind more Spark software deployments than any other vendor. Indeed, as part of the company’s recent open source announcements, Henschen said that Cloudera executives emphasized that Cloudera plans to invest in Spark, Kubernetes and Kafka.

Henschen emphasized that it’s important to note that AWS, Microsoft, Google and other cloud vendors still offer Hadoop services because that software addresses certain big data needs well.

Also, there are thousands of on-premises Hadoop deployments that aren’t disappearing right away, Henschen pointed out.

The installed base of deployments is a legacy that the vendor can build on as the Cloudera open source strategy unfolds, he said. But the big question, according to Henschen, is whether the vendor can succeed in offering the right mix of software and services that will address today’s big data needs, appeal to customers and drive growth.

“Hadoop helped usher in the big data era, but today there are more choices and combinations of software and services that companies can use to address their big data needs,” he said.


Microsoft seeks broader developer appeal with Azure DevOps

Microsoft has rebranded its primary DevOps platform as Azure DevOps to reach beyond Windows developers or Visual Studio developers and appeal to those who just want a solid DevOps platform.

Azure DevOps encompasses five services that span the breadth of the development lifecycle. The services aim to help developers plan, build, test, deploy and collaborate to ship software faster and with higher quality. These services include the following:

  • Azure Pipelines is a CI/CD service.
  • Azure Repos offers source code hosting with version control.
  • Azure Boards provides project management with support for Agile development using Kanban boards and bug tracking.
  • Azure Artifacts is a package management system to store artifacts.
  • Azure Test Plans lets developers define, organize, and run test cases and report any issues through Azure Boards.

Microsoft customers wanted the company to break up the Visual Studio Team Services (VSTS) platform so they could choose individual services, said Jamie Cool, Microsoft’s program manager for Azure DevOps. By doing so, the company also hopes to attract a wider audience that includes Mac and Linux developers, as well as open source developers in general, who avoid Visual Studio, Microsoft’s flagship development tool set.

Open source software continues to achieve broad acceptance within the software industry. However, many developers don’t want to switch to Git source control and stay with VSTS for everything else. Over the past few years, Microsoft has technically separated some of its developer tool functions.

But the company has struggled to convince developers of its cross-platform capabilities and that they can pick and choose which pieces come from Microsoft and which come from elsewhere, said Rockford Lhotka, CTO of Magenic, an IT services company in St. Louis Park, Minn.

Rockford Lhotka, CTO, Magenic

“The idea of a single vendor or single platform developer is probably gone at this point,” he said. “A Microsoft developer may use ASP.NET, but must also use JavaScript, Angular and a host of non-Microsoft tools, as well. Similarly, a Java developer may well be building the back-end services to support a Xamarin mobile app.”

Most developers build for a lot of different platforms and use a lot of different development languages and tools. However, the features of Azure DevOps will work for everyone, Lhotka said.

Azure DevOps is Microsoft’s latest embrace of open source development, from participation in open source development to integrating tools and languages outside its own ecosystem, said Mike Saccotelli, director of modern apps at SPR, a digital technology consulting firm in Chicago.

In addition to the rebranded Azure DevOps platform, Microsoft also plans to provide free CI/CD technology for any open source project, including unlimited compute on Azure, with the ability to run up to 10 jobs concurrently, Cool said. Microsoft has also made Azure Pipelines the first of the Azure DevOps services to be available on the GitHub Marketplace.

Bringing Device Support to Windows Server Containers

When we introduced containers to Windows with the release of Windows Server 2016, our primary goal was to support traditional server-oriented applications and workloads. As time has gone on, we’ve heard feedback from our users about how certain workloads need access to peripheral devices—a problem when you try to wrap those workloads in a container. We’re introducing support for select host device access from Windows Server containers, beginning in Insider Build 17735 (see table below).

We’ve contributed these changes back to the Open Containers Initiative (OCI) specification for Windows. We will be submitting changes to Docker to enable this functionality soon. Watch the video below for a simple example of this work in action (hint: maximize the video).

What’s Happening

To provide a simple demonstration of the workflow, we have a simple client application that listens on a COM port and reports incoming integer values (the PowerShell console on the right). We did not have any devices on hand to speak over physical COM, so we ran the application inside of a VM and assigned the VM’s virtual COM port to the container. To mimic a COM device, an application was created to generate random integer values and send them over a named pipe to the VM’s virtual COM port (the PowerShell console on the left).

As we see in the video at the beginning, if we do not assign COM ports to our container, when the application runs in the container and tries to open a handle to the COM port, it fails with an IOException (because as far as the container knew, the COM port didn’t exist!). On our second run of the container, we assign the COM port to the container and the application successfully gets and prints out the incoming random ints generated by our app running on the host.

How It Works

Let’s look at how it will work in Docker. From a shell, a user will type:

docker run --device="<IdType>/<Id>"

For example, if you wanted to pass a COM port to your container:

docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" mcr.microsoft.com/windowsservercore-insider:latest

The value we’re passing to the device argument is simple: it looks for an IdType and an Id. For this coming release of Windows, we only support an IdType of “class”. For Id, this is a device interface class GUID. The values are delimited by a slash, “/”. Whereas in Linux a user assigns individual devices by specifying a file path in the “/dev/” namespace, in Windows we’re adding support for a user to specify an interface class, and all devices which identify as implementing this class will be plumbed into the container.

If a user wants to specify multiple classes to assign to a container:

docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" --device="class/DCDE6AF9-6610-4285-828F-CAAF78C424CC" --device="…" mcr.microsoft.com/windowsservercore-insider:latest

What are the Limitations?

Process isolation only: We only support passing devices to containers running in process isolation; Hyper-V isolation is not supported, nor do we support host device access for Linux Containers on Windows (LCOW).

We support a distinct list of devices: In this release, we targeted enabling a specific set of features and a specific set of host device classes. We’re starting with simple buses. The complete list that we currently support is below.

Device Type    Interface Class GUID
GPIO           916EF1CB-8426-468D-A6F7-9AE8076881B3
I2C Bus        A11EE3C6-8421-4202-A3E7-B91FF90188E4
COM Port       86E0D1E0-8089-11D0-9CE4-08003E301F73
SPI Bus        DCDE6AF9-6610-4285-828F-CAAF78C424CC

Stay tuned for a Part 2 of this blog that explores the architectural decisions we chose to make in Windows to add this support.

What’s Next?

We’re eager to get your feedback. What specific devices are most interesting for you and what workload would you hope to accomplish with them? Are there other ways you’d like to be able to access devices in containers? Leave a comment below or feel free to tweet at me.

Cheers,

Craig Wilhite (@CraigWilhite)