
HashiCorp Nomad vs. Kubernetes matchup intensifies with 0.11

A HashiCorp Nomad beta release this week could help it encroach on Kubernetes’ territory with advanced IT automation for legacy applications and a simpler approach to container orchestration.

HashiCorp first released the open source workload orchestrator in 2015, a year after Kubernetes arrived in the market. But since then, Kubernetes has become the industry-standard container orchestrator, while Nomad Enterprise is HashiCorp’s least-used commercial product in a portfolio that also includes Terraform infrastructure as code, Vault secrets management and Consul service discovery.

These products are also commonly used in Kubernetes environments, and HashiCorp officials typically prefer to frame Nomad as complementary to Kubernetes rather than a competitor. HashiCorp’s documentation has pointed out that previous versions of Nomad orchestrated only compute resources, scheduling workloads on separately managed underlying infrastructure. This made for a simpler but less complete approach to workload automation, as those versions of Nomad did not handle networking and storage for application clusters the way Kubernetes does.

However, with version 0.11, released in beta this week, HashiCorp Nomad’s storage features draw closer to those offered by Kubernetes. The new capabilities include support for shared storage volumes through the open source Container Storage Interface (CSI), a set of APIs supported by most major storage vendors. CSI is most commonly used with Kubernetes, but any CSI plugins written to work with Kubernetes will also work with HashiCorp Nomad as of version 0.11.

HashiCorp Nomad version 0.11 also introduces horizontal application autoscaling capabilities, as well as support for task dependencies in cases where application components must be deployed in a certain order on a container cluster.

“[Nomad] can still coexist with Kubernetes, especially for legacy applications when customers prefer to use Kubernetes for containers,” said Amith Nair, VP of product marketing at HashiCorp. “But the [new] features make it a more direct comparison, and we’re starting to see increased usage on the open source side, where some customers are downloading it to replace Kubernetes.”

In the last six months, open source downloads of HashiCorp Nomad have doubled each month to reach 20,000 per month, Nair said. A hosted Nomad cloud service also remains on the company’s long-term roadmap, which would likely compete with the many hosted Kubernetes services available.

HashiCorp Nomad seeks app modernization niche

Most of HashiCorp Nomad’s workload orchestration features can be used to modernize legacy applications that run on VMs. Nomad’s scheduler, when used with Consul service discovery, can optimize how applications on VMs and containers use underlying resources. With version 0.11’s CSI support, HashiCorp Nomad can perform non-disruptive rolling updates of both container-based and VM-based applications.

Such features may put HashiCorp Nomad in closer competition with IT vendors such as VMware, which offers Kubernetes container orchestration alongside VM management. HashiCorp has an uphill battle in that market as well, given VMware’s ubiquity in enterprise shops. But as with Kubernetes, HashiCorp Nomad could capture some attention from IT pros because of its simplicity, analysts said.


“Nomad can infiltrate the same market as VMware’s Project Pacific and Tanzu with a low-cost alternative for users that want to manage traditional workloads and cloud-native workloads with one entity,” said Roy Illsley, analyst at Omdia, a technology market research firm in London. “The challenge is that HashiCorp hasn’t been great at marketing — tech people know it, but tech people don’t necessarily sign the checks.”

With a recent $175 million funding infusion, however, that could change, and HashiCorp could play a role similar to Linkerd, a service mesh rival to Google and IBM’s Istio that has held its own in the enterprise because many consider it easier to set up and use.

HashiCorp Nomad vs. Kubernetes pros and cons

Two HashiCorp users published blog posts last year detailing their decision to deploy Nomad over Kubernetes. The on-premises IT team at hotel search site Trivago moved its IT monitoring workloads to the public cloud using Nomad in early 2019. Trivago’s IT staff already had experience with HashiCorp’s tool and found Kubernetes more complex than was necessary for its purposes.

“The additional functionality that Kubernetes had to offer was not worth the extra efforts and human resources required to keep it running,” wrote Inga Feick, a DevOps engineer at Trivago, based in Dusseldorf, Germany. “Remote cloud solutions like a managed Kubernetes cluster or [Amazon ECS] are not an option for our I/O-intense jobs either.”

Another freelance developer cited Nomad’s simplicity in a November 2019 post about porting a project to Nomad from Kubernetes.

“Kubernetes is getting all the visibility for good reasons, but it’s probably not suitable for small to medium companies,” wrote Fabrice Aneche, a software engineering consultant based in Quebec. “You don’t need to deploy Google infrastructure when you are not Google.”

Both blog posts noted significant downsides to HashiCorp Nomad vs. Kubernetes at the time, however.

“Nomad is one binary, but the truth is Nomad is almost useless without Consul,” Aneche noted in his post. This adds some complexity to HashiCorp Nomad for production use, since users are required to use Consul’s template language to track changes to the Nomad environment. Version 0.11 adds more detailed insights and alerts to a Nomad remote execution UI to make service management easier. Aneche did not respond to requests for comment about the version 0.11 release this week.

Meanwhile, Trivago’s Feick noted the lack of support for autoscaling in January 2019 made HashiCorp Nomad cumbersome to manage at times.

“You need to specify the resource requirements per job,” she wrote. “Give a job too much CPU and memory and Nomad cannot allocate any, or at least not many, other jobs on the same host. Give it not enough memory and you might find it dying… It would be neat if Nomad had a way of calculating those resource needs on its own. One can dream.” Feick didn’t respond to requests for additional comment this week.

HashiCorp Nomad version 0.11 takes the first step toward full autoscaling support with horizontal application autoscaling, or the ability to provide applications with cluster resources dynamically without manual intervention, a company spokesperson said.

Subsequent releases will support horizontal cluster autoscaling that adds resources to the cluster infrastructure as necessary, along with vertical application autoscaling, which will add and remove instances of applications in response to demand. Autoscaling features will work with VM workloads but are primarily intended for use with containers.


SAP Ariba Discovery now open to all buyers and suppliers

To help mitigate massive disruptions to the global supply chain, SAP is making it easier for buyers and suppliers to connect by providing free access to SAP Ariba Discovery.

The move was announced in conjunction with SAP Ariba Live, an annual conference that was recast as a virtual conference Wednesday due to the ongoing coronavirus pandemic.


SAP Ariba Discovery is a service that enables buyers and suppliers to connect on the SAP Ariba Network, which currently includes 4 million suppliers. Buyers have always been able to join the network for free, but suppliers must pay fees after they have made connections with buyers on the network. The supplier fees are being waived until at least June 30, 2020, at which point the situation will be reevaluated and the free access may be extended, said Chris Haydon, president of the SAP procurement solutions area.

The move was made to help alleviate global supply chain disruption because of the coronavirus outbreak crisis, Haydon said.

As the COVID-19 outbreak unfolded, SAP saw SAP Ariba Discovery as a useful tool that enables buyers to find supply sources, regardless of whether they were existing SAP Ariba customers, Haydon said.

SAP Ariba Discovery enables buyers and suppliers to connect on the Ariba Network.

“We wanted to remove the barriers by making it free to any supplier for the next 90 days,” he said. “In many ways, it’s a custom-built tool for this dynamic sourcing of demand, and given the times we’re in, we just thought it was the right thing to do.”

The move was a positive step in a time of crisis, said Predrag Jakovljevic, principal industry analyst with Technology Evaluation Centers, a Montreal-based enterprise technology analysis firm.


“Opening up the [SAP Ariba Discovery] to all suppliers and buyers without any fees charged by Ariba Network is a nice gesture to help companies navigate during these trying times of disruption,” Jakovljevic said. “If your usual suppliers are unable to help you today, there might be some in regions still not — or less — affected by coronavirus, who are also begging for some business.”

Simple, intelligent procurement UI

At the virtual SAP Ariba Live conference, which happened in the form of a series of video presentations, SAP demonstrated the integrated procurement environment that it calls “intelligent spend management.”

SAP Ariba, SAP Fieldglass and S/4HANA operational procurement are now integrated under a single UI that runs on top of the HANA database, and is connected through SAP Cloud Platform.

The idea is to provide a simpler, common user experience, but intelligent spend management goes further than that, Haydon said.

“Intelligent spend management is a forward-looking way to think about procurement. It’s about using technology that focuses its power on the tasks that can and should be automated or eliminated so you can focus on the aspects of business that can and should have human expertise,” he said. “We need to get beyond focusing only on creating simple screens and UIs and, instead, power procurement to succeed amongst today’s known disruptions and the unknown disruptions of tomorrow.”

This integration of procurement applications and embedding of intelligence could make SAP stronger in the areas of direct procurement and sourcing by integrating data from S/4HANA ERP such as bills of materials (BOMs), routing and manufacturing planning into the process, Jakovljevic said.

“One thing to watch about all that integration with S/4HANA is whether [SAP Ariba] is becoming serious about direct procurement and sourcing, which is much more complicated than buying the office staples,” he said. “Ariba has always been strong in the indirect materials space and perhaps is now getting serious about catching up with the likes of Jaggaer or SourceDay.”


Nvidia scoops up object storage startup SwiftStack

Nvidia plans to acquire object storage vendor SwiftStack to help its customers accelerate their artificial intelligence, high-performance computing and data analytics workloads.

The GPU vendor, based in Santa Clara, Calif., will not sell SwiftStack software but will use SwiftStack’s 1space as part of its internal artificial intelligence (AI) stack. It will also enable customers to use the SwiftStack software as part of their AI stacks, according to Nvidia’s head of enterprise computing, Manuvir Das.

SwiftStack and Nvidia disclosed the acquisition today. They did not reveal the purchase price but said they expect the deal to close within weeks.

Nvidia previously worked with SwiftStack

Nvidia worked with San Francisco-based SwiftStack for more than 18 months on tackling the data challenges associated with running AI applications at a massive scale. Nvidia found 1space particularly helpful. SwiftStack introduced 1space in 2018 to accelerate data access across public and private clouds through a single object namespace.

“Simply put, it’s a way of placing the right data in the right place at the right time, so that when the GPU is busy, the data can be sent to it quickly,” Das said.

Das said Nvidia customers would be able to use enterprise storage from any vendor. The SwiftStack 1space technology will form the “storage orchestration layer” that sits between the compute and the storage to properly place the data so the AI stack runs optimally, Das said.

“We are not a storage vendor. We do not intend to be a storage vendor. We’re not in the business of selling storage in any form,” Das said. “We work very closely with our storage partners. This acquisition is designed to further the integration between different storage technologies and the work we do for AI.”


Nvidia partners with storage vendors such as Pure Storage, NetApp, Dell EMC and IBM. The storage vendors integrate Nvidia GPUs into their arrays or sell the GPUs along with their storage in reference architectures.

Nvidia attracted to open source tech

Das said Nvidia found SwiftStack attractive because its software is based on open source technology. SwiftStack’s eponymous object- and file-based storage and data management software is rooted in open source OpenStack Swift. Das said Nvidia plans to continue to work with the SwiftStack team to advance and optimize the technology and make it available through open source avenues.

“The SwiftStack team is part of Nvidia now,” he said. “They’re super talented. So, the innovation will continue to happen, and all that innovation will be upstreamed into the open source SwiftStack. It will be available to anybody.”


SwiftStack laid off an undisclosed number of sales and marketing employees in late 2019, but kept the engineering and support team intact, according to president Joe Arnold. He attributed the layoffs to a shift in sales focus from classic backup and archiving to AI, machine learning and data analytics use cases.

The SwiftStack 7.0 software update that emerged late last year took aim at analytics, HPC, AI and ML use cases, such as autonomous vehicle applications that feed data to GPU-based servers. SwiftStack said at the time that it had worked with customers to design clusters that could scale to handle multiple petabytes of data and support throughput in excess of 100 GB per second.

Das said Nvidia has been using SwiftStack’s object storage technology as well as 1space. He said Nvidia’s internal work on data science and AI applications quickly showed the company that accelerating the compute shifts the bottleneck elsewhere, to the storage. That was a factor in Nvidia’s acquisition of SwiftStack, he noted.

“We recognized a long time ago that the way to help the customers is not just to provide them a GPU or a library, but to help them create the entire stack, all the way from the GPU up to the applications themselves. If you look at Nvidia now, we spend most of our energy on the software for different kinds of AI applications,” Das said.

He said Nvidia would fully support SwiftStack’s customer base. SwiftStack claims it has around 125 customers. Its product lineup includes SwiftStack’s object storage software, the ProxyFS file system for integrated file and object API access, and 1space. SwiftStack’s software is designed to run on commodity hardware on premises, and its 1space technology can run in the public cloud.

SwiftStack spent more than eight years expanding its software’s capabilities after the company’s 2011 founding. Das said Nvidia has no reason to sell SwiftStack’s proprietary software because it does not compete head-to-head with other object storage providers.

“Our philosophy here at Nvidia is we are not trying to compete with infrastructure vendors by selling some kind of a stack that competes with other peoples’ stacks,” Das said. “Our goal is simply to make people successful with AI. We think, if that happens, everybody wins, including Nvidia, because we believe GPUs are the best platform for AI.”


The Complete Guide to Scale-Out File Server for Hyper-V

This article will help you understand how to plan, configure and optimize your SOFS infrastructure, primarily focused on Hyper-V scenarios.

Over the past decade, it seems that an increasing number of components are recommended when building a highly-available Hyper-V infrastructure. I remember my first day as a program manager at Microsoft when I was tasked with building my first Windows Server 2008 Failover Cluster. All I had to do was connect the hardware, configure shared storage, and pass Cluster Validation, which was fairly straightforward.


Figure 1 – A Failover Cluster with Traditional Cluster Disks

Nowadays, the recommended cluster configuration for Hyper-V virtual machines (VMs) adds management layers such as Cluster Shared Volumes (CSV) disks, plus a clustered file server that hosts the file path used to access them, known as a Scale-Out File Server (SOFS). While the SOFS provides the fairly basic functionality of keeping a file share online, the configuration can be challenging even for experienced Windows Server administrators. To see the complete stack that Microsoft recommends, scroll down to the figures throughout this article. It may appear daunting, but do not worry, we’ll explain what all of these building blocks are for.

While there are management tools like System Center Virtual Machine Manager (SCVMM) that can automate the entire infrastructure deployment, most organizations need to configure these components independently. There is limited content online explaining how Scale-Out File Server clusters work and best practices for optimizing them. Let’s get into it!

Scale-Out File Server (SOFS) Capabilities & Limitations

A SOFS cluster should only be used for specific scenarios. The following features have been tested and are either supported, supported but not recommended, or not supported with SOFS.

Supported SOFS scenarios

  • File Server
    • Deduplication – VDI Only
    • DFS Namespace (DFSN) – Folder Target Server Only
    • File System
    • SMB
      • Multichannel
      • Direct
      • Continuous Availability
      • Transparent Failover
  • Other Roles
    • Hyper-V
    • IIS Web Server
    • Remote Desktop (RDS) – User Profile Disks Only
    • SQL Server
    • System Center Virtual Machine Manager (VMM)

Supported, but not recommended SOFS scenarios

  • File Server
    • Folder Redirection
    • Home Directories
    • Offline Files
    • Roaming User Profiles

Unsupported SOFS scenarios

  • File Server
    • BranchCache
    • Deduplication – General Purpose
    • DFS Namespace (DFSN) – Root Server
    • DFS Replication (DFSR)
    • Dynamic Access Control (DAC)
    • File Server Resource Manager (FSRM)
    • File Classification Infrastructure (FCI)
    • Network File System (NFS)
    • Work Folders

Scale-Out File Server (SOFS) Benefits

Fundamentally, a Scale-Out File Server is a Failover Cluster running the File Server role. It keeps the file share path (\\ClusterStorage\Volume1) continually available so that it can always be accessed. This is critical because Hyper-V VMs use this file path to access their virtual hard disks (VHDs) via the SMB3 protocol. If this file path is unavailable, then the VMs cannot access their VHDs and cannot operate.

Additionally, it also provides the following benefits:

  • Deploy Multiple VMs on a Single Disk – SOFS allows multiple VMs running on different nodes to use the same CSV disk to access their VHDs.
  • Active / Active File Connections – All cluster nodes will host the SMB namespace so that a VM can connect or quickly reconnect to any active server and have access to its CSV disk.
  • Automatic Load Balancing of SOFS Clients – Since multiple VMs may be using the same CSV disk, the cluster will automatically distribute the connections. Clients can connect to the disk through any cluster node, so they are sent to the server with the fewest file share connections. By distributing the clients across different nodes, the network traffic and its processing overhead are spread across the hardware, which should maximize performance and reduce bottlenecks.
  • Increased Storage Traffic Bandwidth – Using SOFS, the VMs will be spread across multiple nodes. This also means that the disk traffic will be distributed across multiple connections which maximizes the storage traffic throughput.
  • Anti-Affinity – If you are hosting similar roles on a cluster, such as two active/active file shares for a SOFS, these should be distributed across different hosts. Using the cluster’s anti-affinity property, these two roles will always try to run on different hosts, eliminating a single point of failure.
  • CSV Cache – SOFS files which are frequently accessed will be copied locally on each cluster node in a cache. This is helpful if the same type of VM file is read many times, such as in VDI scenarios.
  • CSV CHKDSK – CSV disks have been optimized to skip the offline phase, which means that they will come online faster after a crash. Faster recovery time is important for high availability since it minimizes downtime.

Scale-Out File Server (SOFS) Cluster Architecture

This section will explain the design fundamentals of Scale-Out File Servers for Hyper-V. The SOFS can run on the same cluster as the Hyper-V VMs it is supporting, or on an independent cluster. If you are running everything on a single cluster, the SOFS must be deployed as a File Server role directly on the cluster; it cannot run inside a clustered VM, since that VM won’t start without access to the File Server. Neither the VM nor the virtualized File Server could start up, because each depends on the other.

Hyper-V Storage and Failover Clustering

When Hyper-V was first introduced with Windows Server 2008 Failover Clustering, it had several limitations that have since been addressed. The main challenge was that each VM required its own cluster disk, which made the management of cluster storage complicated. Large clusters could require dozens or hundreds of disks, one for each virtual machine. This was sometimes not even possible due to limitations from hardware vendors that required a unique drive letter for each disk. Technically, you could run multiple VMs on the same cluster disk, each with their own virtual hard disks (VHDs). However, this configuration was not recommended, because if one VM crashed and had to fail over to a different node, it would force all the VMs using that disk to shut down and fail over to other nodes. This caused unplanned downtime, and as virtualization became more popular, a cluster-aware file system known as Cluster Shared Volumes (CSV) was created. See Figure 1 (above) for the basic architecture of a cluster using traditional cluster disks.

Cluster Shared Volume (CSV) Disks and Failover Clustering

CSV disks were introduced in Windows Server 2008 R2 as a distributed file system that is optimized for Hyper-V VMs. The disk must be visible to all cluster nodes, use NTFS or ReFS, and can be created from pools of disks using Storage Spaces.

The CSV disk is designed to host VHDs from multiple VMs from different nodes and run them simultaneously. The VMs can distribute themselves across the cluster nodes, balancing the hardware resources which they are consuming. A cluster can host multiple CSV disks and their VMs can freely move around the cluster, without any planned downtime. The CSV disk traffic communicates over standard networks using SMB, so traffic can be routed across different cluster communication paths for additional resiliency, without being restricted to use a SAN.

A Cluster Shared Volumes disk functions much like a file share hosting the VHD file, since it provides storage and controls access. Virtual machines access their VHDs the way clients would access a file hosted in a file share, using a path like \\ClusterStorage\Volume1. This file path is identical on every cluster node, so as a VM moves between servers it will always be able to access its disk using the same file path. Figure 2 shows a Failover Cluster storing its VHDs on a CSV disk. Note that multiple VHDs for different VMs on different nodes can reside on the same disk, which they access through the SMB share.


Figure 2 – A Failover Cluster with a Cluster Shared Volumes (CSV) Disk
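As a rough illustration of how a disk becomes a CSV disk, the conversion is a single PowerShell command once the disk is available to the cluster, and the CSV in-memory read cache mentioned earlier can be tuned for read-heavy scenarios such as VDI. This is a minimal sketch; the disk name and cache size are placeholder values, not settings taken from this article.

```powershell
# Minimal sketch, run on any cluster node with the FailoverClusters module installed.
# "Cluster Disk 2" and the 1024 MB cache size are example values only.
Import-Module FailoverClusters

# Convert an available cluster disk into a Cluster Shared Volume;
# it then appears under C:\ClusterStorage\ on every node.
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# Optionally enable the CSV block cache (size in MB) to speed up repeated reads.
(Get-Cluster).BlockCacheSize = 1024

# Verify the volume state and current owner.
Get-ClusterSharedVolume | Select-Object Name, State, OwnerNode
```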

Scale-Out File Server (SOFS) and Failover Clustering

The SMB file share used for the CSV disk must be hosted by a Windows Server File Server. However, the file share should also be highly-available so that it does not become a single point of failure. A clustered File Server can be deployed as a SOFS through Failover Cluster Manager as described at the end of this article.

The SOFS will publish the VHD’s file share location (known as the “CSV Namespace”) on every node. This active/active configuration allows clients to access their storage through multiple pathways. It provides additional resiliency and availability: if one node crashes, the VM temporarily pauses its transactions until it can quickly reconnect to the disk via another active node, but it remains online.

Since the SOFS runs on a standard Windows Server Failover Cluster, it must follow the hardware guidance provided by Microsoft. One of the fundamental rules of failover clustering is that all the hardware and software should be identical. This allows a VM or file server to operate the same way on any cluster node, as all the settings, file paths, and registry entries will be the same. Make sure you run the Cluster Validation tests and follow Altaro’s Cluster Validation troubleshooting guidance if you see any warnings or errors.

The following figure shows a SOFS deployed in the same cluster. The clustered SMB shares create a highly-available CSV namespace allowing VMs to access their disk through multiple file paths.


Figure 3 – A Failover Cluster using Clustered SMB File Shares for CSV Disk Access
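Once the SOFS role exists, the clustered SMB share behind that namespace can also be created from PowerShell. The sketch below assumes hypothetical names (the SOFS role SOFS01, a VMStore folder on the first CSV volume, and CONTOSO accounts for the Hyper-V hosts), none of which come from this guide.

```powershell
# Minimal sketch: publish a continuously available SMB share on an existing SOFS role.
# SOFS01, VMStore and the CONTOSO accounts are placeholder names only.
$folder = "C:\ClusterStorage\Volume1\VMStore"
New-Item -Path $folder -ItemType Directory -Force | Out-Null

$share = @{
    Name                  = "VMStore"
    Path                  = $folder
    ScopeName             = "SOFS01"      # the SOFS network name that clients connect to
    FullAccess            = 'CONTOSO\HVHost01$', 'CONTOSO\HVAdmins'
    ContinuouslyAvailable = $true         # enables SMB Transparent Failover
}
New-SmbShare @share
```

Hyper-V hosts would then point a VM at a path such as \\SOFS01\VMStore\vm01.vhdx (again, an illustrative path rather than one taken from this article).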

Storage Spaces Direct (S2D) with SOFS

Storage Spaces Direct (S2D) lets organizations deploy small failover clusters with no shared storage. S2D will generally use commodity servers with direct-attached storage (DAS) to create clusters that use mirroring to replicate their data between local disks to keep their states consistent. These S2D clusters can be deployed as Hyper-V hosts, storage hosts or in a converged configuration running both roles. The storage uses Scale-Out File Servers to host the shares for the VHD files.

In Figure 4, a SOFS cluster is shown that uses Storage Spaces Direct, rather than shared storage, to host the CSV volumes and VHD files. Each CSV volume and its respective VHDs are mirrored between the local storage arrays.


Figure 4 – A Failover Cluster with Storage Spaces Direct (S2D)
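For completeness, a configuration like the one in Figure 4 can be stood up with a handful of cmdlets on validated hardware. This is a minimal sketch under assumed names (nodes S2D-N1 and S2D-N2, cluster S2DCLU01) and an example volume size; it is not a full deployment guide.

```powershell
# Minimal sketch of a two-node S2D build; all names and sizes are examples.
Test-Cluster -Node "S2D-N1","S2D-N2" -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name "S2DCLU01" -Node "S2D-N1","S2D-N2" -NoStorage

# Claim the local drives on each node and build the clustered storage pool.
Enable-ClusterStorageSpacesDirect

# Carve out a mirrored CSV volume to hold the VHD files.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" -FileSystem CSVFS_ReFS -Size 2TB
```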

Infrastructure Scale-Out File Server (SOFS)

Windows Server 2019 introduced a new Scale-Out File Server role called the Infrastructure File Server. This functions as the traditional SOFS, but it is specifically designed to only support Hyper-V virtual infrastructure with no other types of roles. There can also be only one Infrastructure SOFS per cluster.

The Infrastructure SOFS can be created manually via PowerShell or automatically when it is deployed by Azure Stack or System Center Virtual Machine Manager (SCVMM). This role will automatically create a CSV namespace share using the syntax \\InfraSOFSName\Volume1. Additionally, it will enable the Continuous Availability (CA) setting for the SMB shares, also known as SMB Transparent Failover.


Figure 5 – Infrastructure File Server Role on a Windows Server 2019 Failover Cluster
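If you are creating the Infrastructure SOFS manually rather than letting Azure Stack or SCVMM do it, the PowerShell route is a single cmdlet. The cluster and role names below are hypothetical examples.

```powershell
# Minimal sketch: add the Windows Server 2019 Infrastructure File Server role by hand.
# "HVCLUSTER01" and "InfraSOFS01" are placeholder names.
Add-ClusterScaleOutFileServerRole -Cluster "HVCLUSTER01" -Infrastructure -Name "InfraSOFS01"

# VMs would then reference their disks through the auto-created namespace,
# for example \\InfraSOFS01\Volume1\vm01.vhdx (shown for illustration only).
```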

Cluster Sets

Windows Server 2019 Failover Clustering introduced the management concept of cluster sets. A cluster set is a collection of failover clusters that can be managed as a single logical entity. It allows VMs to move seamlessly between clusters, which lets organizations create a highly available infrastructure with almost limitless capacity. To simplify management of the cluster set, a single namespace can be used to access it. This namespace can run on a SOFS for continual availability, and clients are automatically redirected to the appropriate location within the cluster set.

The following figure shows two Failover Clusters within a cluster set, both of which are using a SOFS. Additionally, a third independent SOFS is deployed to provide highly-available access to the cluster set itself.


Figure 6 – A Scale-Out File Server with Cluster Sets
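A minimal sketch of wiring this together with the Windows Server 2019 cluster set cmdlets is shown below, assuming a management cluster named CSMASTER and two member clusters; every name, including the namespace root, is a placeholder rather than a value from this article.

```powershell
# Minimal sketch, assuming the Windows Server 2019 cluster set cmdlets; all names are examples.
# Create the cluster set and its unified SOFS namespace on the management cluster.
New-ClusterSet -Name "CSMASTER" -NamespaceRoot "SOFS-CLUSTERSET" -CimSession "CSMASTER-N1"

# Join member clusters, each exposing its own Infrastructure SOFS to the set.
Add-ClusterSetMember -ClusterName "CLUSTER1" -CimSession "CSMASTER" -InfraSOFSName "SOFS-CLUSTER1"
Add-ClusterSetMember -ClusterName "CLUSTER2" -CimSession "CSMASTER" -InfraSOFSName "SOFS-CLUSTER2"
```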

Guest Clustering with SOFS

Acquiring dedicated physical hardware is not required for the SOFS, as it can be fully virtualized. When a cluster runs inside VMs instead of on physical hardware, this is known as guest clustering. However, you should not run a SOFS inside a VM for which it provides the namespace, as the cluster could find itself unable to start that VM because it cannot access the VM’s own VHD.

Microsoft Azure with SOFS

Microsoft Azure allows you to deploy virtualized guest clusters in the public cloud. You will need at least two storage accounts, each with a matching number and size of disks. It is recommended to use at least DS-series VMs with premium storage. Since this cluster is already running in Azure, it can also use a cloud witness for its quorum.
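Configuring that cloud witness is a one-liner once you have an Azure storage account; the account name and key below are placeholders for your own values.

```powershell
# Minimal sketch: point the guest cluster's quorum at an Azure cloud witness.
# Replace <StorageAccountName> and <AccessKey> with values from your storage account.
Set-ClusterQuorum -CloudWitness -AccountName "<StorageAccountName>" -AccessKey "<AccessKey>"
```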

You can even download an Azure VM template which comes as a pre-configured two-node Windows Server 2016 Storage Spaces Direct (S2D) Scale-Out File Server (SOFS) cluster.

System Center Virtual Machine Manager (VMM) with SOFS

Since the Scale-Out File Server has become an important role in virtualized infrastructures, System Center Virtual Machine Manager (VMM) has tightly integrated it into their fabric management capabilities.

Deployment

VMM makes it fairly easy to deploy SOFS throughout your infrastructure on bare-metal servers or Hyper-V hosts. You can bring existing file servers under management or deploy new SOFS instances throughout your fabric.

When VMM is used to create a cluster set, an Infrastructure SOFS is automatically created on the Management Server (if it does not already exist). This file share will host the single shared namespace used by the cluster set.

Configuration

Many of the foundational components of a Scale-Out File Server can be deployed and managed by VMM. This includes the ability to use physical disks to create storage pools that can host SOFS file shares. The SOFS file shares themselves can also be created through VMM. If you are also using Storage Spaces Direct (S2D), you will need a file share witness, which can use the SOFS to host the witness share. Quality of Service (QoS) can also be adjusted to control network traffic speed to resources or VHDs running on the SOFS shares.

Management Cluster

In large virtualized environments, it is recommended to have a dedicated management cluster for System Center VMM. The virtualization management console, database, and services are made highly available so that they can continually monitor the environment. The management cluster can use a unified storage namespace that runs on a Scale-Out File Server, granting additional resiliency to the storage and its clients.

Library Share

VMM uses a library to store files which may be deployed multiple times, such as VHDs or image files. The library uses an SMB file share as a common namespace to access those resources, which can be made highly available using a SOFS. The data in the library itself cannot be stored on a SOFS; it must instead reside on a traditional clustered file server.

Update Management

Cluster patch management is one of the most tedious tasks which administrators face as it is repetitive and time-consuming. VMM has automated this process through serially updating one node at a time while keeping the other workloads online. SOFS clusters can be automatically patched using VMM.

Rolling Upgrade

A rolling upgrade refers to the process in which infrastructure servers are gradually updated to the latest version of Windows Server. Most of the infrastructure servers managed by VMM can be included in the rolling upgrade cycle, which functions like the Update Management feature. Different nodes in the SOFS cluster are sequentially placed into maintenance mode (so the workloads are drained), updated, patched, tested and reconnected to the cluster. Workloads gradually migrate to the newly installed nodes while the older nodes wait to be updated. Eventually all the SOFS cluster nodes are updated to the latest version of Windows Server.

Internet Information Services (IIS) Web Server with SOFS

Everything in this article so far has referenced SOFS in the context of being used for Hyper-V VMs. SOFS is gradually being adopted by other infrastructure services to provide high-availability to their critical components which use SMB file shares.

The Internet Information Services (IIS) Web Server is used for hosting websites. To distribute network traffic, multiple IIS servers are usually deployed. If they have any shared configuration information or data, this can be stored on the Scale-Out File Server.

Remote Desktop Services (RDS) with SOFS

The Remote Desktop Services (RDS) role has a popular feature known as user profile disks (UPDs) which allows users to have a dedicated data disk stored on a file server. The file share path can be placed on a SOFS to make access to that share highly-available.

SQL Server with SOFS

Certain SQL Server roles can use SOFS to make their SMB connections highly available. Starting with SQL Server 2012, the SMB file server storage option is offered for SQL Server databases (including Master, MSDB, Model and TempDB) and the database engine. The SQL Server itself can be standalone or deployed as a failover cluster installation (FCI).

Deploying a SOFS Cluster & Next Steps

Now that you understand the planning considerations, you are ready to deploy the SOFS. From Failover Cluster Manager, launch the High Availability Wizard and select the File Server role. Next, you will select the File Server Type. Traditional clustered file servers use the “File Server for general use” option. For SOFS, select “Scale-Out File Server for application data.”

The interface is shown in the following figure and described as, “Use this option to provide storage for server applications or virtual machines that leave files open for extended periods of time. Scale-Out File Server client connections are distributed across nodes in the cluster for better throughput. This option supports the SMB protocol. It does not support the NFS protocol, Data Deduplication, DFS Replication, or File Server Resource Manager.”


Figure 7 – Installing a Scale-Out File Server (SOFS)
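If you prefer PowerShell to the wizard, the same role can be added with a single cmdlet; the role and cluster names below are illustrative assumptions.

```powershell
# Minimal sketch: create the SOFS role without the High Availability Wizard.
Add-ClusterScaleOutFileServerRole -Name "SOFS01" -Cluster "HVCLUSTER01"

# Confirm the role came online and see which node currently owns it.
Get-ClusterGroup -Name "SOFS01" | Select-Object Name, OwnerNode, State
```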

Now you should have a fundamental understanding of the use and deployment options for the SOFS. For additional information about deploying a Scale-Out File Server (SOFS), please visit https://docs.microsoft.com/en-us/windows-server/failover-clustering/sofs-overview. If there’s anything you want to ask about SOFS, let me know in the comments below and I’ll get back to you!

Author: Symon Perriman

GumGum uses machine learning annotation service Figure Eight

GumGum developed computer vision and NLP technology to help clients better advertise to their users.

The Santa Monica, Calif.-based vendor, founded in 2008, automatically scans video, audio, images and text on webpages, identifying and extracting key elements. It then uses that data to help advertisers place relevant ads on the webpages.

To power its machine learning and computer vision technology, GumGum needs a lot of training data. To meet its data needs, about two years ago the company turned to Figure Eight, a crowdsourcing machine learning annotation vendor.

Acquired by Appen, another crowdsourcing machine learning annotation company, in April 2019, Figure Eight provides training data to a variety of similar vendors. Figure Eight relies on a network of contributors to annotate huge amounts of data.

The contributors are trained, although they are mostly not data scientists, and are screened for security purposes. This large contributor network enables Figure Eight to produce training data at scale and to continue reviewing annotated data while a job is running.

Getting training data

Before using Figure Eight, GumGum employed full-time staff for machine learning annotation, said Erica Nishimura, data curator at GumGum. That worked, but it was costly and, at times, slow. With large amounts of data, it could take months to get usable training data. In addition, the staff could only work in English, but GumGum has clients internationally.

Figure Eight uses a contributor network to provide training data for companies like GumGum.

Figure Eight, meanwhile, works in a number of languages. At the time, Nishimura said, it was one of the only companies that worked in Japanese. As GumGum has a thriving Japanese division, the language support was one of the main reasons it chose Figure Eight.

Scalability, said Lane Schechter, product manager at GumGum, was the other reason GumGum chose Figure Eight.

Working with Figure Eight has increased GumGum’s data capacity tenfold, Schechter said. Also, instead of taking months to get completed machine learning annotation, it now happens in about a week.

Problems

Still, that’s not to say that working with Figure Eight has been without its share of problems.

One of the biggest challenges has been communicating directly with Figure Eight’s crowdsource contributors, Nishimura said.

At times, the contributors have had trouble understanding exactly what GumGum wants, but, because there is no way to directly interact with the contributors, Nishimura said it is hard to know if the contributors are having problems, or what they might be.

The best GumGum can do is put in a message, Nishimura said, but there is no way to alert each contributor to the message. Besides, a single message isn’t the same as having a conversation, she added.

While she was unsure if other similar crowdsourcing machine learning annotation companies have a better way to communicate with contributors, Nishimura said some other companies have their own checkers, who do spot-checks on completed annotations.

“It’s one more step to ensure quality,” Nishimura said. But, she added, the prices of those services are generally higher than Figure Eight’s.


Citrix’s performance analytics service gets granular

Citrix introduced an analytics service to help IT professionals better identify the cause of slow application performance within its Virtual Apps and Desktops platform.

The company announced the general availability of the service, called Citrix Analytics for Performance, at its Citrix Summit, an event for the company’s business partners, in Orlando on Monday. The service carries an additional cost.

Steve Wilson, the company’s vice president of product for workspace ecosystem and analytics, said many IT admins must deal with performance problems as part of the nature of distributed applications. When they receive a call from workers complaining about performance, he said, it’s hard to determine the root cause — be it a capacity issue, a network problem or an issue with the employee’s device.

Performance, he said, is a frequent pain point for employees, especially remote and international workers.

“There are huge challenges that, from a performance perspective, are really hard to understand,” he said, adding that the tools available to IT professionals have not been ideal in identifying issues. “It’s all been very technical, very down in the weeds … it’s been hard to understand what [users] are seeing and how to make that actionable.”

Part of the problem, according to Wilson, is that traditional performance-measuring tools focus on server infrastructure. Keeping track of such metrics is important, he said, but they do not tell the whole story.

“Often, what [IT professionals] got was the aggregate view; it wasn’t personalized,” he said.

When the aggregate performance of the IT infrastructure is “good,” Wilson said, that could mean that half an organization’s users are seeing good performance, a quarter are seeing great performance, but a quarter are experiencing poor performance.

Steve Wilson, vice president of product for workspace ecosystem and analytics, Citrix

With its performance analytics service, Citrix is offering a more granular picture of performance by providing metrics on individual employees, beyond those of the company as a whole. That measurement, which Citrix calls a user experience or UX score, evaluates such factors as an employee’s machine performance, user logon time, network latency and network stability.

“With this tool, as a system administrator, you can come in and see the entire population,” Wilson said. “It starts with the top-level experience score, but you can very quickly break that down [to personal performance].”

Wilson said IT admins who had tested the product said this information helped them address performance issues more expeditiously.

“The feedback we’ve gotten is that they’ve been able to very quickly get to root causes,” he said. “They’ve been able to drill down in a way that’s easy to understand.”

A proactive approach

Eric Klein, analyst, VDC Research Group

Eric Klein, analyst at VDC Research Group Inc., said the service represents a more proactive approach to performance problems, as opposed to identifying issues through remote access of an employee’s computer.

“If something starts to degrade from a performance perspective — like an app not behaving or slowing down — you can identify problems before users become frustrated,” he said.

Mark Bowker, senior analyst, Enterprise Strategy Group

Klein said IT admins would likely welcome any tool that, like this one, could “give time back” to them.

“IT is always being asked to do more with less, though budgets have slowly been growing over the past few years,” he said. “[Administrators] are always looking for tools that will not only automate processes but save time.”

Enterprise Strategy Group senior analyst Mark Bowker said in a press release from Citrix announcing the news that companies must examine user experience to ensure they provide employees with secure and consistent access to needed applications.


“Key to providing this seamless experience is having continuous visibility into network systems and applications to quickly spot and mitigate issues before they affect productivity,” he said in the release.

Wilson said the performance analytics service was the product of Citrix’s push to the cloud during the past few years. One of the early benefits of that process, he said, has been in the analytics field; the company has been able to apply machine learning to the data it has garnered and derive insights from it.

“We do see a broad opportunity around analytics,” he said. “That’s something you’ll see more and more of from us.”


January Patch Tuesday fixes cryptography bug found by NSA

Microsoft closed a flaw in a key cryptographic feature it discovered with help from the National Security Agency as part of the January Patch Tuesday security updates.

Microsoft issued fixes for Windows, Internet Explorer, Office, several .NET technologies, OneDrive for Android and Microsoft Dynamics for January Patch Tuesday to close 49 unique vulnerabilities, with eight rated as critical. Microsoft said there were no exploited or publicly disclosed vulnerabilities. This month’s updates were the last free security fixes for Windows 7 and Windows Server 2008/2008 R2 as those operating systems left extended support.

Windows cryptographic library flaw fixed

The bug that drew the most attention from various security researchers on January Patch Tuesday is a spoofing vulnerability (CVE-2020-0601), rated important, that affects Windows 10 and Windows Server 2016 and 2019 systems. The NSA uncovered a flaw in the crypt32.dll file that handles certificate and cryptographic messaging functions in the Windows CryptoAPI. The bug would allow an attacker to produce a malicious program that appears to have an authenticated signature from a trusted source.

A successful exploit using a spoofed certificate could be used to launch several types of attacks, such as delivering a malicious file that appears trustworthy, performing man-in-the-middle campaigns and decoding sensitive data. An unpatched system could be particularly susceptible because the malicious file could appear legitimate and even skirt Microsoft’s AppLocker protection.

“The guidance from us would be, regardless of Microsoft’s ‘important’ classification, to treat this as a priority one and get the patch pushed out,” said Chris Goettl, director of product management and security at Ivanti, a security and IT management vendor based in South Jordan, Utah.

Goettl noted that companies might not be directly attacked with exploits that use the CryptoAPI bug, but could be at risk from attacks on the back-end system of a vendor or another outside entity, such as when attackers embedded the NotPetya ransomware in tax software to slip past defenses.

Chris Goettl, director of product management and security, Ivanti

“It’s not a very common occurrence because good code-signing certificates can establish a level of trust, while this [vulnerability] invalidates that trust and allows an attacker to try and spoof that. It introduces a lot of potential for risk, so we recommend people close [CVE-2020-0601] down as quickly as possible,” he said.

Bugs in Windows remote connection technology patched

January Patch Tuesday also closed three vulnerabilities related to Remote Desktop Services rated critical.

CVE-2020-0609 and CVE-2020-0610 are both remote code execution vulnerabilities in the Remote Desktop Gateway that affect server operating systems on Windows Server 2012 and newer. Microsoft said both CVEs can be exploited pre-authentication without any interaction from the user. Attackers who use the exploit can run arbitrary code on the target system, then perform other tasks, including install programs, delete data or add a new account with full user rights.

CVE-2020-0611 is a remote code execution vulnerability in the Remote Desktop Client that affects Windows 7 and newer on desktops, and Windows Server 2008 R2 and newer on server systems, when an attacker tricks a user into connecting to a malicious server. The threat actor could then perform a range of actions, such as install programs, view or change data, or make a new account with full user rights.

Legacy operating systems reach end-of-life

January Patch Tuesday marks the last time Microsoft will provide security updates and other fixes for the Windows 7, Windows Server 2008 and 2008 R2 operating systems unless customers pay to enter the Extended Security Updates (ESU) program. Companies must also have Software Assurance coverage or subscription licenses to purchase ESU keys for the server operating systems. Users will need to add the ESU key to systems they want to keep protected. ESU for those systems will end in three years.

Companies that plan to keep these legacy operating systems and have signed up for the ESU program should install the servicing stack updates Microsoft released for all three operating systems on January Patch Tuesday, Goettl said. Administrators also need to deploy and activate the ESU key using Microsoft’s instructions.

ESU is an expensive option. For on-premises server workloads, organizations will need either Software Assurance or a subscription license at a cost of about 75% of the license cost each year.  

ESU does not add new or updated features, just security fixes.

For organizations that plan to keep these operating systems running without the safety net of ESU, there are a few ways to minimize the risk around those workloads, including adding more security layers and removing the workload from a direct connection to the internet, Goettl said.

“If there’s an application or something that needs to run on Windows 7, then virtualize that environment. Get the users on the Windows 10 platform and have them connect into the Windows 7 environment to access that critical app. You will spend more money doing it that way, but you will reduce your risk significantly,” he said.


Developing a quantum-ready global workforce – Microsoft Quantum

At Microsoft Quantum, our ambition is to help solve some of the world’s most complex challenges through the world’s most scalable quantum system. Recently, we introduced Azure Quantum to unite a diverse and growing quantum community and accelerate the impact of this technology. Whether it’s algorithmic innovation that improves healthcare outcomes or breakthroughs in cryogenic engineering that enable more sustainable systems design, these recent advancements across the stack are bringing the promise of quantum to our world, right now.

In December 2018, the National Quantum Initiative Act was signed into law in the United States – an important milestone for investing the resources needed to continue advancing the field. As recognized by the Act, education on quantum information science and engineering needs to be an area of explicit focus, as the shortage of quantum computing talent worldwide poses a significant challenge to accelerating innovation and fully realizing the impact quantum can have on our world.

Leaders across both public and private sectors need to continue working together to develop a global workforce of quantum engineers, researchers, computer and materials scientists, and other industry experts who will be able to carry quantum computing into the future. Microsoft has been collaborating with academic institutions and industrial entities around the world to grow this quantum generation and prepare the workforce for this next technological revolution.

Empowering the quantum generation through education

Earlier this year, Microsoft partnered with the University of Washington to teach an introductory course on quantum computing and programming. The course, led by Dr. Krysta Svore, General Manager of Microsoft Quantum Systems, focused on the practical implementation of quantum algorithms.

Students were first introduced to quantum programming with Q# through a series of coding exercises followed by programming assignments. For their final project, student teams developed quantum solutions for specified problems – everything from entanglement games and key distribution protocols to quantum chemistry and a Bitcoin mining algorithm. Several students from this undergraduate course joined the Microsoft Quantum team for a summer internship, further developing their new skillsets and delivering quantum impact to organizations around the world.

On the heels of this hands-on teaching engagement, Microsoft has established curriculum partnerships with more than 10 institutions around the world to continue closing the skills gap in quantum development and quantum algorithm design. This curriculum is circling the globe, from the University of California, Los Angeles (UCLA) to the Indian Institute of Technology (IIT) in Roorkee and Hyderabad, India.

Partner universities leverage Q#, Microsoft’s quantum programming language and associated Quantum Development Kit, to teach the principles of quantum computing to the next generation of computer engineers and scientists.

“The course material extended to us by Microsoft is concise and challenging. It covers the necessary mathematical foundations of Quantum Computing. Simulation on Q# is quite straightforward and easy to interpret. Collaboration with Microsoft has indeed captivated students of IIT Roorkee to get deeper insights into Quantum Technology.”

– Professor Ajay Wasan of IIT Roorkee, Department of Physics

Q# integrates with familiar tools like Visual Studio and Python, making it a very approachable entry point for undergraduate and graduate students alike.

“I integrated Microsoft’s Q# into my UCLA graduate course called Quantum Programming. My students found many aspects of Q# easy to learn and used the language to program and run four quantum algorithms. Thus, the curriculum partnership with Microsoft [has] helped me teach quantum computing to computer science students successfully.”

– Professor Jens Palsberg of UCLA, Computer Science Department

Microsoft has also partnered with Brilliant to bring quantum computing to students and professionals around the world via a self-serve e-learning environment.

A GIF of Microsoft’s Brilliant quantum curriculum

This interactive Quantum Computing course introduces students to quantum principles and uses Q# to help people learn to build quantum algorithms, simulating a quantum environment in their browsers. In the last six months, more than 40,000 people have interacted with the course and started building their own quantum solutions.

Accelerating quantum innovation through cross-industry collaboration

Recently, Microsoft joined the Quantum Economic Development Consortium (QED-C), which aims to enable and grow the United States quantum industry.

QED-C was established with support from the National Institute of Standards and Technology (NIST) as part of the federal strategy for advancing quantum information science. Through the QED-C, Microsoft partners with a diverse set of business and academic leaders to identify and address gaps in technology, standards, and workforce readiness facing the quantum industry.

We look forward to continuing our academic and cross-industry collaborations in developing a quantum workforce to tackle real-world scenarios and bring this revolutionary technology to fruition.

Request to be an early adopter of Azure Quantum and incorporate Q# and the QDK in your quantum curriculum.

Are you currently a student interested in joining Microsoft Quantum as an intern? Apply to our open research intern positions today!


Author: Microsoft News Center