Tag Archives: Stack

Threat Stack Application Security Monitoring adds Python support

Threat Stack has announced Python support for its Threat Stack Application Security Monitoring product. The update comes with no additional cost as part of the Threat Stack Cloud Security Platform.

With Python support for Application Security Monitoring, Threat Stack customers who use Python with Django and Flask frameworks can ensure security in the software development lifecycle with risk identification of both third-party and native code, according to Tim Buntel, vice president of application security products at Threat Stack.

In addition, the platform also provides built-in capabilities to help developers learn secure coding practices and real-time attack blocking, according to the company.
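The article does not describe Threat Stack’s integration mechanics, but the general pattern for this class of tooling is request-level instrumentation. The sketch below is a generic illustration of hooking a Flask application’s request lifecycle to emit security-relevant telemetry; the record_event helper and its fields are hypothetical and are not Threat Stack’s actual API.

```python
# Generic illustration of request-level instrumentation in Flask.
# record_event() is a hypothetical sink; a real agent (Threat Stack's or
# any other) ships its own integration rather than code like this.
import time
from flask import Flask, request

app = Flask(__name__)

def record_event(event: dict) -> None:
    """Hypothetical sink; a real agent would forward this to its backend."""
    print(event)

@app.before_request
def start_timer():
    request.environ["monitor.start"] = time.time()

@app.after_request
def report(response):
    record_event({
        "path": request.path,
        "method": request.method,
        "status": response.status_code,
        "duration_ms": round((time.time() - request.environ["monitor.start"]) * 1000, 2),
    })
    return response

@app.route("/")
def index():
    return "ok"
```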

“Today’s cloud-native applications are comprised of disparate components, including containers, virtual machines and scripts, including those written in Python, that serve as the connective tissue between these elements,” said Doug Cahill, senior analyst and group Practice Director, Cybersecurity at Enterprise Strategy Group. Hence, the lack of support for any one layer of a stack means a lack of visibility and a vulnerability an attacker could exploit.

Application Security Monitoring is a recent addition to Threat Stack Cloud Security Platform. Introduced last June, the platform is aimed at bringing visibility and protection to cloud-based architecture and applications. Threat Stack Cloud Security Platform touts the ability to identify and block attacks such as cross-site scripting (XSS) and SQL injection by putting the application in context with the rest of the stack. It also allows users to move from the application to the container or the host, where it is deployed with one click when an attack happens, according to the company.
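To make the SQL injection risk concrete, here is a minimal, generic Python sketch of the vulnerable pattern such tools look for, alongside the parameterized alternative. It uses only the standard-library sqlite3 module and is not tied to Threat Stack’s detection logic.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: string formatting lets the input rewrite the query itself.
vulnerable = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())  # returns rows it never should

# Safer: a parameterized query treats the input strictly as data.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns []
```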

“[Application Security Monitoring] … provides customers with full stack security observability by correlating security telemetry from the cloud management console, host, containers and applications in a single, unified platform,” Buntel said.

To achieve full stack security and insights from the cloud management console, host, containers, orchestration and applications, customers can combine Threat Stack Application Security Monitoring with the rest of the Threat Stack Cloud Security Platform, according to the company.

Cahill said customers should look for coverage of the technology stack as well as the lifecycle when looking to secure cloud-native applications, because such full stack and lifecycle support allows for threat detection and prevention capabilities “from the code level down to the virtual machine or container to be implemented in both pre-deployment stages and runtime.”

“Cloud security platforms, which integrate runtime application self-protection functionality with cloud workload protection platforms to provide full-stack and full lifecycle visibility and control, are just now being offered by a handful of cybersecurity vendors, including Threat Stack,” he added.

Threat Stack Application Security Monitoring for Python is available as of Wednesday.

Threat Stack competitors include CloudPassage, Dome9 and Sophos. CloudPassage Halo is a security automation platform delivering visibility, protection and compliance monitoring for cybersecurity risks; the platform also covers risks in Amazon Web Services and Azure deployments, according to the company. CloudGuard Dome9 is a software platform for public cloud security and compliance orchestration; the platform helps customers assess their security posture, detect misconfigurations and enforce security best practices to prevent data loss, according to the company. Sophos Intercept X enables organizations to detect blended threats that merge automation and human hacking skills, according to the company.


Do you know the difference between the Microsoft HCI programs?

While similar in name, Microsoft’s Azure Stack and Azure Stack HCI products are substantially different product offerings designed for different use cases.

Azure Stack brings Azure cloud capabilities into the data center for organizations that want to build and run cloud applications on localized resources. Azure Stack HCI operates on the same Hyper-V-based, software-driven compute, storage and networking technologies but serves a fundamentally different purpose. This new Microsoft HCI offering is a hyper-converged infrastructure product that combines vendor-specific hardware with Windows Server 2019 Datacenter edition and management tools to provide a highly integrated and optimized computing platform for local VM workloads.

Azure Stack gives users a way to employ Azure VMs for Windows and Linux, Azure IoT and Event Hubs, Azure Marketplace, Docker containers, Azure Key Vault, Azure Resource Manager, Azure Web Apps and Functions, and Azure administrative tools locally. This functionality gives an organization the benefits of Azure cloud operation, while also satisfying regulatory requirements that require workloads to run in the data center.

Azure Stack HCI offers optional connections to an array of Azure cloud services, including Azure Site Recovery, Azure Monitor, Azure Backup, Azure Update Management, Azure File Sync and Azure Network Adapter. However, these workloads remain in the Azure cloud. Also, there is no way to convert this Microsoft HCI product into an Azure Stack deployment.


Windows Server Software-Defined products still exist

Azure Stack HCI evolved from Microsoft’s Windows Server Software-Defined (WSSD) HCI offering. The WSSD program still exists, but the main difference on the software side is that hardware in the WSSD program runs the Windows Server 2016 OS rather than Windows Server 2019.

WSSD HCI is similar to Azure Stack HCI with a foundation of vendor-specific hardware, the inclusion of Windows Server technologies — Hyper-V, Storage Spaces Direct and software-defined networking — and Windows Admin Center for systems management. Azure Stack HCI expands on WSSD through improvements to Windows Server 2019 and tighter integration with Azure services.


Microsoft challenges Amazon with Dynamics 365 Commerce

Microsoft filled a major gap in its customer experience stack with the Dynamics 365 Commerce online sales platform, giving customers that own physical stores more technology to drive bottom-line revenues. The e-commerce platform is joined by another new app, the Dynamics 365 Connected Store, which combines data collected online with data collected at brick-and-mortar stores.

The idea is not only to enable online sales for traditional retailers, but to also help customers continue their online shopping experiences when they set foot inside a store location, said Alysa Taylor, corporate vice president for business applications and global industry at Microsoft, in a blog post.

Together with other new AI features and data tools added to existing Dynamics 365 applications, Microsoft is giving retailers a strong alternative to Amazon’s platform — but more importantly, it’s challenging integrated CX stacks from Salesforce and Oracle, said Forrester analyst Kate Leggett.

“You can’t support the customer through their end-to-end journey without an e-commerce pillar,” said Leggett, who added that Dynamics 365 Commerce might not be a great leap forward as an e-commerce platform, but it catches Microsoft up to the pack. “It was a real hole in Microsoft’s portfolio.”

Microsoft is focusing its e-commerce platform on B2C retailers for now, Leggett said. Technology vendors sometimes have separate e-commerce platforms for B2B and B2C customers, but Microsoft said it plans to build out the B2C side first and add B2B-centric features later.

Dynamics 365 Connected Store adds data insights

Dynamics 365 Commerce paired with Connected Store creates a platform for AI and machine learning for behavioral data analysis that can trace customer journeys from online research to their movements through a physical store as they shop. Moreover, Dynamics 365 Connected Store helps store employees personalize their interactions with individual customers by showing them, for example, what the customer was looking at online before they came in.


Connected Store’s data tools can help optimize store operations on a day-to-day basis by, for example, summoning clerks via phone notifications to help check out customers during busy times. It also analyzes video and inventory data to report on longer-term buying patterns to promote inventory and merchandising efficiencies within a store or region.

“It’s about real-time insights, connected data and analytics — having that data available to deliver outcomes you need,” Leggett said.

Also previewed by Microsoft were related new features for existing applications, including Dynamics 365 Customer Insights, which aggregates IoT data from goods such as connected kitchen appliances that contain sensors transmitting data back to the manufacturer. Another was a set of tools within Dynamics 365 Virtual Agent for Customer Service to make Microsoft chatbots easier to customize and deploy.

Dynamics 365 Connected Store currently is in private preview, while Dynamics 365 Commerce is in public preview. A Microsoft spokesperson said general availability dates and pricing information would be revealed in the “coming months.”


Oracle and VMware forge new IaaS cloud partnership

SAN FRANCISCO — VMware’s virtualization stack will be made available on Oracle’s IaaS, in a partnership that underscores changing currents in the public cloud market and represents a sharp strategic shift for Oracle.

Under the pact, enterprises will be able to deploy certified VMware software on Oracle Cloud Infrastructure (OCI), the company’s second-generation IaaS. Oracle is now a member of the VMware Cloud Provider Program and will sell VMware’s Cloud Foundation stack for software-defined data centers, the companies said on the opening day of Oracle’s OpenWorld conference.

Oracle plans to give customers full root access to physical servers on OCI, and they can use VMware’s vCenter product to manage on-premises and OCI-based environments through a single tool.

“The VMware you’re running on-premises, you can lift and shift it to the Oracle Cloud,” executive chairman and CTO Larry Ellison said during a keynote. “You really control version management operations, upgrade time of the VMware stack, making it easy for you to migrate — if that’s what you want to do — into the cloud with virtually no change.”

The companies have also reached a mutual agreement around support, which Oracle characterized with the following statement: “[C]ustomers will have access to Oracle technical support for Oracle products running on VMware environments. … Oracle has agreed to support joint customers with active support contracts running supported versions of Oracle products in Oracle supported computing environments.”

It’s worth noting the careful language of that statement, given Oracle and VMware’s history. While Oracle has become more open to supporting its products on VMware environments, it has yet to certify any for VMware.

Moreover, many customers have found Oracle’s licensing policy for deploying its products on VMware devilishly complex. In fact, a cottage industry has emerged around advisory services meant to help customers keep compliant with Oracle and VMware.

Nothing has changed with regard to Oracle’s existing processor license policy, said Vinay Kumar, vice president of product management for OCI. But the VMware software made available on OCI will be sold through bundled, Oracle-sold SKUs that encompass both software and physical infrastructure. Initially, one SKU based on X7 bare-metal instances will be available, according to Kumar.

Oracle and VMware have been working on the partnership for the past nine months, he added. The first SKU is expected to be available within the next six months. Kumar declined to provide details on pricing.

Oracle, VMware relations warm in cloudier days

“It seems like there is a thaw between Oracle and VMware,” said Gary Chen, an analyst at IDC. The companies have a huge overlap in terms of customers who use their software in tandem, and want more deployment options, he added. “Oracle customers are stuck on Oracle,” he said. “They have to make Oracle work in the cloud.”


Meanwhile, VMware has already struck cloud-related partnerships with AWS, IBM, Microsoft and Google, leaving Oracle little choice but to follow. Oracle has also largely ceded the general-purpose IaaS market to those competitors, and has positioned OCI for more specialized tasks as well as core enterprise application workloads, which often run on VMware today.

Massive amounts of on-premises enterprise workloads run on VMware, but as companies look to port them to the cloud, they want to do it in the fastest, easiest way possible, said Holger Mueller, an analyst at Constellation Research in Cupertino, Calif.

The biggest cost of lift-and-shift deployments to the cloud involves revalidation and testing in the new environment, Mueller added.


But at this point, many enterprises have automated test scripts in place, or even feel comfortable not retesting VMware workloads, according to Mueller. “So the leap of faith involved with deploying a VMware VM on a server in the corporate data center or in a public cloud IaaS is the same,” he said.

In the near term, most customers of the new VMware-OCI service will move Oracle database workloads over, but it will be Oracle’s job to convince them OCI is a good fit for other VMware workloads, Mueller added.


Storage Spaces Direct Hardware Requirements and Azure Stack HCI

This article will run down the hardware requirements for Storage Spaces Direct. Since Storage Spaces found its way into Windows Server with Windows Server 2012, much has changed in Microsoft’s strategy regarding supported hardware.

Read More About Storage Spaces Direct

What is Storage Spaces Direct?

S2D Technologies in Focus

3 Important Things You Should Know About Storage Spaces Direct

In its first attempt, Microsoft gave customers a wide range of options to design the hardware part of the solution themselves. While this enabled customers to build a Storage Spaces cluster out of scrap or desktop equipment, to be honest, we ended up with many non-functional clusters during that period.

After that phase, and with the release of Windows Server 2016, Microsoft decided to support only validated system configurations from ODMs or OEMs and no longer support self-built systems. For good reason!

Storage Spaces Direct Hardware Requirements

Let’s get into the specific hardware requirements.

First off, every driver, device or component used for Storage Spaces Direct needs to be “Software-Defined Datacenter” compatible and also be supported for Windows Server 2016 by Microsoft.

Storage Spaces Direct Compatibility

Source: https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/storage-spaces-direct-hardware-requirements

Servers

  • Minimum of 2 servers, maximum of 16 servers
  • Recommended that all servers be the same manufacturer and model

CPU

  • Intel Nehalem or later compatible processor; or
  • AMD EPYC or later compatible processor

Memory

  • Memory for Windows Server, VMs, and other apps or workloads; plus
  • 4 GB of RAM per terabyte (TB) of cache drive capacity on each server, for Storage Spaces Direct metadata

Boot

  • Any boot device supported by Windows Server, which now includes SATADOM
  • RAID 1 mirror is not required, but is supported for boot
  • Recommended: 200 GB minimum size

Networking

Minimum (for small scale 2-3 node)

  • 10 Gbps network interface
  • Direct-connect (switchless) is supported with 2-nodes

Recommended (for high performance, at scale, or deployments of 4+ nodes)

  • NICs that are remote-direct memory access (RDMA) capable, iWARP (recommended) or RoCE
  • Two or more NICs for redundancy and performance
  • 25 Gbps network interface or higher

Drives

Storage Spaces Direct works with direct-attached SATA, SAS, or NVMe drives that are physically attached to just one server each. For more help choosing drives, see the Choosing drives topic.

Source: https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/storage-spaces-direct-hardware-requirements

You can find more hardware information at Microsoft Docs. You can also get more details about supported disk configurations etc. at that site.
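As a quick worked example of the memory rule listed above (4 GB of RAM per terabyte of cache drive capacity per server), the following sketch estimates per-node memory for a sample configuration; the drive counts, drive sizes and workload reservation are hypothetical figures for illustration only.

```python
# Rough per-node memory estimate for Storage Spaces Direct,
# using the rule of 4 GB RAM per TB of cache drive capacity per server.
# All figures below are hypothetical examples.

cache_drives_per_node = 4
cache_drive_size_tb = 1.6          # e.g. 1.6 TB NVMe cache devices
base_os_and_workload_gb = 128      # RAM reserved for Windows Server and VMs

s2d_metadata_gb = 4 * cache_drives_per_node * cache_drive_size_tb
total_per_node_gb = base_os_and_workload_gb + s2d_metadata_gb

print(f"S2D metadata reservation: {s2d_metadata_gb:.1f} GB per node")
print(f"Suggested minimum RAM:    {total_per_node_gb:.1f} GB per node")
```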

Azure Stack HCI (Hyper-Converged Infrastructure)

To make it easier for the customer to choose when it comes to vendors and systems, Microsoft now combines supported Storage Spaces and Hyperconverged Systems under the label Azure Stack HCI. Despite the name Azure in the label, you will not need to buy a full-blown Azure Stack deployment in order to have Storage Spaces. However, with Windows Server 2019, Microsoft has made it much easier to find supported appliances for Storage Spaces and Hyperconverged deployments with that label.

When following Microsoft’s guidance, Azure Stack HCI is the starting point for the next generation Software-Defined Datacenter, where Azure (the public cloud) is at the end of the road.

Azure Stack HCI

Source: https://azure.microsoft.com/mediahandler/files/resourcefiles/azure-stack-hybrid-cloud-your-way-datasheet/Azure%20Stack%20hybrid%20cloud%20your%20way.pdf

To make it easier to find the right vendor and system for you, Microsoft has published the Azure Stack HCI catalog.

Here you can filter effectively and search against your organization’s requirements. This tool makes it super easy to see which vendors can offer the hardware you are targeting.

For example, I was looking for a system with the following requirements for a customer branch office:

  • Regionally available in Europe
  • 2-Node Optimized
  • RDMA RoCE

As you can see in the screenshot, I got a result of 14 possible systems. Now I can contact the vendors for additional information and sizing.

Azure Stack HCI Catalog

When it comes to sizing, you should work together with your hardware vendor to get the best possible configuration. Disk sizes, types, etc. are still different from vendor to vendor and system to system.

To help you validate configurations, or to get a first idea of what you need, Microsoft has published a guide in its documentation named Planning volumes in Storage Spaces Direct. Additionally, Cosmos Darwin, one of the program managers for S2D, has published a small calculator.
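As a rough companion to that guidance, the sketch below estimates usable capacity for the common mirror resiliency types from raw pool capacity. It ignores reserve capacity, metadata and per-vendor overhead, so treat it as a first approximation only, not a replacement for the official planning guide or calculator; the pool size used is a made-up example.

```python
# First-order usable-capacity estimate for common S2D resiliency types.
# Ignores reserve capacity, metadata and vendor-specific overhead.

def usable_capacity_tb(raw_tb: float, resiliency: str) -> float:
    efficiency = {
        "two_way_mirror": 1 / 2,    # 2 copies of every block
        "three_way_mirror": 1 / 3,  # 3 copies of every block
    }[resiliency]
    return raw_tb * efficiency

raw_pool_tb = 4 * 8 * 3.84  # hypothetical: 4 nodes x 8 SSDs x 3.84 TB each
for r in ("two_way_mirror", "three_way_mirror"):
    print(f"{r}: {usable_capacity_tb(raw_pool_tb, r):.1f} TB usable "
          f"from {raw_pool_tb:.1f} TB raw")
```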

Wrap Up

This blog post should give you a better idea of what kind of hardware you’ll need to get your hands on if you want to use S2D. Choosing the right hardware is a critical part of putting together a successful S2D deployment. In the next part of this series on Storage Spaces Direct, we will focus more on architecture with S2D and also on the competitors in the market.

Thanks for reading!


CIOs need an AI infrastructure, but it won’t come easy

CIOs are starting to rethink the infrastructure stack required to support artificial intelligence technologies, according to experts at the Deep Learning Summit in San Francisco. In the past, enterprise architectures coalesced around efficient technology stacks for business processes supported by mainframes, then by minicomputers, client servers, the internet and now cloud computing. But every level of infrastructure is now up for grabs in the rush to take advantage of AI.

“There were well-defined winners that became the default stack around questions like how to run Oracle and what PDP was used for,” said Ashmeet Sidana, founder and managing partner of Engineering Capital, referring to the Programmed Data Processor, an older model of minicomputer.

“Now, for the first time, we are seeing that every layer of that stack is up for grabs, from the CPU and GPU all the way up to which frameworks should be used and where to get data from,” said Sidana, who serves as chief engineer of the venture capital firm, based in Menlo Park, Calif.

The stakes are high for building an AI infrastructure — startups, as well as legacy enterprises, could achieve huge advantages by innovating at every level of this emerging stack for AI, according to speakers at the conference.

But the job won’t be easy for CIOs faced with a fast-evolving field where the vendor pecking order is not yet settled, and their technology decisions will have a dramatic impact on software development. An AI infrastructure calls for a new development model built around a statistical, rather than deterministic, process. On the vendor front, Google’s TensorFlow technology has emerged as an early winner, but it faces production and customization challenges. Making matters more complicated, CIOs also must decide whether to deploy AI infrastructure on private hardware or use the cloud.

New skills required for AI infrastructure


Traditional application development approaches build deterministic apps with well-defined best practices. But AI involves an inherently statistical process. “There is a discomfort in moving from one realm to the other,” Sidana said. Acknowledging this shift and understanding its ramifications will be critical to bringing the enterprise into the machine learning and AI space, he said. 

The biggest ramification is also AI’s dirty little secret: The types of AI that will prove most useful to the enterprise, machine learning and especially deep learning approaches, work great only with great data — both quantity and quality. With algorithms becoming more commoditized, what used to be AI’s major rate-limiting feature — the complexity of developing the software algorithms — is being supplanted by a new hurdle: the complexity of data preparation. “When we have perfect AI algorithms, all the software engineers will become data-preparation engineers,” Sidana said.
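To illustrate what “data preparation” means in practice, here is a minimal, generic sketch of the kind of cleanup work that shifts effort away from algorithm design. The column names and cleaning rules are invented for the example.

```python
import pandas as pd

# Hypothetical raw telemetry with the usual defects: missing values,
# inconsistent labels and an unusable free-text column.
raw = pd.DataFrame({
    "latency_ms": [12.0, None, 9.5, 310.0],
    "region": ["us-east", "US-East", "eu-west", None],
    "notes": ["ok", "retry later", "", "timeout?"],
})

clean = (
    raw.drop(columns=["notes"])  # drop free text the model cannot use
       .assign(region=lambda d: d["region"].str.lower().fillna("unknown"))
       .assign(latency_ms=lambda d: d["latency_ms"].fillna(d["latency_ms"].median()))
)
print(clean)
```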

Then, there are the all-important platform questions that need to be settled. In theory, CIOs can deploy AI workloads anywhere in the cloud, as cloud providers like Amazon, Google and Microsoft, to name just some, can provide almost bare-metal GPU machines for the most demanding problems. But conference speakers stressed the reality requires CIOs to carefully analyze their needs and objectives before making a decision.

TensorFlow examined

There are a number of deep learning frameworks, but most are focused on academic research. Google’s is perhaps the most mature framework from a production standpoint, but it still has limitations, AI experts noted at the conference.


Eli David, CTO of Deep Instinct, a startup based in Tel Aviv that applies deep learning to cybersecurity, said TensorFlow is a good choice when implementing specific kinds of well-defined workloads like image recognition or speech recognition.

But he cautioned it requires heavy customization for seemingly simple changes like analyzing circular, rather than rectangular, images. “You can do high-level things with the building blocks, but the moment you want to do something a bit different, you cannot do that easily,” David said.
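As an example of the kind of well-defined workload David describes, a conventional image classifier can be assembled from TensorFlow’s high-level Keras building blocks in a few lines. This is a generic sketch, not Deep Instinct’s system; the input shape and layer sizes are arbitrary.

```python
import tensorflow as tf

# A conventional image classifier built from stock Keras building blocks.
# Fixed-size rectangular inputs like this are the path of least resistance;
# anything unusual (e.g. circular images) means custom layers and ops.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```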

The machine learning platform that Deep Instinct built to improve the detection of cyberthreats by analyzing infrastructure data, for example, was designed to ingest a number of data types that are not well-suited to TensorFlow or existing cloud AI services. As a result, the company built its own deep learning systems on private infrastructure, rather than running it in the cloud.

“I talk to many CIOs that do machine learning in a lab, but have problems in production, because of the inherent inefficiencies in TensorFlow,” David said. He said his team also encountered production issues with implementing deep learning inference algorithms based on TensorFlow on devices with limited memory that require dependencies on external libraries. As more deep learning frameworks are designed for production, rather than just for research environments, he said he expects providers will address these issues.

Separate training from deployment

It is also important for CIOs to make a separation between training and deployment of deep learning algorithms, said Evan Sparks, CEO of San Francisco-based Determined AI, a service for training and deploying deep learning models. The training side often benefits from the latest and fastest GPUs.  Deployments are another matter. “I pushed back on the assumption that deep learning training has to happen in the cloud. A lot of people we talk to eventually realize that cloud GPUs are five to 10 times more expensive than buying them on premise,” Sparks said.
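A quick break-even calculation shows why that cost multiple matters once training is continuous rather than occasional. All prices and utilization figures below are hypothetical placeholders, not quotes from any provider.

```python
# Hypothetical break-even point for buying a GPU server vs. renting cloud GPUs.
cloud_cost_per_gpu_hour = 3.00      # assumed cloud price per GPU-hour
on_prem_server_cost = 40_000.00     # assumed purchase price of a 4-GPU server
gpus = 4
training_hours_per_gpu_month = 400  # assumed steady training load

monthly_cloud_cost = cloud_cost_per_gpu_hour * gpus * training_hours_per_gpu_month
break_even_months = on_prem_server_cost / monthly_cloud_cost

print(f"Cloud spend per month: ${monthly_cloud_cost:,.0f}")
print(f"Hardware pays for itself in ~{break_even_months:.1f} months of steady training")
```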

Deployment targets can include web services, mobile devices or autonomous cars. The latter may have power, processing efficiency and latency issues that may be critical and might not be able to depend on a network. “I think when you see friction when moving from research to deployment, it is as much about the researchers not designing for deployment as limitations in the tools,” Sparks said. 

Entry-level Dell EMC SC Series adds data features

Dell EMC is trying to move its entry-level hybrid SC Series up the stack by adding features previously available only on higher-end models.

The Dell EMC SCv3000 line introduced this week replaces the SCv2000 building block. Dell EMC SC Series midrange storage is derived from Compellent array technology Dell acquired in 2011, and it’s among a handful of competing SAN array platforms inside the company since the 2016 Dell-EMC merger.

The SCv3000’s 3U form factor accepts 222 drives and scales to 1 PB of raw hybrid capacity per array. Two base models are available: the SCv3000 and SCv3020, each of which supports three new expansion enclosures. Customers can combine the base configuration and populate it with a mix of disk and flash. New features include data federation and protection and greater third-party software integration.

Brian Henderson, a director of midrange marketing for Dell EMC SC Series, said the merged vendor plans to continue adding data management features to carve a bigger slice of the midrange market. Dell EMC SC Series arrays compete with systems such as Hewlett Packard Enterprise MSA models and lower ends of the Hitachi Data Systems’ Virtual Storage Platform.

Even though Dell EMC SC Series and Unity both address the low-end and midrange markets, Henderson said Dell EMC plans to keep both product sets for now. “SC Series and Unity each have their own brands and their own set of customers,” he said. “We’re not taking anything away.

“The big deal on the SCv3000 is that we took the restraints off the system. The SCv2000 was a competitive product, but it was a little light on the feature set. We’ve added more horsepower to support a greater range of enterprise-class features,” Henderson added.

The SCv3000 launch also kicks off a Dell EMC money-back guarantee on certain flash products, starting with SC Series and Unity products. The vendor guarantees storage efficiency will improve at least 75%.

The guarantee eventually will extend to XtremIO and all-flash VMAX systems, Henderson said.

Compellent technology evolves with SCv3000 launch

The upgrade allows users to federate data moved across multiple SC arrays and replicate snapshots between the SC and Dell EMC PS Series — formerly EqualLogic — arrays.

All SC Series arrays run Dell Storage Manager, which supports third-party software integration, quality of service and integration of VMware VVOLs. A single SCv3000 supports 4,000 array-based snapshots.

Dell EMC Core OS supports array-based snapshots, synchronous volume failover between nodes and federated data movement. Dell EMC Data Progression manages the placement of data based on policy-driven usage patterns. Those data features previously were available only on Dell EMC SC Series SC5020, SC7020 and SC9000 arrays.

The SCv3000 integrates compression, but does not include data deduplication, which is available on larger SC models. Henderson said Dell EMC is offering a range of deployment services for midrange customers that want to install SC Series storage on their own.

Dell EMC said the starting price for the SCv3000 is less than $10,000.

Server Core installation offers perks, challenges for IT

Windows Server is a crucial part of the software stack, but the full OS can be overkill for certain enterprise workloads.

Microsoft removed the GUI in the Nano Server and Server Core installation options of Windows Server 2016 to cut the number of running services and processes. Because the smaller OS requires fewer resources, this frees more of the server’s RAM and compute power to operate more demanding workloads or additional VMs.

Microsoft estimates the virtual hard disk size for a full Windows Server 2016 installation at just over 10 GB, while a Server Core installation takes up slightly more than 6 GB of disk space. The minimal deployment footprint reduces the attack surface, which cuts down the time IT departments spend installing security updates.

Microsoft intends to remove the infrastructure role capabilities from Nano Server in the September 2017 semiannual channel update to further optimize that OS for container use. This leaves Server Core as administrators’ sole minimal-footprint option for general-purpose server deployments. Here are the system requirements, roles and challenges associated with a Server Core installation.

Typical Server Core uses

Microsoft recommends the following roles for a Server Core installation:

  • Active Directory (AD) Certificate Services;
  • AD Domain Services;
  • AD Lightweight Directory Services;
  • AD Rights Management Services;
  • Dynamic Host Configuration Protocol Server;
  • Domain Name System Server;
  • File Services;
  • Hyper-V;
  • Licensing Server;
  • Print and Document Services;
  • Remote Desktop Services Connection Broker;
  • Routing and Remote Access Server;
  • Streaming Media Services;
  • Web Server (including a subset of ASP.NET);
  • Windows Server Update Server; and
  • Volume Activation Services.

For workloads that do not require a GUI, use a lab to test the installation and functionality of Server Core, the workload and the associated management tools before a move to the live environment.

System requirements for a Server Core installation

While administrators can follow Microsoft’s minimum requirements for a Windows Server 2016 installation, that leaves few host resources available to properly run a workload — or multiple workloads in VMs.

Microsoft refrains from system requirement recommendations because not all server roles need the same amount of resources. Administrators should run a test deployment to measure if the workload runs properly under a certain configuration and adjust if necessary.


The minimum system requirements listed below are the same to install Server Core, Server with Desktop Experience — the full GUI version — and Nano Server for both Standard and Datacenter editions of Windows Server 2016.

CPU: Windows Server 2016 needs a 1.4 GHz 64-bit processor with an x64 instruction set. The processor must support additional feature sets, including:

  • No-eXecute on Advanced Micro Devices processors and eXecute Disable on Intel CPUs, which stop code execution in certain memory areas;
  • data execution prevention, which runs additional memory checks to prevent malicious code; and
  • second-level address translation support, which virtualizes memory space to reduce hypervisor overhead.

In addition, the processor must support:

  • the CMPXCHG16B instruction for high-performance data operations;
  • the LAHF (Load AH from Flags) and SAHF (Store AH to Flags) instructions, which are required for virtualization and floating-point operations; and
  • the PrefetchW instruction, which carries data closer to the CPU before a write.

Those are just the single-core clock and compatibility requirements. The number of processor cores — and the cache size in each core — affects overall performance. A processor with several cores and a larger cache supports more VMs.

Memory: Windows Server 2016 requires a minimum of 512 MB with error-correcting code or a similar technology. To create a VM, designate at least 800 MB or the setup will fail; after it’s installed, lower the RAM allocation as needed.

Network adapter: Network adapters must support a minimum of 1 Gigabit Ethernet bandwidth and the preboot execution environment feature. The network adapter has to conform to the Peripheral Component Interconnect (PCI) Express design. Organizations that will run multiple VMs on a server can install more than one network adapter on the host to avoid a single point of failure.

Storage and storage controllers: Windows Server 2016 requires at least 32 GB of disk storage but will need more space if the installation occurs over a network.

Plot out additional storage for dump files, paging and hibernation. Snapshot and replication features also need more disk space when a VM uses Windows Server 2016 as the guest OS.

The server storage adapter must use the PCI Express architecture. Windows Server 2016 does not support the following storage interfaces for its data, boot or page drives: Advanced Technology Attachment, Parallel ATA, Integrated Drive Electronics and Enhanced IDE.

Trusted Platform: A Trusted Platform Module (TPM) chip is not necessary to install the OS, but security features, such as BitLocker Drive Encryption, require TPM version 2.0 or later. Systems that meet TPM 2.0 need SHA-256 platform configuration register banks.
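A small preflight script can sanity-check a machine against the minimums listed above before an installation attempt. This sketch uses the third-party psutil package and only covers memory, physical core count and disk size, not the CPU instruction-set or TPM checks.

```python
import shutil
import psutil

# Minimums from the Windows Server 2016 requirements discussed above.
MIN_RAM_BYTES = 512 * 1024**2   # 512 MB (800 MB needed to install in a VM)
MIN_DISK_BYTES = 32 * 1024**3   # 32 GB system disk

ram_ok = psutil.virtual_memory().total >= MIN_RAM_BYTES
disk_total, _, _ = shutil.disk_usage("C:\\" if psutil.WINDOWS else "/")
disk_ok = disk_total >= MIN_DISK_BYTES
cores = psutil.cpu_count(logical=False) or psutil.cpu_count()

print(f"RAM  >= 512 MB : {ram_ok}")
print(f"Disk >= 32 GB  : {disk_ok}")
print(f"Physical cores : {cores} (more cores and cache support more VMs)")
```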

Deploy and manage Server Core

The setup wizard performs a clean installation of the Windows Server 2016 OS. A dialog box offers the choice to use the full version of Windows Server with the GUI or Server Core.

Because Server Core lacks a GUI, administrators cannot monitor or manage those deployments with the graphical management tools, such as Server Manager, familiar to most Windows shops. Instead, they control Server Core through a command prompt with PowerShell or with Remote Server Administration Tools (RSAT).

PowerShell cmdlets let administrators install, uninstall and configure Server Core. Automate complex Server Core configuration tasks with PowerShell scripts, rather than clicking through a GUI to accomplish the task.

RSAT includes a mix of tools, such as Microsoft Management Console snap-ins, Windows PowerShell cmdlet modules and command-line utilities, to oversee Server Core roles and features. RSAT does not run on Windows Server; it only operates on supported client systems.

Potential trouble spots with Server Core

While Server Core is a fully functioning version of Windows Server 2016, there are several differences that could pose management difficulties for admins unfamiliar with the compact OS.

Users cannot convert a Server Core installation to a Server with Desktop Experience version. That conversion was possible with some earlier versions of Windows Server, but organizations that build a Server Core workload and then decide to switch to the full Windows Server 2016 option need to perform a clean installation. This reinstallation and reconfiguration process can cause downtime.

There are also risks and potential troubleshooting issues with Server Core management via the command line. Even the most skilled IT professionals type in the wrong PowerShell command and cause errors from time to time. Despite Server Core’s advantages, many organizations prefer the familiar GUI administrative tools in the full Windows Server installation.

Expect service providers to ease Azure Stack deployment

Microsoft is about to release Azure Stack, after two years and many bumps in the road. Despite the hoopla, it’s unclear just how many customers will be there to warmly greet the new arrival.

Microsoft has said that Azure Stack offers both infrastructure as a service (IaaS) and platform-as-a-service capabilities. As such, it brings the perks of the cloud service down into the data center. This might tempt businesses long frustrated with tangled, difficult-to-manage multicloud setups, said Mike Dorosh, an analyst at Gartner.

Dorosh said that, given the product’s complex licensing terms, he doubts many IT shops would opt for an Azure Stack deployment directly from a Microsoft hardware partner — at least initially. Dell EMC, Hewlett Packard Enterprise, Lenovo, Avanade and Huawei offer Azure Stack hardware bundles.

Microsoft designed Azure Stack deployment to be a simple process. Jeffrey Snover, a Microsoft technical fellow, said the installation should be quick and its complexity largely obscured by Microsoft and the hardware vendor. But Dorosh also said he predicts it will test businesses as they attempt to migrate and refactor existing apps and develop and deploy new apps onto Azure Stack.

“Then, the challenge becomes: You don’t have the skills and the tools and the knowledge or the staff to work it,” Dorosh said.

Other factors will likely slow initial adoption. Businesses that have recently invested in a private cloud or their infrastructure won’t replace these new investments with Azure Stack, Dorosh said. He also expects to hear concern about licensing and the speed of Microsoft’s updates.

Questions linger on Microsoft licensing

Azure Stack could confuse customers with its different fee models. Microsoft uses a consumption model for five Azure Stack services: Base virtual machine; Windows Server virtual machine; Azure Blob Storage; Azure Table and Queue Storage; and Azure App Service. Businesses can use existing licenses to reduce costs.

A company can subscribe to Azure Stack on a base VM charge of $0.008 per virtual CPU per hour or $6 per vCPU per month. Without a license, a Windows Server VM will cost $0.046 per vCPU per hour or $34 per vCPU per month. There are also options for when there is no public internet connection, called disconnected, and fixed-fee models. An IaaS package costs $144 per core per year, and adding an app service brings it to $400 per core per year.
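Using only the list prices quoted above, a quick comparison shows how the consumption and fixed-fee models diverge with utilization. The vCPU count and hours are hypothetical, and cores and vCPUs are treated as 1:1 for simplicity.

```python
# Compare Azure Stack charging models using the list prices quoted above.
# Utilization assumptions are hypothetical; cores and vCPUs treated as 1:1.
vcpus = 16
hours_per_month = 730

base_vm_hourly = 0.008 * vcpus * hours_per_month     # $0.008 per vCPU per hour
base_vm_monthly = 6.00 * vcpus                       # $6 per vCPU per month
windows_vm_hourly = 0.046 * vcpus * hours_per_month  # without an existing license
iaas_fixed_annual = 144.00 * vcpus                   # $144 per core per year
iaas_app_service_annual = 400.00 * vcpus             # $400 per core per year

print(f"Base VM, hourly model:   ${base_vm_hourly:,.2f}/month")
print(f"Base VM, monthly model:  ${base_vm_monthly:,.2f}/month")
print(f"Windows VM, hourly:      ${windows_vm_hourly:,.2f}/month")
print(f"IaaS package (fixed):    ${iaas_fixed_annual:,.2f}/year")
print(f"IaaS + App Service:      ${iaas_app_service_annual:,.2f}/year")
```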

Dorosh said he expects businesses to get better terms from Microsoft on Azure Stack deployment than with similar offerings, such as Azure Pack, because it will be bundled into the product. However, Microsoft must also streamline its licensing terms to avoid confusion. For example, if a service provider has a SQL database with multiple SQL licenses, it will need to translate those licenses to the Azure Stack model.

“[Microsoft used to say] it depends on where you bought it and which programs you bought it under,” Dorosh said. “But now, [customers] want to know, ‘Can I move my SQL license or not? Yes or no?'”

Customers must also make frequent updates to Azure Stack to continue to receive support. A company must apply a Microsoft update within six months, but service providers want Microsoft to push adopters to stay within two months of the regular patches, Dorosh said. Falling six months behind would leave both service providers and Azure Stack users at a disadvantage.

“The further you fall behind, the less Azure you are,” Dorosh said. “You’re no longer part of the Azure cloud family — you’re Azure-like.”

More Azure Stack coverage

  • One size won’t fit all for Azure Stack debut: Initially, Azure Stack will only be offered as a one-rack deployment. Microsoft said it might extend to multirack deployments by early 2018. For now, the one-rack deployment could dampen interest in Azure Stack at larger businesses that don’t want to extend hosting into the Azure public cloud.
  • Analysts say Azure Stack will outpace VMware on Amazon Web Services: Both Azure Stack and VMware Cloud on AWS are expected to hit the hybrid cloud technology market in September. Even though VMware Cloud on AWS targets the world’s largest cloud service provider, analysts expect Azure Stack to sell better. A leading reason is that many Azure Stack customers will be migrating data with one vendor — from a Microsoft-operated data center to the Azure public cloud — while VMware Cloud on AWS requires you to use technologies from different vendors.
  • Azure Stack architect addresses delay: When Microsoft first announced Azure Stack in May 2015, the plan was to release it by the end of 2016. The company then pushed the release to September 2017. Snover, the Azure Stack architect, told SearchWindowsServer in June that the code was not ready for the original launch date. “As much as possible, we are trying to be Azure-consistent,” he said, and the effort to convert Azure to work on premises required more time.
  • Azure Stack isn’t a steppingstone to public cloud: Microsoft anticipates its Azure Stack customers will be businesses that have a long-term plan for hybrid cloud deployment. Although you could use Azure Stack as a “migration path to the cloud,” as Julia White, Microsoft corporate vice president for Azure, put it, the software provider’s internal research suggests that won’t be the case: Eighty-four percent of customers have a hybrid cloud strategy, and 91% of them look at hybrid cloud computing as a long-term workflow. Microsoft expects companies with data sovereignty issues will look to Azure Stack as a way to get cloud computing while keeping data in-house.