
The Complete Guide to Azure Virtual Machines: Part 1

Azure Virtual Machines make virtualization, an already hugely flexible technology, even more adaptable through remote hosting.

Virtual machines are part of Azure’s Infrastructure as a Service (IaaS) offering, which gives you the flexibility of virtualization without having to invest in the underlying infrastructure. In simpler terms, you are paying Microsoft to run a virtual machine of your choosing in their Azure environment while they provide you access to the VM.

One of the biggest misconceptions I see in the workplace is that managing Cloud Infrastructure is the same as or very similar to managing on-premise infrastructure. THIS IS NOT TRUE. Cloud Infrastructure is a whole new ball game. It can be a great tool in our back pockets for certain scenarios but only if used correctly. This blog series will explain how you can determine if a workload is suitable for an Azure VM and how to deploy it properly.

Why Use Azure Virtual Machines Over On-Premise Equipment?

One of the biggest features of the public cloud is its scalability. If you write an application and need to scale up the resources dramatically for a few days, you can create a VM in Azure, install your application, run it there, and turn it off when done. You only pay for what you use. If you haven’t already invested in your own physical environment, this is a very attractive alternative. The agility this provides software developers is on a whole new level and enables companies to become more efficient at creating applications, and the ability to scale on demand is huge.

Should I Choose IaaS or PaaS?

When deploying workloads in Azure, it is important to determine whether an application or service should run using Platform as a Service (PaaS) or a Virtual Machine (IaaS). For example, let’s say you are porting an application into Azure that runs on SQL. Do we want to build a virtual machine and install SQL, or do we want to leverage Azure’s PaaS services and use one of their SQL instances? There are many factors in deciding between PaaS and IaaS, but one of the biggest is how much control you require for your application to run effectively. Do you need to make a lot of changes to the registry, and do you require many tweaks within the SQL install? If so, then the virtual machine route would seem a better fit.

How To Choose The Right Virtual Machine Type

In Azure, the virtual machine resource specifications are cookie cutter. You don’t get to customize down to the details of how much CPU and memory you want. VMs come in a set of predefined sizes, and you have to make those resource templates work for your computing needs. Making sure the correct size of VM is selected is crucial in Azure, not only because of performance implications for your applications but also because of the pricing. You don’t want to be paying more for a VM that is too large for your workloads.

Make sure you do your homework to determine which size is right for your needs. Also, pay close attention to I/O requirements. Storage is almost always the most common performance killer, so do your due diligence and make sure you’re getting the VM with the proper IOPS (Input/Output Operations per Second) for your requirements. For Windows licensing, Microsoft covers the license and the Client Access License if you’re running a VM that needs CALs. For Linux VMs, licensing differs by distribution.
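One quick way to do that homework is to enumerate the sizes a region actually offers and compare their specs. A small example using the Az PowerShell module (the region is arbitrary):

# List the VM sizes available in a region, with core and memory counts
Get-AzVMSize -Location "EastUS" | Sort-Object NumberOfCores | Format-Table Name, NumberOfCores, MemoryInMB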

Before we go and create a virtual machine inside Azure, let’s go over one of the gotchas that you might run into if you’re not aware. Since everything in Azure is “pay as you go”, if you’re not keeping an eye on pricing at all times, you or your company may end up with a hefty bill from Microsoft. One of the common mistakes with VMs is that if you don’t completely remove a VM, you can still be charged for it. Simply shutting down the VM will not stop the meter from running – the VM has to be deallocated, because a stopped-but-still-allocated VM continues to reserve hardware from Microsoft, so you’ll still be billed for the compute. Also, when you delete the VM, you will need to delete the managed disk separately. The VM itself is not the only cost applied when running virtual machines.
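For instance, stopping a VM from PowerShell deallocates it by default, which is what actually stops the compute meter. A hedged example with the Az module (the resource group and VM names are made up):

# Deallocate the VM so compute charges stop (the managed disk still bills until deleted)
Stop-AzVM -ResourceGroupName "LukeLabRG" -Name "LukeLabVM1" -Force

# Confirm the power state reads "VM deallocated" rather than just "VM stopped"
Get-AzVM -ResourceGroupName "LukeLabRG" -Name "LukeLabVM1" -Status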

Getting Started – Creating the Virtual Network

We will now demonstrate how to configure a virtual machine in Azure and how to connect to it. First, we will need to create the virtual networking so that the VM has a network to talk out on. Afterward, we will create the Network Security Group, which acts as the “firewall” for the VM, and then finally we will create the VM itself. To create the Virtual Network, log into the Azure Portal and select “Create a Resource”. Then click on Networking > Virtual Network:


Now we can specify the settings for our Virtual Network. First, we’ll give it a name; I’ll call mine “LukeLabVnet1”. I’ll leave the address space at the default here, but we could make it smaller if we chose to. Then we select our subscription. You can use multiple subscriptions for different purposes, like a Development subscription and a Production subscription. Resource groups are a way for you to manage and group your Azure resources for billing, monitoring, and access control purposes. We already have a resource group created for this VM and its components, so I will go ahead and select that; if we wanted, we could create a new one on the fly here. Then we select the location, which is East US for me. Next, we’ll give the subnet a name, because we can create multiple subnets on this virtual network later; I’ll call it “LukeLabSubnet”. I’ll leave the default address range for the subnet since we are just configuring one VM and setting up access to it. Once we are done, we hit “Create”:
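For reference, the same virtual network and subnet could be created from PowerShell instead of the portal. A sketch using the Az module (the resource group name is hypothetical; the other names and address ranges simply mirror the walkthrough):

# Define the subnet, then create the virtual network that contains it
$subnet = New-AzVirtualNetworkSubnetConfig -Name "LukeLabSubnet" -AddressPrefix "10.0.0.0/24"
New-AzVirtualNetwork -Name "LukeLabVnet1" -ResourceGroupName "LukeLabRG" -Location "EastUS" -AddressPrefix "10.0.0.0/16" -Subnet $subnet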

Now, to get to our newly created Virtual Network, on the left-hand side of the portal we select “Virtual Networks” and click on the one we just deployed:

We can configure all of the settings for our Virtual Network here. However, for the simplicity of the demonstration, we will leave everything as it is for now:

Now that we have our virtual network in place, we will need to create our Network Security Group and then finally deploy our VM, which we will do in part 2 of this series. As you can see, there are a lot of components to learn when deploying VMs in Azure.

Comments/Feedback?

If you’re unsure about anything stated here let me know in the comments below and I’ll try to explain it better.

Have you tried Azure Virtual Machines? Let us know your verdict!

Hyper-V Quick Tip: How to Choose a Live Migration Performance Solution

Q: Which Hyper-V Live Migration Performance Option should I choose?

A: If you have RDMA-capable physical adapters, choose SMB. Otherwise, choose Compression.

You’ve probably seen this page of your Hyper-V hosts’ Settings dialog box at some point:


The field descriptions go into some detail, but they don’t tell the entire story.

Live Migration Transport Option: TCP/IP

In this case, “TCP/IP” basically means: “don’t use compression or SMB”. Prior to 2012, this was the only mode available. A host opens up a channel to the target system on TCP port 6600 and shoots the data over as quickly as possible.

Live Migration Transport Option: Compression

Introduced in 2012, the Compression method mostly explains itself. The hosts still use port 6600, but the sender compresses the data prior to transmission. This technique has several things going for it:

  • The vast bulk of a Live Migration involves moving the virtual machine’s memory contents. Memory contents tend to compress quite readily, resulting in a substantially reduced payload size over the TCP/IP method
  • For most computing systems, the CPU cycles involved in compression are faster and cheaper than the computations needed to break down, transmit, and re-assemble multi-channel TCP/IP traffic
  • Works for any environment
  • Each virtual machine that you move simultaneously (limited by host settings) will get its own unique TCP channel. That gives the Dynamic and Hash load balancing algorithms an opportunity to use different physical pathways for simultaneous migrations

Live Migration Transport Option: SMB

Also new with 2012, the SMB transport method leverages the new capabilities of version 3 (and higher) of the SMB protocol. Two things matter for Live Migration:

  • SMB Direct: Leveraging RDMA-capable hardware, packets transmitted by SMB Direct move so quickly you’d almost think they arrived before they left. If you haven’t had a chance to see RDMA in action, you’re missing out. Unfortunately, you can’t get RDMA on the cheap.
  • SMB Multichannel: When multiple logical paths are available (as in, a single host with different IP addresses, preferably on different networks), SMB can break up traffic into multiple streams and utilize all available routes

Why Should I Favor Compression over SMB?

The SMB method sounds really good, right? Even if you can’t use SMB Direct, you get something from SMB multichannel, right? Well… no… not much. Processing of TCP/IP packets and Ethernet frames has always been intensive at scale. Ordinarily, our server computers don’t move much data so we don’t see it. However, a Live Migration pushes lots of data. Even keeping a single Ethernet stream intact and in order can cause a burden on your networking hardware. Breaking it up into multiple pieces and re-assembling everything in the correct order across multiple channels can pose a nightmare scenario. However, SMB Direct can offload enough of the basic network processing to nearly trivialize the effort. Without that aid, Compression will be faster for most people.

Should I Ever Prefer the Plain TCP/IP Method?

I have not personally encountered a scenario in which I would prefer TCP/IP over the other choices. However, it does cause the least amount of host load. If your hosts have very high normal CPU usage and you want Live Migrations to occur as discreetly as possible, choose TCP/IP. You may add in a QoS layer to tone it down further.
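If you want to set the option without opening the Settings dialog, the same choice is exposed through the Hyper-V PowerShell module. A minimal sketch (run it on each host):

# Check the current selection
Get-VMHost | Select-Object VirtualMachineMigrationPerformanceOption

# SMB if you have RDMA-capable adapters, otherwise Compression
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression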

Your Comments

Don’t agree with my assessment, or have you encountered situations that don’t line up with my advice? I’m happy to hear your thoughts on Live Migration Performance Options and which solutions are correct to choose in various circumstances. Write to me in the comments below.

If you like this Hyper-V Quick tip, check out another in the Hyper-V Quick Tip Series.

Insider preview: Windows container image

Earlier this year at Microsoft Build 2018, we announced a third container base image for applications that have additional API dependencies beyond Nano Server and Windows Server Core. Now the time has finally come and the Windows container image is available for Windows Insiders.

Why another container image?

In conversations with IT Pros and developers, some recurring themes came up that went beyond the nanoserver and windowsservercore container images:
Quite a few customers were interested in moving their legacy applications into containers to benefit from container orchestration and management technologies like Kubernetes. However, not all applications could be easily containerized, in some cases due to missing components like proofing support which is not included in Windows Server Core.
Others wanted to leverage containers to run automated UI tests as part of their CI/CD processes or use other graphics capabilities like DirectX which are not available within the other container images.

With the new windows container image, we’re now offering a third option to choose from based on the requirements of the workload. We’re looking forward to seeing what you will build!

How can you get it?

If you are running a container host on Windows Insider build 17704, you can get this container image using the following command:

docker pull mcr.microsoft.com/windows-insider:10.0.17704.1000

To simply get the latest available version of the container image, you can use the following command:

docker pull mcr.microsoft.com/windows-insider:latest

Please note that for compatibility reasons we recommend running the same build version for the container host and the container itself.
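Once the pull completes, a quick sanity check might look like the following (a hedged example; it assumes Docker is configured for Windows containers on that Insider host and simply echoes the build number from inside a throwaway container):

docker run --rm mcr.microsoft.com/windows-insider:10.0.17704.1000 cmd /c ver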

Since this image is currently part of the Windows Insider preview, we’re looking forward to your feedback, bug reports, and comments. We will be publishing newer builds of this container image along with the insider builds.

All the best,
Lars

What’s New in Windows Server 2019 Insider Preview Build 17692

Windows Server 2019 is set to launch later this year – but exactly what’s in store for the latest version of the popular Microsoft operating system?

We now have access to a new Windows Server 2019 preview build, and this one has some things for Hyper-V users. You can read their official notification here. If you haven’t gotten into the preview program yet, you can join up and test it out – sign up to be an Insider. You might want to put this one directly onto hardware if you have the opportunity. Otherwise, try to use a system capable of nested virtualization. I’ll go over the new offerings released in the latest version: insider preview build 17692.

Ongoing Testing Request

As a reminder, Microsoft has made a request for specific items to test in each build. They want users to specifically test:

  • In-place upgrade from WS2012R2 and/or WS2016
  • Compatibility with applications

I’ve done a little bit of my own testing down this avenue with positive results. My process started from a checkpoint of the original system. For each new build, I revert to the creation point of that checkpoint and install the new build. If you make the effort to do the same, don’t forget to report your findings! To do so, you can use the Feedback Hub application on your Windows 10 desktop: select the Server category and then choose the applicable sub-category for your feedback.

Build 17692 Feature 1: Dedicated Hyper-V Server

Hopefully, this is common knowledge now, but Hyper-V ships in its own SKU separate from Windows Server. Up until now, the preview builds of Windows Server 2019 only included the full server product. Insider preview build 17692 now has the separate Hyper-V Server product.

17692 does not include any specific functionality changes for Hyper-V. If you will use Hyper-V Server in your environment, you can start testing it now.

Build 17692 Feature 2: System Insights

In the shortest form, the System Insights feature automates much of the work of gathering performance data and using it to predict future behavior and needs. Azure offers similar functionality and features, but System Insights requires nothing external. The free Windows Admin Center (formerly Project Honolulu) will give you nice charts on current and anticipated performance. You can extend the reporting functionality with response scripts.
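If you would rather poke at it from PowerShell than from Windows Admin Center, the preview exposes cmdlets along these lines. A sketch based on the preview’s SystemInsights module (the capability names shown are examples and may differ on your build):

# List the built-in forecasting capabilities
Get-InsightsCapability

# Run one on demand and read back its latest prediction
Invoke-InsightsCapability -Name "CPU capacity forecasting"
Get-InsightsCapabilityResult -Name "CPU capacity forecasting"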

Additional Download: Server Core App Compatibility Features on Demand

While not contained within the main bits of the build yet, this has to be one of the coolest features we’ve seen added. Insider preview build 17692 includes the ability to install a Feature on Demand package that will add several of the main Windows Server Management tools that we’ve come to rely on over the years. All of this on Server Core, and without the additional requirements and bloat from the full GUI installation!


Current tools available with this feature are:

  • Performance Monitor (PerfMon.exe)
  • Resource Monitor (Resmon.exe)
  • Device Manager (Devmgmt.msc)
  • Microsoft Management Console (mmc.exe)
  • Windows PowerShell ISE (Powershell_ISE.exe)
  • Failover Cluster Manager (CluAdmin.msc)
  • Process Monitor (Procmon.exe) and other Sysinternals
  • SQL Server Management Studio

Commentary on Windows Server 2019 Insider Preview Build 17692

First, a bit about the split-out of Hyper-V Server. That was an expected move as we near release. Microsoft has not signaled any intent to end SKU separation. Organizations using Hyper-V Server can now start previewing 2019 in the same way that they use previous versions today. You can expect Hyper-V Server to start extending down the same paths as Windows Server. Of course, the Windows Server features won’t appear in Hyper-V Server, but management and reporting capabilities will.

On to the bigger topic of System Insights. If you were looking for a good reason to make the jump to 2019, this might be it. In the past, we had to either guess at utilization or become comfortable with the various performance monitoring tools. Guesses nearly always result in overspending. Proper performance monitoring consumes a lot of time, especially during the analysis phases. System Insights eases the data-gathering process. Even better, it leverages modern machine learning technologies to help predict what sort of needs you’ll have in the future. Windows Admin Center will make the whole thing easy to use.

My primary hope for System Insights is that it will put a knife into one of my personal pet peeves: predatory “consultants” and incompetent “admins”. I commonly see unscrupulous outfits peddling a one-size-fits-all design as a magic bullet to every customer they have. Of course, it always does everything that those customers need because it can handle many times more work than they’ll ever reasonably throw at it. The “consultant” never needs to spend any time on analysis, which saves them money. The “consultant” gets more commission from the bigger system, which makes them money. Their customers don’t know enough to even realize that they’ve been robbed. I see similar behavior from a lot of in-house “admins”. They just tell the business owners to overbuy and their bosses don’t know any different.

Now, with System Insights, the owner of that four-user shop who was conned into buying a full 10GbE infrastructure will be able to open up Windows Admin Center and see how his network utilization only averages 0.5 Mbps. The operations manager who trusted his IT staff will quickly be able to see that the 40TB system they “absolutely needed” to buy only has 2TB of usage. Sure, the liars will exert more effort and come up with better lies, but they’ll be less effective. And, as time goes on, business owners become more technologically savvy. I hope that we’re seeing the beginning of the end of those unsavory practices.

Of course, not all consultants and admins behave that way. Many genuinely want to perform their duties to the best of their ability, and System Insights will make their jobs much easier. They’ll be able to use it to quickly show their customers and bosses how well they’re doing and use its predictive capabilities to stay ahead of problems — the exact intent behind the feature. I look forward to System Insights as a powerful tool that improves everyone’s experience with Windows Server 2019.

The inclusion of some of the core management utilities in Server Core is also welcome. The lack of these tools on Server Core frustrated some admins and stifled adoption of Core as the primary server SKU. With them available now, it seems Microsoft has recognized this gap and is moving to address it.

Conclusion

Overall, another solid release. Things are looking up for when 2019 officially hits the shelves! What are your thoughts on the features and capabilities this build brings? Will you be utilizing these features inside your datacenter? Whatever direction Windows Server 2019 takes from here, rest assured we will be providing our expert insight and analysis throughout development and at launch. Watch out for more Windows Server 2019 content on our blog.

Let us know in the comments section below!

8 Key Questions on Cloud Migration Answered

If you follow our blog, you’ll likely know that we recently hosted an Altaro panel-style webinar featuring Microsoft MVPs Didier Van Hoye, Thomas Maurer, and myself. The topic of the webinar centered around the journey to the cloud, or simply put, migrating to cloud technologies. Cloud technologies include on-prem hosted private clouds, hybrid cloud solutions like Azure Stack, and public cloud technologies such as Microsoft Azure. We chose this topic because we’ve found that while most IT Pros will agree that adopting cloud technologies is a good idea, many of them are unsure of the best way to get there. To be honest, I think that uncertainty is to be expected given the vast number of options emerging cloud technologies provide. The aim of this webinar was to clarify the services available and how to decide which form of cloud adoption will be best for you.

Judging by the number of questions asked during the webinar, this topic is something quite a lot of our audience is interested in hearing more about. I’ve decided to group the most commonly asked ones here and omit the more specific questions that relate to particular set-ups and individual requirements. Apart from the questions, the topic also raised a lot of comments and discussion, which I think is well worth including here so you can get a feel for how others in the IT community are dealing with the issue of cloud migration and the various concerns it brings with it (further down the page).

Remember if you didn’t have a chance to ask a question during the webinar, or if you were unable to attend and want to ask something now, I will be more than happy to answer any questions submitted through the comment box at the bottom of this page.

Revisit the Webinar

If you haven’t already watched the webinar (or if you just want to watch it again) you can do so HERE


8 Questions on Cloud Technologies and Migration Answered

Q. When can you consider a deployment a hybrid cloud? Is it Azure Stack? Is it something as simple as a VPN linking on-prem and a public cloud?

A. I don’t know if there is an official definition, but current industry opinion would state that a hybrid cloud is any deployment where your workloads and deployments stretch from on-prem to a public cloud provider such as Azure or AWS.

Q. With the release of Windows Admin Center, will we see the RSAT (Remote Server Administration Tools) go away?

A. No. At this time, both management solutions will continue to be developed by their respective teams. With that said, if the adoption of WAC is strong enough, we could potentially see the slow “phasing out” of RSAT, possibly as soon as the next version of Windows Server (after 2019).

Q. Is there any way to connect containers to Windows Admin Center?

A. As it stands at the time of this writing, no. There currently is no mechanism to manage containers from WAC. With that said, due to WAC’s extensibility, it’s not out of the question for a 3rd party vendor (or even Microsoft) to write an extension for WAC that would allow you to do so.

If you need advanced management of containers today, take a look at an orchestration tool like Kubernetes.

Q. How does Azure Stack compete with current open source private clouds in the industry such as OpenStack? Pricing is quite different and some can even be seen as “free” by higher management while disregarding the needed effort to support such a deployment.

A. While it’s true that OpenStack and other open source cloud platforms like it can potentially be free, it’s not really an apples-to-apples comparison. Azure Stack is the power and capabilities of Azure inside your datacenter. Microsoft has taken everything they’ve learned with public Azure and packaged it up for you to use at your location. You manage it via the web and get billed per usage, much the same way as with Azure.

OpenStack certainly has its uses, and I’m a huge supporter of open source, but if you’re a Microsoft-centric shop looking to host a cloud for your organization, it’s tough to go wrong with Azure Stack due to the similarities in management and integration with public Azure. At the end of the day you have to ask yourself: do you want to use and consume cloud services, or do you want to build a cloud? Remember that building a cloud is difficult, costly, and time-intensive. It’s possible, but ongoing management can be difficult. With Azure Stack, much of that work and testing is taken care of for you.

Q. Do you have any suggestions for using Azure as a DR site?

A. It certainly is possible to use Azure for DR, and it’s often seen as one of the first services to move into the public cloud. You can use Azure to host offsite backups and/or recover to a nested hypervisor inside of Azure using a product such as Altaro VM Backup. If you need a more “hot” DR approach, you could look at something like Azure Site Recovery as well.

Q. What are your thoughts on using cloud services to host file services for a small number of remote users?

A. While you could certainly use something like OneDrive for Business or Azure Files for this, you need to first consider latencies and access times. Are your users consuming file types that work OK with longer-than-local latencies? If so, these services may work for you. If not, local on-network file storage may still be a requirement. Whatever route you choose, remember that file performance is often one of the most ticket-generating user issues. Make sure you test before settling on an option.

Q. What are the rough costs for storage in Azure?

A. See the Azure Pricing Calculator for the latest pricing information.

Q. Is there a “Cost Meter” in the Azure interface? Some way of allowing you to keep an eye on mounting costs?

A. This is an area that Microsoft has continued to improve. The Azure Portal has many of its own cost monitoring and estimation tools, but if you need more than the basics, then take a look at Azure Cost Management.

Thoughts and Opinions from Webinar Attendees

On companies utilizing cloud technologies

“I Agree. It’s almost never going to be 100% cloud, except with brand new companies, and even then, a small number. 99.99% Will be Hybrid”

-Mark

“I Think we are ready for the cloud, but a lot of delay is being caused by software vendors. They are not ready for the cloud since their software was developed in the late 90s and the recent updates only contain updated branding and minor code changes. The cloud is entirely new for them and it scares them”

-Jos

On moving existing workloads to the cloud and dealing with old Operating Systems

“There is way too much very old stuff that will be difficult to move to the cloud. It will have to wait until there are resources (read: money) to re-architect the application/platform”

-Mark

“There are a LOT of 2003 boxes running in production still”

-Mark

“There are even still Windows NT Boxes running!”

-Jos

“We have some old NT and 2003 servers due to old technology interfaces, plus the original designers have left and there is no documentation”

-Steve

On the Need for On-Premises Equipment

“On-Premises data will always be required due to local/country laws. Think Switzerland, and think of the new GDPR laws in the EU. Almost every country will have their own local data center, Azure, AWS, Google…etc. It is the way it will go.”

-Mark

On the DevOps Movement in the Industry

“Don’t forget the OPS in DevOPS…. We are also interested in it and it is no longer strictly a Dev thing.”

-Nuno

On Container Technologies

“Still, A container is the App “package”. It still needs to run on something and while it can accelerate the delivery process, there’s still a huge dependency on the infrastructure landscape and IMO it’s really where Ops can shine and their current knowledge can translate into the container world”

-Nuno

On Getting Started with the Cloud

“Very good point about doing your personal systems in the cloud. I agree and am doing it also.”

-Mark

Wrap-Up

As you can see, there are plenty of questions when it comes to moving to the cloud, but none of them are insurmountable. Moving to the cloud can be predictable and doable; you just need to do your homework before you make the move.

What are your thoughts? Is the cloud something you’re considering in the 2018 calendar year? Why? Why not? Also, if you have additional questions, or you attended our webinar and don’t see your question above, be sure to let us know in the comments form below!

Thanks for reading!

An Introduction to the Microsoft Hybrid Cloud Concept and Azure Stack

In recent years, so-called “cloud services” have become more and more interesting, and some customers are already thinking of going 100% cloud. There are a lot of competing cloud products out there, but is there a universal description of a cloud service? This is what I will address here.

Let’s start with the basics. Since time began (by that I mean “IT history”), we have all been running our own servers in our own datacenters with our own IT employees. A result of this was that you had many different servers for your company, all configured individually, and your IT staff had to deal with that high number of servers. This led to a heavy and increasing load on IT administrators: no time for new services, and often no time even to update existing ones to mitigate the risk of being hacked. In parallel, development teams and management expected IT to behave in an agile fashion, which was impossible under those conditions.

Defining Cloud Services

This is not a sustainable model, and it is where the cloud comes in. A cloud service is a highly optimized, standardized service that works out of the box without any small changes to the configuration. Cloud services provide a way to simply consume a service (much like drawing power from a wall plug) with a predefined and guaranteed SLA (service level agreement). If the SLA is breached, you as the customer even get money back. The catch is that these services need to run in a highly standardized setup, in highly standardized datacenters, which are geo-redundant around the world. When it comes to Azure, these datacenters are run in so-called “regions”, with a minimum of three datacenters per region.

In addition to this, Microsoft runs its own backbone (not the public internet) to provide a high quality of service; in other words, available bandwidth that meets Quality of Service (QoS) requirements.

To say it in one sentence: a cloud service is a highly standardized IT service with guaranteed SLAs, running in public datacenters, available from everywhere around the world at high quality. From the financial point of view, you generally pay per user, per service, or per some other flexible unit, and you can increase or decrease consumption based on your current needs.

Cloud Services – your options

If you want to invest in cloud services, you will have to choose between:

  • A private cloud
  • A public cloud
  • A hybrid cloud

A private cloud consists of IT services provided by your internal IT team, but delivered in the same manner you could get them as an external service. It runs in your own datacenter and only hosts services for your company or company group. This means you will have to provide the required SLA yourself.

A public cloud describes IT services provided by a hosting service provider with a guaranteed SLA. The services are delivered from public datacenters and are not spun up individually just for you.

A hybrid cloud is a mixture of a public and a private cloud, or in other words, “a hybrid cloud is an internet-connected private cloud with services that are being consumed as public cloud services”. Hybrid cloud deployments can be especially useful if there is a reason not to move a service to a public cloud, such as:

  • Intellectual property needs to be saved on company-owned dedicated services
  • Highly sensitive data (e.g. health care) is not allowed to be saved on public services
  • Lack of connectivity could cut you off from the public cloud if you are in a region with poor connectivity

Responsibility for Cloud Services

If you decide to go with public cloud services, the question is always how many of your network services are you willing to move to the public cloud?

The general answer is: the more services you can transfer to the cloud, the better the result. However, even the best-laid plans can be at the mercy of your internet connectivity, which can cut you off from these services if not planned for. Additionally, industry regulations have made a 100% cloud footprint difficult for some organizations. The hybrid solution is therefore the most practical option for the majority of business applications.

Hybrid Cloud Scenarios

These reasons drove Microsoft’s decision to provide Azure for your own datacenter as a packaged solution based on the same technology as Azure itself. Azure is built around the concepts of REST endpoints and ARM templates (JSON files with declarative definitions of services). Additionally, Microsoft deemed that this on-premises Azure solution should not provide only IaaS; it should be able to run PaaS, too, just like the public Azure cloud.

This basically means that for a service to become available in this new on-prem “Azure Stack”, it must already be generally available (GA) in public Azure.

This solution is called “Azure Stack” and comes on certified hardware only. This ensures that you as the customer get the performance, reliability, and scalability you expect from Azure with Azure Stack, too.

As of today, the following hardware OEMs are part of this initiative:

  • DELL
  • HPE
  • Lenovo
  • Cisco
  • Huawei
  • Intel/Wortmann
  • Fujitsu

A range of Azure services is available with Azure Stack today, and as it is an agile product from Microsoft, we can expect MANY interesting updates in the future.

With Azure Stack, Microsoft provides a simple way to spread services between on-premises and the public cloud. Possible scenarios include:

  • Disconnected scenarios (Azure Stack in planes or ships)
  • Azure Stack as your development environment for Azure
  • Low latency computing
  • Hosting Platform for MSPs
  • And many more

As we all know, IT is hybrid today in most industries all over the world. With the combination of Azure Stack and Azure, you have the chance to fulfill those requirements and set up a cloud model that fits all of your company’s services.

Summary

As you have seen, Azure Stack brings public Azure to your datacenter with the same administration and configuration models you already know from public Azure. There is no need to learn everything twice: training costs go down, and the standardization gives more flexibility and puts less load on local IT admins, which gives them time to work on new solutions of better quality. Also, with cloud-style licensing, things become less complex, as everything is simply based on a usage model. You can even link your Azure Stack licenses directly to an Azure subscription.
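To illustrate the “no need to learn twice” point: the same ARM template deployment you run against public Azure can be pointed at an Azure Stack region just by signing in to a different environment. A rough sketch with the Az PowerShell module (the environment name, resource group, and template file are placeholders, and the Azure Stack environment must already be registered on your machine):

# Public Azure
Connect-AzAccount
New-AzResourceGroupDeployment -ResourceGroupName "MyRG" -TemplateFile ".\service.json"

# Azure Stack: same cmdlets, different environment
Connect-AzAccount -Environment "AzureStackUser"
New-AzResourceGroupDeployment -ResourceGroupName "MyRG" -TemplateFile ".\service.json"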

As hybrid cloud services will be the norm for the next 10 years or more, Azure and Azure Stack together can make your IT environment more successful than it has ever been.

If you want to learn more about Azure Stack, watch our webinar Future-proofing your Datacenter with Microsoft Azure Stack

How about you? Does your organization have interest in Azure Stack? Why or why not? We here on the Altaro Blog are interested! Let us know in the comments section below!

Thanks for reading!

How to Architect and Implement Networks for a Hyper-V Cluster

We recently published a quick tip article recommending the number of networks you should use in a cluster of Hyper-V hosts. I want to expand on that content to make it clear why we’ve changed practice from pre-2012 versions and how we arrive at this guidance. Use the previous post for quick guidance; read this one to learn the supporting concepts. These ideas apply to all versions from 2012 onward.

Why Did We Abandon Practices from 2008 R2?

If you dig on TechNet a bit, you can find an article outlining how to architect networks for a 2008 R2 Hyper-V cluster. While it was perfect for its time, we have new technologies that make its advice obsolete. I have two reasons for bringing it up:

  • Some people still follow those guidelines on new builds; worse, they recommend them to others
  • Even though we no longer follow that implementation practice, we still need to solve the same fundamental problems

We changed practices because we gained new tools to address our cluster networking problems.

What Do Cluster Networks Need to Accomplish for Hyper-V?

Our root problem has never changed: we need to ensure that we always have enough available bandwidth to prevent choking out any of our services or inter-node traffic. In 2008 R2, we could only do that by using multiple physical network adapters and designating traffic types to individual pathways. Note: It was possible to use third-party teaming software to overcome some of that challenge, but that was never supported and introduced other problems.

Starting from our basic problem, we next need to determine how to delineate those various traffic types. That original article did some of that work. We can immediately identify what appears to be four types of traffic:

  • Management (communications with hosts outside the cluster, ex: inbound RDP connections)
  • Standard inter-node cluster communications (ex: heartbeat, cluster resource status updates)
  • Cluster Shared Volume traffic
  • Live Migration

However, it turns out that some clumsy wording caused confusion. Cluster communication traffic and Cluster Shared Volume traffic are exactly the same thing. That reduces our needs to three types of cluster traffic.

What About Virtual Machine Traffic?

You might have noticed that I didn’t say anything about virtual machine traffic above. Same would be true if you were working up a different kind of cluster, such as SQL. I certainly understand the importance of that traffic; in my mind, service traffic prioritizes above all cluster traffic. Understand one thing: service traffic for external clients is not clustered. So, your cluster of Hyper-V nodes might provide high availability services for virtual machine vmabc, but all of vmabc‘s network traffic will only use its owning node’s physical network resources. So, you will not architect any cluster networks to process virtual machine traffic.

As for preventing cluster traffic from squelching virtual machine traffic, we’ll revisit that in an upcoming section.

Fundamental Terminology and Concepts

These discussions often go awry over a misunderstanding of basic concepts.

  • Cluster Name Object: A Microsoft Failover Cluster has its own identity separate from its member nodes known as a Cluster Name Object (CNO). The CNO uses a computer name, appears in Active Directory, has an IP, and registers in DNS. Some clusters, such as SQL, may use multiple CNOs. A CNO must have an IP address on a cluster network.
  • Cluster Network: A Microsoft Failover Cluster scans its nodes and automatically creates “cluster networks” based on the discovered physical and IP topology. Each cluster network constitutes a discrete communications pathway between cluster nodes.
  • Management network: A cluster network that allows inbound traffic meant for the member host nodes and typically used as their default outbound network to communicate with any system outside the cluster (e.g. RDP connections, backup, Windows Update). The management network hosts the cluster’s primary cluster name object. Typically, you would not expose any externally-accessible services via the management network.
  • Access Point (or Cluster Access Point): The IP address that belongs to a CNO.
  • Roles: The name used by Failover Cluster Management for the entities it protects (e.g. a virtual machine, a SQL instance). I generally refer to them as services.
  • Partitioned: A status that the cluster will give to any network on which one or more nodes does not have a presence or cannot be reached.
  • SMB: ALL communications native to failover clustering use Microsoft’s Server Message Block (SMB) protocol. With the introduction of version 3 in Windows Server 2012, that now includes innate multi-channel capabilities (and more!)

Are Microsoft Failover Clusters Active/Active or Active/Passive?

Microsoft Failover Clusters are active/passive. Every node can run services at the same time as the other nodes, but no single service can be hosted by multiple nodes. In this usage, “service” does not mean those items that you see in the Services Control Panel applet. It refers to what the cluster calls “roles” (see above). Only one node will ever host any given role or CNO at any given time.

How Does Microsoft Failover Clustering Identify a Network?

The cluster decides what constitutes a network; your build guides it, but you do not have any direct input. Any time the cluster’s network topology changes, the cluster service re-evaluates.

First, the cluster scans a node for logical network adapters that have IP addresses. That might be a physical network adapter, a team’s logical adapter, or a Hyper-V virtual network adapter assigned to the management operating system. It does not see any virtual NICs assigned to virtual machines.

For each discovered adapter and IP combination on that node, it builds a list of networks from the subnet masks. For instance, if it finds an adapter with an IP of 192.168.10.20 and a subnet mask of 255.255.255.0, then it creates a 192.168.10.0/24 network.

The cluster then continues through all of the other nodes, following the same process.

Be aware that every node does not need to have a presence in a given network in order for failover clustering to identify it; however, the cluster will mark such networks as partitioned.
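You can see exactly which networks the cluster built, and from which subnets, with a quick PowerShell query (a small sketch; it assumes the FailoverClusters module and can run on any cluster node):

# Show the automatically created cluster networks, their subnets, and their current roles
Get-ClusterNetwork | Format-Table Name, State, Address, AddressMask, Role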

What Happens if a Single Adapter has Multiple IPs?

If you assign multiple IPs to the same adapter, one of two things will happen. Which of the two depends on whether or not the secondary IP shares a subnet with the primary.

When an Adapter Hosts Multiple IPs in Different Networks

The cluster identifies networks by adapter first. Therefore, if an adapter has multiple IPs, the cluster will lump them all into the same network. If another adapter on a different host has an IP in one of the networks but not all of the networks, then the cluster will simply use whichever IPs can communicate.

As an example, see the following network:

The second node has two IPs on the same adapter and the cluster has added it to the existing network. You can use this to re-IP a network with minimal disruption.

A natural question: what happens if you spread IPs for the same subnet across different existing networks? I tested it a bit and the cluster allowed it and did not bring the networks down. However, it always had the functional IP pathway to use, so that doesn’t tell us much. Had I removed the functional pathways, then it would have collapsed the remaining IPs into an all-new network and it would have worked just fine. I recommend keeping an eye on your IP scheme and not allowing things like that in the first place.

When an Adapter Hosts Multiple IPs in the Same Network

The cluster will pick a single IP in the same subnet to represent the host in that network.

What if Different Adapters on the Same Host have an IP in the Same Subnet?

The same outcome occurs as if the IPs were on the same adapter: the cluster picks one to represent the cluster and ignores the rest.

The Management Network

All clusters (Hyper-V, SQL, SOFS, etc.) require a network that we commonly dub Management. That network contains the CNO that represents the cluster as a singular system. The management network has little importance for Hyper-V, but external tools connect to the cluster using that network. By necessity, the cluster nodes use IPs on that network for their own communications.

The management network will also carry cluster-specific traffic. More on that later.

Note: Hyper-V Replica uses the management network.

Cluster Communications Networks (Including Cluster Shared Volume Traffic)

A cluster communications network will carry:

  • Cluster heartbeat information. Each node must hear from every other node within a specific amount of time (1 second by default). If it does not hear from enough nodes to maintain quorum, then it will begin failover procedures. Failover is more complicated than that, but beyond the scope of this article.
  • Cluster configuration changes. If any configuration item changes, whether to the cluster’s own configuration or the configuration or status of a protected service, the node that processes the change will immediately transmit to all of the other nodes so that they can update their own local information store.
  • Cluster Shared Volume traffic. When all is well, this network will only carry metadata information. Basically, when anything changes on a CSV that updates its volume information table, that update needs to be duplicated to all of the other nodes. If the change occurs on the owning node, less data needs to be transmitted, but it will never be perfectly quiet. So, this network can be quite chatty, but will typically use very little bandwidth. However, if one or more nodes lose direct connectivity to the storage that hosts a CSV, all of its I/O will route across a cluster network. Network saturation will then depend on the amount of I/O the disconnected node(s) need(s).

Live Migration Networks

That heading is a bit of a misnomer. The cluster does not have its own concept of a Live Migration network per se. Instead, you let the cluster know which networks you will permit to carry Live Migration traffic. You can independently choose whether or not those networks can carry other traffic.

Other Identified Networks

The cluster may identify networks that we don’t want to participate in any kind of cluster communications at all. iSCSI serves as the most common example. We’ll learn how to deal with those.

Architectural Goals

Now we know our traffic types. Next, we need to architect our cluster networks to handle them appropriately. Let’s begin by understanding why you shouldn’t take the easy route of using a singular network. A minimally functional Hyper-V cluster only requires that “management” network. Stopping there leaves you vulnerable to three problems:

  • The cluster will be unable to select another IP network for different communication types. As an example, Live Migration could choke out the normal cluster heartbeat, causing nodes to consider themselves isolated and shut down
  • The cluster and its hosts will be unable to perform efficient traffic balancing, even when you utilize teams
  • IP-based problems in that network (even external to the cluster) could cause a complete cluster failure

Therefore, you want to create at least one other network. In the pre-2012 model, we could designate specific adapters to carry specific traffic types. In the 2012 and later model, we simply create at least one additional network that allows cluster communications but not client access. Some benefits:

  • Clusters of version 2012 or newer will automatically employ SMB multichannel. Inter-node traffic (including Cluster Shared Volume data) will balance itself without further configuration work.
  • The cluster can bypass trouble on one IP network by choosing another; you can help by disabling a network in Failover Cluster Manager
  • Better load balancing across alternative physical pathways

The Second Supporting Network… and Beyond

Creating networks beyond the initial two can add further value:

  • If desired, you can specify networks for Live Migration traffic, and even exclude those from normal cluster communications. Note: For modern deployments, doing so typically yields little value
  • If you host your cluster networks on a team, matching the number of cluster networks to physical adapters allows the teaming and multichannel mechanisms the greatest opportunity to fully balance transmissions. Note: You cannot guarantee a perfectly smooth balance

Architecting Hyper-V Cluster Networks

Now we know what we need and have a nebulous idea of how that might be accomplished. Let’s get into some real implementation. Start off by reviewing your implementation choices. You have three options for hosting a cluster network:

  • One physical adapter or team of adapters per cluster network
  • Convergence of one or more cluster networks onto one or more physical teams or adapters
  • Convergence of one or more cluster networks onto one or more physical teams claimed by a Hyper-V virtual switch

A few pointers to help you decide:

  • For modern deployments, avoid using one adapter or team for a cluster network. It makes poor use of available network resources by forcing an unnecessary segregation of traffic.
  • I personally do not recommend bare teams for Hyper-V cluster communications. You would need to exclude such networks from participating in a Hyper-V switch, which would also force an unnecessary segregation of traffic.
  • The most even and simple distribution involves a singular team with a Hyper-V switch that hosts all cluster network adapters and virtual machine adapters. Start there and break away only as necessary.
  • A single 10 gigabit adapter swamps multiple gigabit adapters. If your hosts have both, don’t even bother with the gigabit.

To simplify your architecture, decide early:

  • How many networks you will use. They do not need to have different functions. For example, the old management/cluster/Live Migration/storage breakdown no longer makes sense. One management and three cluster networks for a four-member team does make sense.
  • The IP structure for each network. For networks that will only carry cluster (including intra-cluster Live Migration) communication, the chosen subnet(s) do not need to exist in your current infrastructure. As long as each adapter in a cluster network can reach all of the others at layer 2 (Ethernet), then you can invent any IP network that you want.

I recommend that you start off expecting to use a completely converged design that uses all physical network adapters in a single team. Create Hyper-V network adapters for each unique cluster network. Stop there, and make no changes unless you detect a problem.
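As a rough sketch of that converged starting point, the PowerShell below teams the physical adapters, creates the Hyper-V switch, and adds one management-OS virtual adapter per cluster network (the adapter names, team name, and all IPs are hypothetical; it assumes LBFO teaming on 2012 R2 or 2016):

# Team all physical adapters and put a Hyper-V switch on top
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name "vSwitch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $false

# One management-OS virtual adapter per cluster network
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "vSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster1" -SwitchName "vSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster2" -SwitchName "vSwitch"

# IPs: a real routable network for Management, invented non-routed ranges for the cluster networks
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 192.168.10.21 -PrefixLength 24 -DefaultGateway 192.168.10.1
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster1)" -IPAddress 192.168.77.21 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster2)" -IPAddress 192.168.78.21 -PrefixLength 24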

Comparing the Old Way to the New Way (Gigabit)

Let’s start with a build that would have been common in 2010 and walk through our options up to something more modern. I will only use gigabit designs in this section; skip ahead for 10 gigabit.

In the beginning, we couldn’t use teaming. So, we used a lot of gigabit adapters:

There would be some variations of this. For instance, I would have added another adapter so that I could use MPIO with two iSCSI networks. Some people used Fiber Channel and would not have iSCSI at all.

Important Note: The “VMs” that you see there means that I have a virtual switch on that adapter and the virtual machines use it. It does not mean that I have created a VM cluster network. There is no such thing as a VM cluster network. The virtual machines are unaware of the cluster and they will not talk to it (if they do, they’ll use the Management access point like every other non-cluster system).

Then, 2012 introduced teaming. We could then do all sorts of fun things with convergence. My very least favorite:

This build takes teams to an excess. Worse, the management, cluster, and Live Migration teams will be idle almost all the time, meaning that 60% of this host’s networking capacity will be generally unavailable.

Let’s look at something a bit more common. I don’t like this one, but I’m not revolted by it either:

A lot of people like that design because, so they say, it protects the management adapter from problems that affect the other roles. I cannot figure out how they perform that calculus. Teaming addresses any probable failure scenarios. For anything else, I would want the entire host to fail out of the cluster. In this build, a failure that brought the team down but not the management adapter would cause its hosted VMs to become inaccessible because the node would remain in the cluster. That’s because the management adapter would still carry cluster heartbeat information.

My preferred design follows:

Now we are architected against almost all types of failure. In a “real-world” build, I would still have at least two iSCSI NICs using MPIO.

What is the Optimal Gigabit Adapter Count?

Because we had one adapter per role in 2008 R2, we often continue using the same adapter count in our 2012+ builds. I don’t feel that’s necessary for most builds. I am inclined to use two or three adapters in data teams and two adapters for iSCSI. For anything past that, you’ll need to have collected some metrics to justify the additional bandwidth needs.

10 Gigabit Cluster Network Design

10 gigabit changes all of the equations. In reasonable load conditions, a single 10 gigabit adapter moves data more than 10 times faster than a single gigabit adapter. When using 10 GbE, you need to change your approaches accordingly. First, if you have both 10GbE and gigabit, just ignore the gigabit. It is not worth your time. If you really want to use it, then I would consider using it for iSCSI connections to non-SSD systems. Most installations relying on iSCSI-connected spinning disks cannot sustain even 2 Gbps, so gigabit adapters would suffice.

Logical Adapter Counts for Converged Cluster Networking

I didn’t include the Hyper-V virtual switch in any of the above diagrams, mostly because it would have made the diagrams more confusing. However, I would use a team hosting a Hyper-V virtual switch to carry all of the logical adapters necessary. For a non-Hyper-V cluster, I would create a logical team adapter for each role. Remember that on a logical team, you can only have a single logical adapter per VLAN. The Hyper-V virtual switch has no such restrictions. Also remember that you should not use multiple logical team adapters on any team that hosts a Hyper-V virtual switch. Some of the behavior is undefined and your build might not be supported.

I would always use these logical/virtual adapter counts:

  • One management adapter
  • A minimum of one cluster communications adapter up to n-1, where n is the number of physical adapters in the team. You can subtract one because the management adapter acts as a cluster adapter as well

In a gigabit environment, I would add at least one logical adapter for Live Migration. That’s optional because, by default, all cluster-enabled networks will also carry Live Migration traffic.

In a 10 GbE environment, I would not add designated Live Migration networks. It’s just logical overhead at that point.

In a 10 GbE environment, I would probably not set aside physical adapters for storage traffic. At those speeds, the differences in offloading technologies don’t mean that much.

Architecting IP Addresses

Congratulations! You’ve done the hard work! Now you just need to come up with an IP scheme. Remember that the cluster builds networks based on the IPs that it discovers.

Every network needs one IP address for each node. Any network that contains an access point will need an additional IP for the CNO. For Hyper-V clusters, you only need a management access point. The other networks don’t need a CNO.

Only one network really matters: management. Your physical nodes must use that to communicate with the “real” network beyond. Choose a set of IPs available on your “real” network.

For all the rest, the member IPs only need to be able to reach each other over layer 2 connections. If you have an environment with no VLANs, then just make sure that you pick IPs in networks that don’t otherwise exist. For instance, you could use 192.168.77.0/24 for something, as long as that’s not a “real” range on your network. Any cluster network without a CNO does not need to have a gateway address, so it doesn’t matter that those networks won’t be routable. It’s preferred, in fact.

Implementing Hyper-V Cluster Networks

Once you have your architecture in place, you only have a little work to do. Remember that the cluster will automatically build networks based on the subnets that it discovers. You only need to assign names and set them according to the type of traffic that you want them to carry. You can choose:

  • Allow cluster communication (intra-node heartbeat, configuration updates, and Cluster Shared Volume traffic)
  • Allow client connectivity to cluster resources and cluster communications (you cannot choose client connectivity without cluster communication)
  • Prevent participation in cluster communications (often used for iSCSI and sometimes connections to external SMB storage)

As much as I like PowerShell for most things, Failover Cluster Manager makes this all very easy. Access the Networks tree of your cluster:

I’ve already renamed mine in accordance with their intended roles. A new build will have “Cluster Network”, “Cluster Network 1”, etc. Double-click on one to see which IP range(s) it assigned to that network:

Work your way through each network, setting its name and the type of traffic you will allow (a PowerShell equivalent follows the list). Your choices:

  • Allow cluster network communication on this network AND Allow clients to connect through this network: use these two options together for the management network. If you’re building a non-Hyper-V cluster that needs access points on non-management networks, use these options for those as well. Important: The adapters in these networks SHOULD register in DNS.
  • Allow cluster network communication on this network ONLY (do not check Allow clients to connect through this network): use for any network that you wish to carry cluster communications (remember that includes CSV traffic). Optionally use for networks that will carry Live Migration traffic (I recommend that). Do not use for iSCSI networks. Important: The adapters in these networks SHOULD NOT register in DNS.
  • Do not allow cluster network communication on this network: Use for storage networks, especially iSCSI. I also use this setting for adapters that will use SMB to connect to a storage server running SMB version 3.02 in order to run my virtual machines. You might want to use it for Live Migration networks if you wish to segregate Live Migration from cluster traffic (I do not do or recommend that).
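
If you would rather stay in PowerShell after all, the same choices map onto each cluster network’s Role property (0 = no cluster communication, 1 = cluster only, 3 = cluster and client). A quick sketch, assuming the default names that a fresh build produces and the example role names used above:

    # Rename the auto-generated networks, then set the traffic each one carries
    (Get-ClusterNetwork -Name 'Cluster Network').Name   = 'Management'
    (Get-ClusterNetwork -Name 'Management').Role        = 3   # cluster and client

    (Get-ClusterNetwork -Name 'Cluster Network 1').Name = 'Cluster'
    (Get-ClusterNetwork -Name 'Cluster').Role           = 1   # cluster only

    (Get-ClusterNetwork -Name 'Cluster Network 2').Name = 'iSCSI'
    (Get-ClusterNetwork -Name 'iSCSI').Role             = 0   # no cluster communication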

Once done, you can configure Live Migration traffic. Right-click on the Networks node and click Live Migration Settings:

Check a network’s box to enable it to carry Live Migration traffic. Use the Up and Down buttons to prioritize.

What About Traffic Prioritization?

In 2008 R2, we had some fairly arcane settings for cluster network metrics. You could use those to adjust which networks the cluster would choose as alternatives when a primary network was inaccessible. We don’t use those anymore because SMB multichannel just figures things out. However, be aware that the cluster will deliberately choose Cluster Only networks over Cluster and Client networks for inter-node communications.

What About Hyper-V QoS?

When 2012 first debuted, it brought Hyper-V networking QoS along with it. That was some really hot new tech, and lots of us dove right in and lost a lot of sleep over finding the “best” configuration. And then, most of us realized that our clusters were doing a fantastic job of balancing things out all on their own. So, I would recommend that you avoid tinkering with Hyper-V QoS unless you have tried going without and had problems. Before you change anything, determine exactly which traffic needs to be throttled or boosted. Do not simply start flipping switches, because the rest of us already tried that and didn’t get results. If you do need to change QoS, start with this TechNet article.
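
For reference, if you do establish that you need it, weight-based QoS looks roughly like the sketch below. This is illustrative only: it assumes the virtual switch was created with -MinimumBandwidthMode Weight, and the switch and adapter names are the hypothetical ones used earlier:

    # Reserve relative shares of bandwidth; weights are proportions, not hard caps
    Set-VMSwitch -Name 'ConvergedSwitch' -DefaultFlowMinimumBandwidthWeight 50
    Set-VMNetworkAdapter -ManagementOS -Name 'Cluster' -MinimumBandwidthWeight 20
    Set-VMNetworkAdapter -ManagementOS -Name 'Management' -MinimumBandwidthWeight 10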

Your thoughts?

Does your preferred network management system differ from mine? Have you decided to give my arrangement a try? How did you get on? Let me know in the comments below; I really enjoy hearing from you!

Introduction to Azure Cloud Shell: Manage Azure from a Browser

Are you finding the GUI of Azure Portal difficult to work with?

You’re not alone, and it’s very easy to get lost. There are so many changes and updates made every day, and the Azure overview blades can be pretty clunky to traverse. However, with Azure Cloud Shell, we can use PowerShell or Bash to manage Azure resources instead of having to click around in the GUI.

So what is Azure Cloud Shell? It is a shell that runs in, and is accessed through, a web browser. It automatically authenticates with your Azure sign-on credentials and allows you to manage all of the Azure resources that your account has access to. This eliminates the need to load Azure modules on workstations. So, in situations where developers or IT Pros require shell access to their Azure resources, Azure Cloud Shell can be a very useful solution, as they won’t have to remote into “management” nodes that have the Azure PowerShell modules installed on them.


How Azure Cloud Shell Works

As of right now, Azure Cloud Shell gives users two different environments to use. One is a Bash environment, which is essentially a terminal connection to a Linux VM that Azure spins up for you. The second is a PowerShell environment, which runs Windows PowerShell on a Windows Server Core VM. The VMs themselves are free of charge; what you pay for is storage: you will need some storage provisioned on your Azure account in order to create the $home directory. This acts as the persistent storage for the console session and allows you to upload scripts to run in the console.

Getting Started

To get started using Azure Cloud Shell, go to shell.azure.com. You will be prompted to sign in with your Azure account credentials:


Now we have some options. We can select which environment we prefer to run in. We can run in a Bash shell or we can use PowerShell. Pick whichever one you’re more comfortable with. For this example, I’ve selected PowerShell:

Next, we get a prompt for storage, since we haven’t configured the shell settings with this account yet. Simply select the “Create Now” button to go ahead and have Azure create a new resource group, or select “Show Advanced Settings” to configure those settings to your preference:

Once the storage is provisioned, we will wait a little bit for the console to finish loading, and then the shell should be ready for us to use!

In the upper left corner, we have all of the various controls for the console. We can reset the console, start a new session, switch to Bash, and upload files to our cloud drive:

As an example, I uploaded an activate.bat script file to my cloud drive. In order to access it, we simply reference $home and specify our CloudDrive:

Now I can see my script:

This will allow you to deploy your custom PowerShell scripts and modules in Azure from any device (assuming you have access to a web browser, of course). Pretty neat!
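
If you prefer typing the steps out, a minimal sketch of the equivalent commands looks like this; it assumes activate.bat has already been uploaded through the toolbar’s upload button:

    # Move into the persistent cloud drive and confirm the upload arrived
    Set-Location "$home\CloudDrive"
    Get-ChildItem

    # Run the uploaded script (the Windows-based PowerShell experience can run .bat files directly)
    .\activate.bat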

Upcoming Changes and Things to Note

  • On May 21st, Microsoft announced that it will be moving to a Linux platform for both the PowerShell and Bash experiences. How is this possible? Essentially, a Linux container will host the shell, with PowerShell Core 6 as the default PowerShell experience. Microsoft claims that startup times will be much faster than in previous versions because of the Linux container. To switch from PowerShell to Bash in the console, simply type “bash”; to go back to PowerShell Core, type “pwsh”.
  • Microsoft is planning on having “persistent settings” for Git and SSH tools so that the settings for these tools are saved to the CloudDrive and users won’t have to hassle with them all the time.
  • There is some ongoing pain with modules currently. Microsoft is still working on porting modules to .NET Core (for use with PowerShell Core), and there will be a transition period while this happens; the most commonly used modules are being ported first. In the meantime, there is one workaround that many people seem to forget: implicit remoting. This is the process of taking a module that is already installed on another endpoint and importing it into your local PowerShell session, allowing you to call its commands locally while they execute remotely on the node where the module is installed (see the sketch after this list). It can tide you over until more modules are converted to .NET Core.
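
A minimal implicit remoting sketch; the computer name and module are placeholders for an endpoint that already has the module installed and that your session can reach over PowerShell remoting:

    # Build a session to the machine that has the module, then import its commands locally
    $session = New-PSSession -ComputerName 'MgmtServer01' -Credential (Get-Credential)
    Import-PSSession -Session $session -Module ActiveDirectory -Prefix Remote

    # The generated proxy commands run on MgmtServer01 but are called as if local
    Get-RemoteADUser -Filter * | Select-Object -First 5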

Want to Learn More About Microsoft Cloud Services?

The development pace of Azure is one of the most aggressive in the market today, and as you can see, Azure Cloud Shell is being updated and improved constantly. In the near future, it will most likely become one of the more commonly used methods for interacting with Azure resources. It gives Azure customers a seamless way to manage and automate their Azure resources without having to authenticate over and over again or install extra snap-ins and modules, and it will continue to shape the way we do IT.

What are your thoughts on Azure Cloud Shell? Have you used it yet? What were your initial impressions? Let us know in the comments section below!

Interested in more Azure goodness? Wondering how to get started with the cloud and move some existing resources into Microsoft Azure? We have a panel-style webinar, Journey to the Clouds, coming up in June that addresses those questions. Join Andy Syrewicze, Didier Van Hoye, and Thomas Maurer for a crash course on how you can plan your journey effectively and smoothly using the exciting cloud technologies coming out of Microsoft, including:

  • Windows Server 2019 and the Software-Defined Datacenter
  • New Management Experiences for Infrastructure with Windows Admin Center
  • Hosting an Enterprise Grade Cloud in your datacenter with Azure Stack
  • Taking your first steps into the public cloud with Azure IaaS
