Lessons learned from PowerShell Summit 2019

Most Windows administrators have at least dabbled with PowerShell to get started on their automation journey, but for more advanced practices, it helps to attend a conference, such as the PowerShell Summit.

For the second straight year, I attended the PowerShell + DevOps Global Summit in Bellevue, Wash., earlier this year. As an avid PowerShell user and evangelist, I greatly enjoy being around fellow community members to talk shop about how we use PowerShell in our jobs as well as catch the deep dive sessions.

This year was a bit different for me as I also presented a session, “Completely Automate Managing Windows Software … Forever,” to explain how I use Chocolatey with PowerShell to automate the deployment of third-party software in my full-time position.

There’s a lot of value in the sessions at PowerShell Summit. Administrators and developers who rely on PowerShell get a chance to learn something new and build their skill set. The videos from the sessions are on YouTube, but if you don’t attend in person you will miss out on the impromptu hallway discussions. These gatherings can be a great way to meet a lot of veteran IT professionals, community leads and even PowerShell team members. Jeffrey Snover, the inventor of PowerShell, was also in attendance.

In this article, I will cover my experiences at this year’s PowerShell Summit and some of the lessons learned during the week.

AWS Serverless Computing

Serverless computing is a very hot topic for many organizations that want to cut costs and reduce the work it takes to support a Windows deployment. With serverless computing, there is no need to manage a Windows Server machine and all its requisite setup and maintenance work. You can use an API to run PowerShell, and it will scale automatically.

Andrew Pearce, a senior systems development engineer at AWS, presented a session entitled “Unleash Your PowerShell With AWS Lambda and Serverless Computing.” Pearce’s talk covered how to use Amazon’s event-driven computing service with PowerShell Core.

I have not tried any sort of serverless computing, but it didn’t take much to see its potential and advantages during the demonstration. Pearce explained that a PowerShell function can run from an event, such as when an image is placed in an AWS Simple Storage Service bucket, to convert the image to multiple resolutions depending on the organization’s need. Another possibility is to run a PowerShell function in response to an IoT event, such as someone ringing a doorbell.
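As a rough illustration of the pattern Pearce described, here is a minimal PowerShell Lambda handler sketch. It assumes the AWSLambdaPSCore tooling (which can scaffold this shape with New-AWSPowerShellLambda -Template S3Event) and an S3 trigger already configured; the image-processing step is only indicated in comments. In a PowerShell Lambda, the incoming event is exposed through the automatic $LambdaInput variable.

```powershell
# Sketch of an S3-triggered PowerShell Lambda function handler.
# $LambdaInput holds the deserialized S3 event that invoked this function.
foreach ($record in $LambdaInput.Records) {
    $bucket = $record.s3.bucket.name
    $key    = $record.s3.object.key
    Write-Host "New object uploaded: s3://$bucket/$key"

    # Here you would download the image, generate the resolutions
    # your organization needs and write the results to another bucket.
}
```

Publish-AWSPowerShellLambda uploads the packaged function to AWS; the S3 event notification that actually triggers it is configured separately on the bucket.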

Simple Hierarchy in PowerShell

Simple Hierarchy in PowerShell (SHiPS) is an area of PowerShell that looks interesting, but one I have not had much experience with. The concept behind SHiPS is similar to traversing a file system in PowerShell; with SHiPS, you can build a provider that presents its data as a hierarchical file system.

Providers have been a part of PowerShell since version 1.0 and give access to data and components not easily reached from the command line, such as the Windows certificate store. One common example is the PowerCLI datastore provider, which lets users access a datastore in vSphere.

You can see what providers you have on a system with the Get-PSProvider command.

PowerShell providers
The Get-PSProvider cmdlet lists the providers available on the system.
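For instance, listing the providers and then browsing the certificate store through the Certificate provider looks like this (the certificates returned will vary by machine):

```powershell
# List registered providers and the drives each one exposes
Get-PSProvider | Select-Object Name, Drives

# Browse the local machine's personal certificate store like a folder
Get-ChildItem -Path Cert:\LocalMachine\My | Select-Object Thumbprint, Subject
```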

Another familiar use of a PowerShell provider is the Windows registry, which PowerShell can navigate like a traditional file system.
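A quick example: the Registry provider exposes the HKLM: and HKCU: drives, so the registry can be walked with the same cmdlets used for files.

```powershell
# Navigate the registry as if it were a file system
Set-Location -Path HKLM:\SOFTWARE\Microsoft\Windows
Get-ChildItem | Select-Object -First 5    # list the first few subkeys

# Read values the way you would read item properties on a file
Get-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Run

# Return to the file system
Set-Location -Path C:\
```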

Glenn Sarti of Puppet gave a session entitled “How to Become a SHiPS Wright – Building With SHiPS.” Attempting to write your own provider from scratch is a complex undertaking that ordinarily requires advanced programming skill and familiarity with the PowerShell software development kit. SHiPS attempts to bypass this complexity and make provider development easier by writing provider code in PowerShell. Sarti explained that SHiPS reduces the amount of code needed to write a module from thousands of lines to fewer than 20 in some instances.
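To give a feel for the model, here is a minimal sketch assuming the SHiPS module from the PowerShell Gallery; the module, class and node names are hypothetical. A SHiPS provider is just PowerShell classes that inherit from SHiPSDirectory or SHiPSLeaf and override GetChildItem():

```powershell
# Contents of a hypothetical module file, SummitData.psm1
using namespace Microsoft.PowerShell.SHiPS

# A directory node; GetChildItem() is called whenever someone
# runs Get-ChildItem against this part of the hierarchy
class Summit : SHiPSDirectory {
    Summit([string]$name) : base($name) {}

    [object[]] GetChildItem() {
        return @([Session]::new('Keynote'), [Session]::new('SHiPS-101'))
    }
}

# A leaf node: an item with no children
class Session : SHiPSLeaf {
    Session([string]$name) : base($name) {}
}
```

The hierarchy is then mounted as a drive using the 'ModuleName#RootClass' form, e.g. Import-Module SHiPS, then New-PSDrive -Name Summit -PSProvider SHiPS -Root 'SummitData#Summit', after which Get-ChildItem Summit:\ walks it like any file system.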

In his session, Sarti showed how to use SHiPS to create an example provider using the PowerShell Summit agenda and speakers as data. Watching this session sparked an idea: create a provider for a Chocolatey repository as if it were a file system.

PowerShell Remoting Internals

In this deep dive session, Paul Higinbotham, a member of the PowerShell team, covered how PowerShell remoting works. Higinbotham explained the five different transports, or protocols, PowerShell can use to run remote commands on another system.

In Windows, the most popular is WinRM, since PowerShell is most often used on Windows. For PowerShell Core, OpenSSH is the best option for cross-platform use, since it runs on both Windows and Linux. The advantage here is you can run PowerShell Core scripts from Windows to Linux and vice versa. Since I work in a mostly Linux environment, using Secure Shell (SSH) and PowerShell together makes a lot of sense.

This session taught me about the existence of interprocess communication remoting in PowerShell Core. This is accomplished with the Enter-PSHostProcess command, and its main perk is the ability to debug scripts on a remote system from your local machine.
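The flow Higinbotham described looks roughly like this; the process ID and runspace ID below are placeholders you would take from your own session's output:

```powershell
# Find PowerShell host processes you can attach to
Get-PSHostProcessInfo

# Attach to one of them over the IPC transport (ID is a placeholder)
Enter-PSHostProcess -Id 4816

# Inside that process: list its runspaces and break into one to debug
Get-Runspace
Debug-Runspace -Id 1

# Detach when finished
Exit-PSHostProcess
```

Run inside a PSSession to a remote machine, this lets you step through a script running in another process on that system from your local console.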

Toward the end of the presentation, Higinbotham shared a great diagram of the remoting architecture and went over what each component does during a remoting session.

Go to Original Article
Author:

How to automate patch management in Windows

Patch Tuesday comes every month like clockwork for IT administrators. For routine jobs such as patch management in Windows, it’s essential to use automation to make this chore more tolerable.

There are many products to help you deploy Windows patches to your systems, but they’re usually expensive. If you can’t afford these offerings, another option is to roll your own automated Windows patching routine. Using PowerShell, IT teams can test, deploy and verify Windows updates across hundreds of machines using nothing but some PowerShell kung fu and some prebuilt modules.

Prerequisites for automated patch management in Windows

To follow along, you should have the following prerequisites set up:

  • Windows PowerShell 5.1 on a client;
  • PowerShell Remoting available on the remote systems to patch;
  • logged in or have access to an account with local administrator permissions on the remote systems;
  • an Active Directory environment;
  • the Active Directory PowerShell module on your client; and
  • a Windows Server Update Services (WSUS) server installed and set up to manage your systems.

Set up a test environment

As most administrators know, you never push out patches directly to your production systems, which means you need to set up a test environment. You should configure this with a sampling of the operating systems and configurations of all systems that receive patches.

To determine what you have in your inventory, use the following script. It queries all Active Directory computers in the domain and groups them by the operating system.

# Number of test machines to sample from each OS group
$computerCount = 2

# Calculated properties for the grouped results
$properties = @(
    @{Name='OperatingSystem';Expression={$_.Name}},
    @{Name='TotalCount';Expression={$_.Group.Count}},
    @{Name='TestComputers';Expression={$_.Group | Select-Object -ExpandProperty Name -First $computerCount}}
)

# Group all AD computers by operating system and pick the test candidates
$testGroups = Get-ADComputer -Filter * -Properties OperatingSystem |
    Group-Object -Property OperatingSystem |
    Select-Object -Property $properties
$testGroups

When the script runs, it groups the output by the type of machines and how many there are.

OperatingSystem                TotalCount TestComputers
---------------                ---------- -------------
Windows Server 2016 Datacenter          3 {SRDC01, SCRIPTRUNNER01}

Now that you know what operating systems you have, you can either convert the physical machines to virtual ones or perhaps build new virtual machines in your hypervisor of choice. You can do that with PowerShell, but that is outside of the scope of this article. This tutorial will continue on the assumption you executed the conversion and are ready to proceed.

Deploying Windows updates

Once you’ve got your test VMs set up, check to see if there are new patches available. You will need to use the WSUS server to find this information.
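One way to pull that list is the UpdateServices module that ships with the WSUS role; the server name and port below are assumptions for illustration.

```powershell
# Connect to the WSUS server (the name and port are placeholders)
$wsus = Get-WsusServer -Name 'wsus01.corp.local' -PortNumber 8530

# List approved updates that WSUS reports as failed or still needed
Get-WsusUpdate -UpdateServer $wsus -Approval Approved -Status FailedOrNeeded |
    Select-Object -First 10
```

The KB IDs in the returned update titles are what you will feed to the PSWindowsUpdate module in the next step.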

When you’ve discovered the Microsoft Knowledge Base (KB) IDs of all patches to test, you can deploy these patches using the PSWindowsUpdate module. To download and install, use this command:

Install-Module PSWindowsUpdate

Next, deploy the patches, but first, you’ll need to ensure you’ve got the appropriate firewall port exceptions for the Windows Firewall enabled. Here’s a quick PowerShell command to enable it on remote systems:

New-NetFirewallRule -DisplayName "Allow PSWindowsUpdate" -Direction Inbound -Program "%SystemRoot%\System32\dllhost.exe" -RemoteAddress Any -Action Allow -LocalPort 'RPC' -Protocol TCP

Next, run a quick test with a simple command, such as Get-WUHistory, to see whether it returns an error or a patch history. If it’s the latter, you can proceed.

Now it’s time to deploy the Windows patches to the test groups. For this tutorial, deploy KB4091664. Start by copying the PSWindowsUpdate module to the remote computers, and then initiate the install. Also, schedule a reboot during the maintenance window; in this instance, that’s two hours from now.

foreach ($computer in $testGroups.TestComputers) {
    Copy-Item -Path "$Env:PROGRAMFILES\WindowsPowerShell\Modules\PSWindowsUpdate" -Destination "\\$computer\c$\Program Files\WindowsPowerShell\Modules" -Recurse
    Install-WindowsUpdate -ComputerName $computer -KBArticleIds 'KB4091664' -Schedule (Get-Date).AddHours(2)
}

X ComputerName Result     KB          Size Title
- ------------ ------     --          ---- -----
1 scriptrun... Accepted   KB4091664    1MB 2018-09 Update for Windows Server 2016 for x64-based Systems (KB4091664)

This script starts the patch installation on each computer. To monitor the progress, you can use the Get-WUHistory command.

Get-WUHistory -ComputerName scriptrunner01 -Last 1
ComputerName Operationname  Result     Date                Title
------------ -------------  ------     ----                -----
scriptrun... Installation   InProgress 4/4/2019 9:31:21 PM 2018-09 Update for Windows Server 2016 for x64-based Systems (KB4091664)

Dive deeper into the PSWindowsUpdate module

This article just covers the basics of rolling out an automated way to handle patch management in Windows with PowerShell. PSWindowsUpdate is a great time-saver with extensive functionality. It’s worth investigating the help in this PowerShell module to see how you can customize it based on your needs.


How Windows Admin Center stacks up to other management tools

Microsoft took a lot of administrators by surprise when it released Windows Admin Center, a new GUI-based management tool, last year. But is it mature enough to replace third-party offerings that handle some of the same tasks?

Windows Admin Center is a web-based management environment for Windows Server 2012 and up that exposes roughly 50% of the capabilities of the traditional Microsoft Management Console-based GUI environment. Most common services — DNS, Dynamic Host Configuration Protocol, Event Viewer, file sharing and even Hyper-V — are available within the Windows Admin Center, which can be installed on a workstation with a self-hosted web server built in, or on a traditional Windows Server machine using IIS.

It also covers several Azure management scenarios, including managing Azure virtual machines when you link your cloud subscription to the Windows Admin Center instance you use.

Windows Admin Center dashboard
Among its many features, the Windows Admin Center dashboard provides an overview of the selected Windows machine, including the current state of the CPU and memory.

There are a number of draws for Windows Admin Center. It’s free and designed to be developed out of band, or shipped as a web download, rather than included in the Windows Server product. So, Microsoft can update it more frequently than the core OS.

Microsoft said, over time, most of the Windows administrative GUI tools will move to Windows Admin Center. It makes sense to spin up an instance of it on a management workstation, an old server or even a lightweight VM on your virtualization infrastructure. Windows Admin Center is a tool you will need to get familiar with even if you have a larger, third-party OS management tool.

How does Windows Admin Center compare with similar products on the market? Here’s a look at the pros and cons of each.

Goverlan Reach

Goverlan Reach is a remote systems management and administration suite that can administer virtually any aspect of a Windows system configurable via Windows Management Instrumentation. Goverlan is a fat client, a normal Windows application rather than a web app, so it runs on a regular workstation. Goverlan provides one-stop shopping for Windows administration in a reasonably well-laid-out interface. There is no Azure support.

For the extra money, you get a decent engine that allows you to automate certain IT processes and create a runbook of typical actions you would take on a system. You also get built-in session capturing and control without needing to connect to each desktop separately, as well as more visibility into software updates and patch management for not only Windows, but also major third-party software such as Chrome, Firefox and Adobe Reader.

Goverlan Reach has three editions. The Standard version is $29 per month and offers remote control functions. The Professional version costs $69 per month and includes Active Directory management and software deployment. The Enterprise version with all the advanced features costs $129 per month and includes compliance and more advanced automation abilities.

Editor’s note: Goverlan paid the writer to develop content marketing materials for its product in 2012 and 2013, but there is no ongoing relationship.

PRTG Network Monitor

Paessler’s PRTG Network Monitor tracks the uptime, health, disk space, and performance of servers and devices on your network, so you can proactively respond to issues and prevent downtime.

Managing Windows Server 2019 with Windows Admin Center.

PRTG monitors mail servers, web servers, database servers, file servers and others. It has sensors built in for the attendant protocols of each kind of server. You can build your own sensors to monitor key aspects of homegrown applications. PRTG logs all this monitoring information for analysis to build a baseline performance profile to develop ways to improve stability and performance on your network.
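As an example of the custom-sensor approach, a PRTG EXE/Script sensor can be a small PowerShell script; standard EXE/Script sensors expect a single "value:message" line on standard output, and the exit code signals the sensor state. The queue-depth metric and file path here are invented for illustration.

```powershell
# PRTG custom EXE/Script sensor sketch: report a homegrown app's queue depth
# (the path is a placeholder; use however your application exposes the metric)
$queueDepth = (Get-Content -Path 'C:\MyApp\queue-depth.txt' -ErrorAction SilentlyContinue |
    Select-Object -First 1) -as [int]

if ($null -ne $queueDepth) {
    # PRTG parses "value:message" from stdout; exit code 0 = OK
    Write-Output "$($queueDepth):queue depth read successfully"
    exit 0
}
else {
    # A nonzero exit code tells PRTG the sensor is in an error state
    Write-Output "0:could not read queue depth"
    exit 2
}
```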

When looking at how PRTG stacks up against Windows Admin Center, it’s only really comparable from a monitoring perspective. The Network Monitor product offers little from a configuration standpoint. While you could install the alerting software and associated agents on Azure virtual machines in the cloud, there’s no real native cloud support; it treats the cloud virtual machines simply as another endpoint. 

It’s also a paid-for product, starting at $1,600 for 500 sensors and going all the way up to $60,000 for unlimited sensors. It does offer value and is perhaps the best monitoring suite out there from an ease-of-use standpoint, but most shops would most likely choose it in addition to Windows Admin Center, not in lieu of it.

SolarWinds

SolarWinds has quite a few products under its systems management umbrella, including server and application monitoring; virtualization administration; storage resource monitoring; configuration and performance monitoring; log analysis; access right auditing; and up/down monitoring for networks, servers and applications. While there is some ability to administer various portions of Windows, with the Access Rights Manager or Virtualization Manager, these SolarWinds products are very heavily tilted toward monitoring, not administration.

The SolarWinds modules all start with list prices anywhere from $1,500 to $3,500, so you quickly start incurring a substantial expense to purchase the modules needed to administer all the different detailed areas of your Windows infrastructure. While these products are surely more full-featured than Windows Admin Center, the delta might not be worth $3,000 to your organization. For my money, PRTG becomes a better value for the money if monitoring is your goal.

Nagios

Nagios has a suite of tools to monitor infrastructure, from individual systems to protocols and applications, along with database monitoring, log monitoring and, perhaps important in today’s cloud world, bandwidth monitoring.

Nagios has long been available as an open source tool that’s very powerful, and the free version, Nagios Core, certainly has a place in any moderately complex infrastructure. The commercial versions of Nagios XI — $1,995 for standard and $3,495 for enterprise — have lots of shine and polish, but lack any sort of interface to administer systems.

The price is right, but its features still lag behind

There is clearly a place for Windows Admin Center in every Windows installation, given it is free, very functional (although there are some bugs that will get worked out over time) and gives you a vendor-supported way of both monitoring and administering Windows.

However, Windows Admin Center lacks quite a bit of monitoring prowess and also doesn’t address all potential areas of Windows administration. There is no clear-cut winner out of all the profiled tools in this article. If anything, Windows Admin Center should be thought of as an additional tool to use in conjunction with some of these other products.


Check Office 365 usage reports for user adoption insights

Administrators who move from Exchange Server to Exchange Online — or the full Office 365 suite — must learn new tools to manage these cloud services.

Many Exchange administrators spend a good amount of time managing and maintaining the messaging system, but very few monitor the overall usage of email and its components. They may inadvertently ignore low adoption rates and other issues experienced by users. Microsoft helps administrators generate a multitude of Office 365 usage reports to review and share with their teams to find ways to improve usage. These reports give the IT team the information needed to answer managers’ and business leaders’ questions about security and end-user adoption.

For those who might have opted to move to Exchange Online or the Office 365 suite, one of the benefits of having their email stored in Microsoft’s cloud is the other services included with a subscription. A company that switches its email to Office 365 can also benefit from other cloud services, such as OneDrive, Skype for Business, Teams, SharePoint and Forms. One specific perk related to Exchange Online is administrators get visibility with usage reporting that was almost nonexistent in Exchange Server.

Office 365 usage reports bring clarity

When an organization moves its mail to an online host, many managers and leaders will want to know if their teams have adopted the new services and at what capacity. Usage reporting helps an organization determine the value of the switch to the cloud and provides insights such as:

  • what users need help with if they have a low adoption rate compared to other services;
  • the volume of email and interactions by users to see who are the biggest consumers of those workloads;
  • statistics on compliance to confirm users observe company policies; and
  • statistics on app usability across different platforms.

Interactive Microsoft Office 365 usage analytics in Power BI

This report is available via a content pack in Power BI, which is Microsoft’s data visualization and reporting platform. This feature connects directly to the cloud offering’s back end, extracts the pertinent data and generates Office 365 usage reports in the form of interactive dashboards that are separated by workload, including Exchange Online, SharePoint Online, Yammer and OneDrive.

Administrators do not need to be data experts or have extensive experience with report generation. The content pack simplifies the data collection work by offering several report templates, including a product usage report for a detailed look at the user activity in each service and a communication report, which pinpoints a user’s favorite service to stay in contact with.

Administrators can access the reports from mobile devices or the Power BI portal to share them with business leaders to highlight the overall adoption rate of the different Office 365 workloads.

Email activity reveals user patterns

Another standard report accessible from the Office 365 admin portal is the email activity report.

Exchange email activity report
Administrators can track the trends related to email on Exchange Online.

This page resides under the Reports section and shows the email volume summaries of each Exchange Online licensed user over several periods: seven, 30, 90 or 180 days. These activity reports show the amount of email read, sent and received over a particular period. If a user is receiving a lot of email but is not responding or reading much of it, it can indicate a need for additional training.

This report includes the option to export the results in a comma-separated file to import into a spreadsheet for further inspection.

MyAnalytics reporting for individual email behaviors

For users who want an analysis of their work habits related to email and time management, Microsoft offers a feature called MyAnalytics, which reports on their patterns with Exchange Online. MyAnalytics measures time spent checking email throughout the day and how much focus time a user has. MyAnalytics offers suggestions on ways employees can be more efficient and productive with their time on the job to avoid burnout.

MyAnalytics reports are accessible from Office 365 Delve or can be delivered via email weekly.

Security & Compliance Center reports

One area where an Exchange administrator can never rest is security, considering most data breaches occur through a phishing attempt.

Exchange Online admins can see the threats targeting their users through the Security & Compliance Center. The reports identify the different types of attacks and potential data leaks affecting the organization. Some reports include details on the campaigns that target the leaders of the organization, spoof detection, spam detection as well as possible data leakage when sensitive information is sent outside the network. To access the dashboards, go to the Reports > Dashboard section in the Security & Compliance Center.

Administrators with some programming skills and more advanced needs can generate customized reports with the Graph API to pull raw data for more detailed information. For example, administrators can pull out granular statistics from Office 365 about the different ways users are retrieving email from Outlook, such as a Windows browser or the Android app.
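As a sketch of what that looks like, the following call pulls the per-user email app usage report from Microsoft Graph. Acquiring the OAuth access token, which requires the Reports.Read.All permission, is assumed to have happened already; the token value is a placeholder.

```powershell
# Placeholder: an access token for Microsoft Graph with Reports.Read.All
$token = '<access-token>'

# Per-user detail on which Outlook apps were used over the last 30 days;
# Graph returns this report as CSV, so save it straight to a file
$uri = "https://graph.microsoft.com/v1.0/reports/getEmailAppUsageUserDetail(period='D30')"
Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $token" } -OutFile .\EmailAppUsage.csv

# Inspect the first few rows
Import-Csv .\EmailAppUsage.csv | Select-Object -First 5
```

Other report endpoints follow the same pattern, with period values such as D7, D30, D90 and D180.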


How to deal with the on-premises vs. cloud challenge

For some administrators, the cloud is not a novelty. It’s critical to their organization. Then, there’s you, the lone on-premises holdout.

With all the hype about cloud and Microsoft’s strong push to get IT to use Azure for services and workloads, it might seem like you are the only one in favor of remaining in the data center in the great on-premises vs. cloud debate. The truth is the cloud isn’t meant for everything. While it’s difficult to find a workload not supported by the cloud, that doesn’t mean everything needs to move there.

Few people like change, and a move to the cloud is a big adjustment. You can’t stop your primary vendors from switching their allegiance to the cloud, so you will need to be flexible to face this new reality. Take a look around at your options as more vendors narrow their focus away from the data center and on-premises management.

Is the cloud a good fit for your organization?

The question is: Should it be done? All too often, it’s a matter of money. For example, it’s possible to take a large-capacity file server in the hundreds of terabytes and place it in Azure. Microsoft’s cloud can easily support this workload, but can your wallet?

Once you get over the sticker shock, think about it. If you’re storing frequently used data, it might make business sense to put that file server in Azure. However, if this is a traditional file server with mostly stale data, then is it really worth the price tag as opposed to using on-premises hardware?

Azure file server
When you run the numbers on what it takes to put a file server in Azure, the costs can add up.

Part of the on-premises vs. cloud dilemma is you have to weigh the financial costs, as well as the tangible benefits and drawbacks. Part of the calculation in determining what makes sense in an operational budget structure, as opposed to a capital expense, is the people factor. Too often, admins find themselves in a situation where management sees one side of this formula and wants to make that cloud leap, while the admins must look at the reality and explain both the pros and cons — the latter of which no one wants to hear.

The cloud question also goes deeper than the Capex vs. Opex argument for the admins. With so much focus on the cloud, what happens to those environments that simply don’t or can’t move? It’s not only a question of what this means today, but also what’s in store for them tomorrow.

As vendors move on, the walls close in

With the focus for most software vendors on cloud and cloud-related technology, the move away from the data center should be a warning sign for admins that can’t move to the cloud. The applications and tools you use will change to focus on the organizations working in the cloud with less development on features that would benefit the on-premises data center.

One of the most critical aspects of this shift will be your monitoring tools. As cloud gains prominence, it will get harder to find tools that will continue to support local Windows Server installations over cloud-based ones. We already see this trend with log aggregation tools that used to be available as on-site installs that are now almost all SaaS-based offerings. This is just the start.

If a tool moves from on premises to the cloud but retains the ability to monitor data center resources, that is an important distinction to remember. That means you might have a workable option to keep production workloads on the ground and work with the cloud as needed or as your tools make that transition.

As time goes on, an evaluation process might be in order. If your familiar tools are moving to the cloud without support for on-premises workloads, the options might be limited. Should you pick up new tools and then invest the time to install and train the staff how to use them? It can be done, but do you really want to?

While not ideal, another viable option is to take no action; the install you have works, and as long as you don’t upgrade, everything will be fine. The problem with remaining static is getting left behind. The base OSes will change, and the applications will get updated. But, if your tools can no longer monitor them, what good are they? You also introduce a significant security risk when you don’t update software. Staying put isn’t a good long-term strategy.

With the cloud migration will come other choices

The same challenges you face with your tools also apply to your traditional on-premises applications. Longtime stalwarts, such as Exchange Server, still offer a local installation, but it’s clear that Microsoft’s focus for messaging and collaboration is its Office 365 suite.

The harsh reality is more software vendors will continue on the cloud path, which they see as the new profit centers. Offerings for on-premises applications will continue to dwindle. However, there is some hope. As the larger vendors move to the cloud, it opens up an opportunity in the market for third-party tools and applications that might not have been on your radar until now. These products might not be as feature-rich as an offering from the larger vendors, but they might tick most of the checkboxes for your requirements.


Explore the Cubic congestion control provider for Windows

Administrators may not be familiar with the Cubic congestion control provider, but Microsoft’s move to make this the default setting in the Windows networking stack means IT will need to learn how it works and how to manage it.

When Microsoft released Windows Server version 1709 in its Semi-Annual Channel, the company introduced a number of features, such as support for data deduplication in the Resilient File System and support for virtual network encryption.

Microsoft also made the Cubic algorithm the default congestion control provider for that version of Windows Server. The most recent preview builds of Windows 10 and Windows Server 2019 (Long-Term Servicing Channel) also enable Cubic by default.

Microsoft added Cubic to Windows Server 2016, as well, but it calls this implementation an experimental feature. Due to this disclaimer, administrators should learn how to manage Cubic if unexpected behavior occurs.

Why Cubic matters in today’s data centers

Congestion control mechanisms improve performance by monitoring packet loss and latency and making adjustments accordingly. TCP/IP limits the size of the congestion window and then gradually increases the window size over time. This process stops when the maximum receive window size is reached or packet loss occurs. However, this method hasn’t aged well with the advent of high-bandwidth networks.

For the last several years, Windows has used Compound TCP as its standard congestion control provider. Compound TCP increases the size of the receive window and the volume of data sent.

Cubic, which has been the default congestion provider for Linux since 2006, is a protocol that improves traffic flow by keeping track of congestion events and dynamically adjusting the congestion window.

A Microsoft blog on the networking features in Windows Server 2019 said Cubic performs better over a high-speed, long-distance network because it accelerates to optimal speed more quickly than Compound TCP.

Enable and disable Cubic with netsh commands

Microsoft added Cubic to later builds of Windows Server 2016. You can use the following PowerShell command to see if Cubic is in your build:

Get-NetTCPSetting | Select-Object SettingName, CongestionProvider

Technically, Cubic is a TCP/IP add-on. Because PowerShell does not yet support configuring Cubic, admins must enable it in Windows Server 2016 with the netsh command from an elevated command prompt.

Netsh uses the concepts of contexts and subcontexts to configure many aspects of Windows Server’s networking stack. A context is similar to a mode. For example, the netsh firewall command places netsh in a firewall context, which means that the utility will accept firewall-related commands.

Microsoft added Cubic-related functionality into the netsh interface context. The interface context — abbreviated as INT in some Microsoft documentation — provides commands to manage the TCP/IP protocol.

Prior to Windows Server 2012, admins could make global changes to the TCP/IP stack by referencing the desired setting directly. For example, if an administrator wanted to use the Compound TCP congestion control provider — the default congestion control provider since Windows Vista and Windows Server 2008 — they could use the following command:

netsh int tcp set global congestionprovider=ctcp

Newer versions of Windows Server use netsh and the interface context, but Microsoft made some syntax changes in Windows Server 2012 that carried over to Windows Server 2016. Rather than setting values directly, Windows Server 2012 and Windows Server 2016 use supplemental templates.

In this example, we enable Cubic in Windows Server 2016:

netsh int tcp set supplemental template=internet congestionprovider=cubic

This command launches netsh, switches to the interface context, loads the internet supplemental template and sets its congestion control provider to Cubic. Similarly, we can switch from the Cubic provider back to the default Compound congestion provider with the following command:

netsh int tcp set supplemental template=internet congestionprovider=compound
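To confirm which congestion provider each supplemental template now uses, you can query the supplemental settings from the same elevated prompt:

netsh int tcp show supplemental

The output lists each template along with its currently assigned congestion control provider.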

Learn the tricks for using Microsoft Teams with Exchange

Using Microsoft Teams means Exchange administrators need to understand how this emerging collaboration service connects to Exchange Online and Exchange on-premises systems.

At its 2017 Ignite conference, Microsoft unveiled its intelligent communications plan, which mapped out the movement of features from Skype for Business to Microsoft Teams, the Office 365 team collaboration service launched in March 2017. Since that September 2017 conference, Microsoft has added meetings and calling features to Teams, while also enhancing the product’s overall functionality.

Organizations that run Exchange need to understand how Microsoft Teams relies on Office 365 Groups, as well as the setup considerations Exchange administrators need to know.

How Microsoft Teams depends on Office 365 Groups

Each team in Microsoft Teams depends on the functionality provided by Office 365 Groups, such as shared mailboxes or SharePoint Online team sites. An organization can permit all users to create a team and Office 365 Group, or it can limit this ability by group membership. 

When you create a new team, you can link it to an existing Office 365 Group; otherwise, a new group will be created.

[Image: Microsoft Teams layout] Microsoft Teams is Microsoft’s foray into the team collaboration space. Using Microsoft Teams with Exchange will require administrators to stay abreast of roadmap plans for proper configuration and utilization of the collaboration offering.

Microsoft adjusted settings recently so new Office 365 Groups created by Microsoft Teams do not appear in Outlook by default. If administrators want new groups to show in Outlook, they can use the Set-UnifiedGroup PowerShell command.
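For example, to make a group visible in Outlook again, an administrator can run the following from an Exchange Online PowerShell session (the group name here is a placeholder):

Set-UnifiedGroup -Identity "Project Falcon" -HiddenFromExchangeClientsEnabled:$false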

Microsoft Teams’ reliance on Office 365 Groups affects organizations that run an Exchange hybrid configuration. In this scenario, the Azure AD Connect group writeback feature can be enabled to synchronize Office 365 Groups to Exchange on premises as distribution groups. But this setting could lead to many Office 365 Groups created via Microsoft Teams appearing in Exchange on premises. Administrators will need to monitor this and adjust the configuration if needed.

Using Microsoft Teams with Exchange Online vs. Exchange on premises

Exchange Online subscribers also get access to all the Microsoft Teams features. However, if the organization uses Exchange on premises, then certain functionality, such as the ability to modify user profile pictures and add connectors, is not available.


Without connectors, users cannot plug third-party systems into Microsoft Teams; certain add-ins, like the Twitter connector that delivers tweets into a Microsoft Teams channel, cannot be used. Additionally, organizations that use Microsoft Teams with Exchange on-premises mailboxes must run on Exchange 2016 cumulative update 3 or higher to create and view meetings in Microsoft Teams.

Message hygiene services and Microsoft Teams

Antispam technology might need to be adjusted due to some Microsoft Teams and Exchange integration issues.

When a new member joins a team, the email.teams.microsoft.com domain sends an email to the new member. Microsoft owns this domain name, which the tenant administrator cannot adjust.

Because the domain is considered an external email domain to the organization’s Exchange Online deployment, the organization’s antispam configuration in Exchange Online Protection may mark the notification email as spam. Consequently, the new member might not receive the email or may not see it if it goes into the junk email folder.

To prevent this situation, Microsoft recommends adding email.teams.microsoft.com to the allowed domains list in Exchange Online Protection.
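In Exchange Online PowerShell, one way to do this, assuming the tenant still uses the default content filter policy, is to add the domain to the policy’s allowed sender domains:

Set-HostedContentFilterPolicy -Identity Default -AllowedSenderDomains @{Add="email.teams.microsoft.com"}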

Complications with security and compliance tools

Administrators need to understand the security and compliance functionality when using Microsoft Teams with Exchange Online or Exchange on premises. Office 365 copies team channel conversations in the Office 365 Groups shared mailbox in Exchange Online so its security and compliance tools, such as eDiscovery, can examine the content. However, Office 365 stores copies of chat conversations in the users’ Exchange Online mailboxes, not the shared mailbox in Office 365 Groups.

Historically, Office 365 security and compliance tools could not access conversation content in an Exchange on-premises mailbox in a hybrid environment. Microsoft made changes to support this scenario, but customers must request this feature via Microsoft support.

Configure Exchange to send email to Microsoft Teams

An organization might want its users to have the ability to send email messages from Exchange Online or Exchange on premises to channels in Microsoft Teams. To send an email message to a channel, users need the channel’s email address and permission from the administrator. A right-click on a channel reveals the Get email address option. All the channels have a unique email address.
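As a sketch, a user could then mail the channel like any other recipient with Send-MailMessage; the channel address, sender and SMTP server below are hypothetical:

Send-MailMessage -From "user@contoso.com" -To "0abc123.contoso.com@amer.teams.ms" -Subject "Weekly status" -Body "Posting this update to the channel." -SmtpServer "smtp.contoso.com"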

Administrators can restrict the domains permitted to send email to a channel in the Teams administrator settings in the new Microsoft Teams and Skype for Business admin center.

VMware HCX makes hybrid, multi-cloud more attainable

LAS VEGAS — VMware HCX attempts to drive migration to hybrid and multi-cloud architectures, but IT administrators are still hesitant to make the switch due to concerns around cost and complexity.

Before doing product evaluations and determining if VMware Hybrid Cloud Extension (HCX) is a good option for workload migration, admins must figure out if the cloud meets their current and future business needs. What is the organization trying to accomplish with its existing deployments?

For example, consider a near-end-of-support vSphere 5.5 environment: Is the goal to seamlessly migrate those workloads from the current environment to the cloud without an on-premises upgrade? Or, is successfully migrating hundreds of VMs or large amounts of storage the objective?

Determining the ultimate goal and whether a private cloud, hybrid cloud, public cloud or multi-cloud makes the most sense is a decision that admins must make on a case-by-case basis.

Cloud cost and complexity concerns

The ongoing fee associated with using cloud services is just one of the cost concerns, experts said in a session here at VMworld 2018. During the migration, admins have to worry about whether they’ll need to change IPs, the potential of running into compatibility issues, and the responsibility of ensuring business continuity and disaster recovery.

“Even after we meet all their requirements, we’ve seen in any organization all kinds of inertia about getting going,” said Allwyn Sequeira, senior vice president and general manager of hybrid cloud services at VMware. “People think they need to go buy high-bandwidth pipes to connect from on-prem to the cloud. People think they need to do an assessment of applications to see if this is an app that should be moved to the cloud.”

App dependencies and mapping are certainly important issues to consider. With more VMs, the environment is more complex; it’s easier to break something during migration.

Even when a certain vendor or product addresses their concerns, admins need buy-in from networking, security, compliance and governance teams before moving forward with the cloud.

The introduction of VMware HCX is the vendor’s attempt to remove some of the roadblocks keeping organizations from adopting hybrid and multi-cloud environments.

What is VMware HCX, and what are its use cases?

VMware HCX, also known as NSX Hybrid Connect, is a platform that enables admins to migrate VMs and applications between vSphere infrastructures running version 5.0 or newer, and from on-premises environments to the cloud.

The top use cases of VMware HCX include consolidating and modernizing the data center, extending the data center to the cloud, and disaster recovery.

“HCX gives you freedom of choice,” said Nathan Thaler, director of cloud platforms at MIT in Cambridge, Mass. “You can move your workload into a cloud provider as long as it works for you, and then you can move it out without any lock-in. We’ve moved certain VMs between multiple states without any network downtime.”

Thaler did caution organizations to avoid using virtual hardware beyond the highest level of compatibility with the oldest cloud environment.

Disaster recovery to the cloud, while maybe not as front of mind as other popular use cases, is key in the event of a natural disaster.

“We wanted to be able to have resiliency whether it’s an East Coast event or a West Coast event,” said HCX customer Gary Goforth, senior systems engineer at ConnectWise Inc., a business management software provider based in Tampa, Fla.

VMware HCX-supported features include Encrypted vMotion, vSphere Replication and scheduled migrations. The functionality itself seems to be what admins are really looking for.

“We wanted a fairly simple, easy way to implement a cloud,” Goforth said. “We wanted to do it with minimal to no downtime and to handle a bulk migration of our virtual machines.”

In terms of the VMware HCX roadmap, the vendor is working on constructs to move workloads across different clouds, Sequeira said.

“It’s all about interconnecting data centers to each other,” he said. “Ultimately, at the end of the day, where you run is going to become less important than what services you need.”

Plan your Exchange migration to Office 365 with confidence

Introduction

Choosing an Exchange migration to Office 365 is just the beginning of this process for administrators. Migrating all the content, troubleshooting the issues and then getting the settings just right in a new system can be overwhelming, especially with tricky legacy archives.

Even though it might appear that the Exchange migration to Office 365 is happening everywhere, transitioning to the cloud is not a black and white choice for every organization. On-premises servers still get the job done; however, Exchange Online offers a constant flow of new features and costs less in some cases. Administrators should also consider a hybrid deployment to get the benefits of both platforms.

Once you have determined the right configuration, you will have to choose how to transfer archived emails and public folders and which tools to use. Beyond relocating mailboxes, administrators have to keep content accessible and security a priority during an Exchange migration to Office 365.

This guide simplifies the decision-making process and steers administrators away from common issues. More advanced tutorials share the reasons to keep certain data on premises and the tricks to set up the cloud service for optimal results.

1. Before the move

Plan your Exchange migration

Prepare for your move from Exchange Server to the cloud by understanding your deployment options and tools to smooth out any bumps in the road.

2. After the move

Working with Exchange Online

After you’ve made the switch to Office 365’s hosted email platform, these tools and practices will have your organization taking advantage of the new platform’s perks without delay.

3. Glossary

Definitions related to Exchange Server migration

Understand the terms related to moving Exchange mailboxes.

Kubernetes in Azure eases container deployment duties


With the growing popularity of containers in the enterprise, administrators require assistance to deploy and manage these workloads, particularly in the cloud.

When you consider the growing prevalence of Linux and containers both in Windows Server and in the Azure platform, it makes sense for administrators to get more familiar with how to work with Kubernetes in Azure.

Containers help developers streamline the coding process, while orchestrators give the IT staff a tool to deploy these applications in a cluster. One of the more popular tools, Kubernetes, automates the process of configuring container applications within and on top of Linux across public, private and hybrid clouds.

For companies that prefer to use Azure for container deployments, Microsoft developed the Azure Kubernetes Service (AKS), a hosted control plane, to give administrators an orchestration and cluster management tool for its cloud platform.

Why containers and why Kubernetes?

There are many advantages to containers. Because they share an operating system, containers are lighter than virtual machines (VMs). Patching containers is less onerous than it is for VMs; the administrator just swaps out the base image.

On the development side, containers are more convenient. Containers are not reliant on underlying infrastructure and file systems, so they can move from operating system to operating system without issue.

Kubernetes makes working with containers easier. Most organizations choose containers because they want to virtualize applications and produce them quickly, integrate them with continuous delivery and DevOps style work, and provide them isolation and security from each other.

For many people, Kubernetes represents a container platform where they can run apps, but it can do more than that. Kubernetes is a management environment that handles compute, networking and storage for containers.

Kubernetes acts as much like a PaaS as an IaaS, and it also deftly handles moving containers across different platforms. Kubernetes organizes clusters of Linux hosts that run containers; it turns containers off and on, moves them between hosts, configures them via declarative statements and automates provisioning.

Using Kubernetes in Azure

Clusters are sets of VMs designed to run containerized applications. A cluster holds a master VM and agent nodes or VMs that host the containers.


AKS limits the administrative workload that would be required to run this type of cluster on premises. AKS shares the container workload across the nodes in the cluster and redistributes resources when adding or removing nodes. Azure automatically upgrades and patches AKS.

Microsoft calls AKS self-healing, which means the platform will recover from infrastructure problems automatically. Like other cloud services, Microsoft only charges for the agent pool nodes that run.

Starting up Kubernetes in Azure

The simplest way to provision a new instance of an AKS cluster is to use Azure Cloud Shell, a browser-based command-line environment for working with Azure services and resources.

Azure Cloud Shell works like the Azure CLI, except it’s updated automatically and is available from a web browser. There are many service provider plug-ins enabled by default in the shell.

[Image: Azure Cloud Shell session] Starting a PowerShell session in the Azure Cloud Shell

Open Azure Cloud Shell at shell.azure.com. Choose PowerShell and sign in to the account with your Azure subscription. When the session starts, complete the provider registration with these commands:

az provider register -n Microsoft.Network
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Compute
az provider register -n Microsoft.ContainerService
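Registration can take a few minutes. To check whether a provider has finished registering, query its registration state; for example:

az provider show -n Microsoft.ContainerService --query registrationState

The command returns "Registered" once the provider is ready to use.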


How to create a Kubernetes cluster on Azure

Next, create a resource group, which will contain the Azure resources in the AKS cluster.

az group create --name AKSCluster --location centralus

Use the following command to create a cluster named AKSCluster1 that will live in the AKSCluster resource group with two associated nodes:

az aks create --resource-group AKSCluster --name AKSCluster1 --node-count 2 --generate-ssh-keys

Next, to use the Kubernetes command-line tool kubectl to control the cluster, get the necessary credentials:

az aks get-credentials --resource-group AKSCluster --name AKSCluster1

Next, use kubectl to list your nodes:

kubectl get nodes

Put the cluster into production with a manifest file

After setting up the cluster, load the applications. You’ll need a manifest file that dictates the cluster’s runtime configuration, the containers to run on the cluster and the services to use.

Developers can create this manifest file along with the appropriate container images and provide them to your operations team, who will import them into Kubernetes or clone them from GitHub and point the kubectl utility to the relevant manifest.
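As a rough sketch, a minimal manifest for a hypothetical web application (the names and container image below are placeholders) might pair a deployment with a load-balanced service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: demo-web
        image: myregistry.azurecr.io/demo-web:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-web
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: demo-web

The operations team would then deploy it with kubectl apply -f manifest.yaml.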

To get more familiar with Kubernetes in Azure, Microsoft offers a tutorial to build a web app that lets people vote for either cats or dogs. The app runs on a couple of container images with a front-end service.