
What admins need to know about Azure Stack HCI

Despite all the promise of cloud computing, it remains out of reach for administrators who cannot, for different reasons, migrate out of the data center.

Many organizations still grapple with concerns, such as compliance and security, that weigh down any aspirations to move workloads from on-premises environments. For these organizations, hyper-converged infrastructure (HCI) products have stepped in to approximate some of the perks of the cloud, including scalability and high availability. In early 2019, Microsoft stepped into this market with Azure Stack HCI. While it was a new name, it was not an entirely new concept for the company.

Some might see Azure Stack HCI as a mere rebranding of the existing Windows Server Software-Defined (WSSD) program, but there are some key differences that warrant further investigation from shops that might benefit from a system that integrates with the latest software-defined features in the Windows Server OS.

What distinguishes Azure Stack HCI from Azure Stack?

When Microsoft introduced its Azure Stack HCI program in March 2019, there was some initial confusion from many in IT. The company offered a similarly named product called Azure Stack, which uses the name of Microsoft’s cloud platform, to run a version of Azure inside the data center.

Microsoft developed Azure Stack HCI for local VM workloads that run on Windows Server 2019 Datacenter edition. While not explicitly tied to the Azure cloud, organizations that use Azure Stack HCI can connect to Azure for hybrid services, such as Azure Backup and Azure Site Recovery.

Azure Stack HCI offerings use OEM hardware from vendors such as Dell, Fujitsu, Hewlett Packard Enterprise and Lenovo that is validated by Microsoft to capably run the range of software-defined features in Windows Server 2019.

How is Azure Stack HCI different from the WSSD program?

While Azure Stack is essentially an on-premises version of the Microsoft cloud computing platform, its approximate namesake, Azure Stack HCI, is more closely related to the WSSD program that Microsoft launched in 2017.

Microsoft made its initial foray into the HCI space with its WSSD program, which utilized the software-defined features in the Windows Server 2016 Datacenter edition on hardware validated by Microsoft.

Windows Server gives administrators the virtualization layers necessary to avoid the management and deployment issues related to proprietary hardware. Windows Server’s software-defined storage, networking and compute capabilities enable organizations to more efficiently pool the hardware resources and use centralized management to sidestep traditional operational drawbacks.

For Azure Stack HCI, Microsoft uses the Windows Server 2019 Datacenter edition as the foundation of this product with updated software-defined functionality compared to Windows Server 2016. For example, Windows Server 2019 offers expanded pooled storage of 4 petabytes in Storage Spaces Direct, compared to 1 PB on Windows Server 2016. Microsoft also updated the clustering feature in Windows Server 2019 for improved workload resiliency and added data deduplication to give an average of 10 times more storage capacity than Windows Server 2016.
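
To give a sense of what those software-defined features involve, here is a minimal PowerShell sketch of standing up a Storage Spaces Direct cluster on Windows Server 2019; the node and cluster names are hypothetical, and an actual Azure Stack HCI appliance ships with OEM deployment tooling that handles these steps.

# Hypothetical node names; run from a management host with the Failover Clustering tools installed
$nodes = 'NODE1', 'NODE2', 'NODE3', 'NODE4'

# Validate the hardware, then create the cluster without assigning shared storage
Test-Cluster -Node $nodes -Include 'Storage Spaces Direct', 'Inventory', 'Network', 'System Configuration'
New-Cluster -Name 'HCI-CLU01' -Node $nodes -NoStorage

# Pool the local drives across the nodes and carve out a resilient volume for Hyper-V workloads
Enable-ClusterStorageSpacesDirect -CimSession 'HCI-CLU01'
New-Volume -CimSession 'NODE1' -FriendlyName 'VMStore01' -FileSystem CSVFS_ReFS -StoragePoolFriendlyName 'S2D*' -Size 2TB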

What are the deployment and management options?

The Azure Stack HCI product requires the use of the Windows Server 2019 Datacenter edition, which the organization might get from the hardware vendor for a lower cost than purchasing it separately.

To manage the Azure Stack HCI system, Microsoft recommends using Windows Admin Center, a relatively new GUI tool developed as the potential successor to Remote Server Administration Tools, Microsoft Management Console and Server Manager. Microsoft tailored Windows Admin Center for smaller deployments, such as Azure Stack HCI.

The Windows Admin Center server management tool offers a dashboard to check on the drive performance for issues related to latency or when a drive fails.

Windows Admin Center encapsulates a number of traditional server management utilities for routine tasks, such as registry edits, but it also handles more advanced functions, such as the deployment and management of Azure services, including Azure Network Adapter for companies that want to set up encryption for data transmitted between offices.

Companies that purchase an Azure Stack HCI system get Windows Server 2019 for its virtualization technology that pools storage and compute resources from two nodes up to 16 nodes to run VMs on Hyper-V. Microsoft positions Azure Stack HCI as an ideal system for multiple scenarios, such as remote office/branch office and VDI, and for use with data-intensive applications, such as Microsoft SQL Server.

How much does it cost to use Azure Stack HCI?

The Microsoft Azure Stack HCI catalog features more than 150 models from 20 vendors. A general-purpose node will cost about $10,000, but the final price will vary depending on the level of customization the buyer wants.

There are multiple server configuration options that cover a range of processor models, storage types and networking. For example, some nodes have ports with 1 Gigabit Ethernet, 10 GbE, 25 GbE and 100 GbE, while other nodes support a combination of 25 GbE and 10 GbE ports. Appliances optimized for better performance that use all-flash storage will cost more than units with slower, traditional spinning disks.

On top of the price of the hardware are the annual maintenance and support fees, which are typically a percentage of the purchase price of the appliance.

If a company opts to tap into the Azure cloud for certain services, such as Azure Monitor to assist with operational duties by analyzing data from applications to determine if a problem is about to occur, then additional fees will come into play. Organizations that remain fixed with on-premises use for their Azure Stack HCI system will avoid these extra costs.


Azure Bastion brings convenience, security to VM management

Administrators who want to manage virtual machines securely but want to avoid complicated jump server setup and maintenance have a new option at their disposal.

When you run Windows Server and Linux virtual machines in Azure, you need to configure administrative access. This requires communicating with these VMs from across the internet using Transmission Control Protocol (TCP) port 3389 for Remote Desktop Protocol (RDP), and TCP 22 for Secure Shell (SSH).

You want to avoid the configuration in Figure 1, which exposes your VMs to the internet with an Azure public IP address and invites trouble via port scan attacks. Microsoft publishes its public IPv4 data center ranges, so bad actors know which public IP addresses to check to find vulnerable management ports.

Figure 1. This setup exposes VMs to the internet with an Azure public IP address that makes an organization vulnerable to port scan attacks.

Another remote server management option offers illusion of security  

If you have a dedicated hybrid cloud setup with site-to-site virtual private network or an ExpressRoute circuit, then you can interact with your Azure VMs the same way you would with your on-premises workloads. But not every business has the money and staff to configure a hybrid cloud.

Another option, shown in Figure 2, combines the Azure public load balancer with NAT to route management traffic through the load balancer on nonstandard ports.

Figure 2. Using NAT and Azure load balancer for internet-based administrative VM access.

For instance, you could create separate NAT rules for inbound administrative access to the web tier VMs. If the load balancer public IP is 1.2.3.4, winserv1’s private IP is 192.168.1.10, and winserv2’s private IP is 192.168.1.11, then you could create two NAT rules that look like:

  • Inbound RDP connections to 1.2.3.4 on port TCP 33389 route to TCP 3389 on 192.168.1.10
  • Inbound RDP connections to 1.2.3.4 on port TCP 43389 route to TCP 3389 on 192.168.1.11
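
For illustration only, here is a rough sketch of how those two rules might be defined with the Az PowerShell module; the resource group and load balancer names are hypothetical, and each NAT rule still has to be associated with the corresponding VM's NIC afterward.

# Hypothetical names for an existing public load balancer
$lb = Get-AzLoadBalancer -ResourceGroupName 'rg-web' -Name 'lb-web'
$frontEnd = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name 'LoadBalancerFrontEnd'

# Forward nonstandard public ports to RDP on each web tier VM
$lb | Add-AzLoadBalancerInboundNatRuleConfig -Name 'rdp-winserv1' -FrontendIpConfiguration $frontEnd -Protocol Tcp -FrontendPort 33389 -BackendPort 3389
$lb | Add-AzLoadBalancerInboundNatRuleConfig -Name 'rdp-winserv2' -FrontendIpConfiguration $frontEnd -Protocol Tcp -FrontendPort 43389 -BackendPort 3389

# Commit the configuration to the load balancer
$lb | Set-AzLoadBalancer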

The problem with this method is your security team won’t like it. This technique is security by obfuscation that relies on a NAT protocol hack.

Jump servers are safer but have other issues

A third method that is quite common in the industry is to deploy a jump server VM to your target virtual network in Azure as shown in Figure 3.

Figure 3. This diagram details a conventional jump server configuration for Azure administrative access.

The jump server is nothing more than a specially created VM that is usually exposed to the internet but has its inbound and outbound traffic restricted heavily with network security groups (NSGs). You allow your admins access to the jump server; once they log in, they can jump to any other VMs in the virtual network infrastructure for any management jobs.

Of these choices, the jump server is safest, but how many businesses have the expertise to pull this off securely? The team would need intermediate- to advanced-level skill in TCP/IP internetworking, NSG traffic rules, public and private IP addresses and Remote Desktop Services (RDS) Gateway to support multiple simultaneous connections.

For organizations that don’t have these skills, Microsoft now offers Azure Bastion.

What Azure Bastion does

Azure Bastion is a managed network virtual appliance that simplifies jump server deployment in your virtual networks. You drop an Azure Bastion host into its own subnet, perform some NSG configuration, and you are done.

Organizations that use Azure Bastion get the following benefits:

  • No more public IP addresses for VMs in Azure.
  • RDP/SSH firewall traversal. Azure Bastion tunnels the RDP and SSH traffic over a standard, non-VPN Transport Layer Security/Secure Sockets Layer connection.
  • Protection against port scan attacks on VMs.

How to set up Azure Bastion

Azure Bastion requires a virtual network in the same region. As of publication, Microsoft offers Azure Bastion in the following regions: Australia East, East U.S., Japan East, South Central U.S., West Europe and West U.S.

You also need an empty subnet named AzureBastionSubnet. Do not enable service endpoints, route tables or delegations on this special subnet. Later in this tutorial, you can define or edit an NSG on each VM-associated subnet to customize traffic flow.

Because Azure Bastion supports multiple simultaneous connections, size the AzureBastionSubnet subnet with at least a /27 IPv4 address space. One possible reason for this network address size is to give Azure Bastion room to autoscale, similar to the way autoscaling works in Azure Application Gateway.
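
If you would rather script the networking prerequisites and the deployment, the following is a hedged Az PowerShell sketch; the resource group, names and address ranges are all assumptions, and the portal workflow described next accomplishes the same thing.

# Hypothetical resource group, region and address spaces
$rg = New-AzResourceGroup -Name 'rg-bastion-demo' -Location 'eastus'

# The Bastion subnet must be named AzureBastionSubnet and should be at least a /27
$bastionSubnet = New-AzVirtualNetworkSubnetConfig -Name 'AzureBastionSubnet' -AddressPrefix '10.0.254.0/27'
$vmSubnet = New-AzVirtualNetworkSubnetConfig -Name 'subnet-vms' -AddressPrefix '10.0.1.0/24'
$vnet = New-AzVirtualNetwork -Name 'vnet-demo' -ResourceGroupName $rg.ResourceGroupName -Location 'eastus' -AddressPrefix '10.0.0.0/16' -Subnet $bastionSubnet, $vmSubnet

# Bastion requires a Standard SKU public IP address
$pip = New-AzPublicIpAddress -Name 'pip-bastion' -ResourceGroupName $rg.ResourceGroupName -Location 'eastus' -AllocationMethod Static -Sku Standard

# Deploy the Bastion host into the virtual network
New-AzBastion -Name 'bastion-demo' -ResourceGroupName $rg.ResourceGroupName -PublicIpAddress $pip -VirtualNetwork $vnet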

Next, browse to the Azure Bastion configuration screen and click Add to start the deployment.

Figure 4: Deploying an Azure Bastion resource.

As you can see in Figure 4, the deployment process is straightforward if the virtual network and AzureBastionSubnet subnet are in place.

According to Microsoft, Azure Bastion will support native RDP and SSH clients in time, but for now you establish your management connection via the Connect experience in the Azure portal. Navigate to a VM’s Overview blade, click Connect, and switch to the Bastion tab as shown in Figure 5.

Figure 5. The Azure portal includes an Azure Bastion connection workflow.

On the Bastion tab, provide an administrator username and password, and then click Connect one more time. Your administrative RDP or SSH session opens in another browser tab, shown in Figure 6.

Figure 6. Manage a Windows Server VM in Azure with Azure Bastion using an Azure portal-based RDP session.

You can share clipboard data between the Azure Bastion-hosted connection and your local system. Close the browser tab to end your administrative session.

Customize Azure Bastion

To configure Azure Bastion for your organization, create or customize an existing NSG to control traffic between the Azure Bastion subnet and your VM subnets.

Microsoft provides default NSG rules to allow traffic among subnets within your virtual network. For a more efficient and powerful option, upgrade your Azure Security Center license to Standard and onboard your VMs to just-in-time (JIT) VM access, which uses dynamic NSG rules to lock down VM management ports unless an administrator explicitly requests a connection.

You can combine JIT VM access with Azure Bastion, which results in this VM connection workflow:

  • Request access to the VM.
  • Upon approval, proceed to Azure Bastion to make the connection.

Azure Bastion needs some fine-tuning

Azure Bastion has a fixed hourly cost; Microsoft also charges for outbound data transfer after 5 GB.

Azure Bastion is an excellent way to secure administrative access to Azure VMs, but there are a few deal-breakers that Microsoft needs to address:

  1. You need to deploy an Azure Bastion host for each virtual network in your environments. If you have three virtual networks, then you need three Azure Bastion hosts, which can get expensive. Microsoft says virtual network peering support is on the product roadmap. Once Microsoft implements this feature, you can deploy a single Bastion host in your hub virtual network to manage VMs in peered spoke virtual networks.
  2. There is no support for PowerShell remoting ports, but Microsoft does support RDP, which goes against its own refrain to avoid the GUI for server management.
  3. Microsoft’s documentation does not give enough architectural details to help administrators determine the capabilities of Azure Bastion, such as whether an existing RDP session Group Policy can be combined with Azure Bastion.


Microsoft closes IE zero-day on November Patch Tuesday

Administrators will need to focus on deploying fixes for an Internet Explorer zero-day and a Microsoft Excel bug as part of the November Patch Tuesday security updates.

Microsoft issued corrections for 75 vulnerabilities, 14 rated critical, in this month’s releases, which also delivered fixes for Windows operating systems, Microsoft Office and Office 365 applications, the Edge browser, Exchange Server, ChakraCore, Secure Boot, Visual Studio and Azure Stack.

In addition to these November Patch Tuesday updates, administrators should also look at the Google Chrome browser to fix a zero-day (CVE-2019-13720) reported by Kaspersky Labs researchers. Google corrected the flaw in build 78.0.3904.87 released on Oct. 31 for Windows, Mac and Linux systems.

Microsoft plugs Internet Explorer zero-day

The Internet Explorer zero-day (CVE-2019-1429), rated critical for Windows client systems and moderate for the server OS, covers the range of browsers from Internet Explorer 9 to 11. The flaw is a memory corruption vulnerability that could let an attacker execute code remotely on a system in the context of the current user. If that user is an administrator, then the attacker would gain full control of the system.

On a system run by a user with lower privileges, the attacker would need to do additional work through another exploit to elevate their privilege. Organizations that follow least privilege will be less susceptible to the exploit until administrators can roll out the update to Windows systems. Exposure to the zero-day can occur in several scenarios, from visiting a malicious website to opening an application or Microsoft Office document that contains the exploit.

“[There are] a few different ways to exploit [the IE zero-day], such as going to a site that allows user-contributed content like ads that can be injected with this type of malicious content to serve up the attack,” said Chris Goettl, director of product management and security at Ivanti, a security and IT management vendor based in South Jordan, Utah.

Organizations can take nontechnical measures, such as implementing training that instructs users on how to avoid suspicious emails and websites, but the best way to prevent exploitation is to roll out the security update as quickly as possible because the vulnerability is under active attack, Goettl said.

Microsoft resolved a publicly disclosed security feature bypass, rated important, in Microsoft Excel 2016 and 2019 for macOS (CVE-2019-1457). The security update corrects a bug that did not enforce the macro settings for Excel documents. A user who opened a malicious Excel worksheet would trigger the exploit when the document runs a macro. Microsoft’s advisory stipulated the preview pane is not an attack vector for this vulnerability.

Other security updates worth noting for November Patch Tuesday include:

  • A critical servicing update to ChakraCore to correct three memory corruption bugs (CVE-2019-1426, CVE-2019-1427 and CVE-2019-1428) that affect the Microsoft Edge browser in client and server operating systems. The remote code execution vulnerability could let an attacker run arbitrary code in the context of the current user to obtain the same user rights.
  • A remote code execution vulnerability in Exchange Server 2013/2016/2019 (CVE-2019-1373) that would let an attacker run arbitrary code. The exploit requires a user to run a PowerShell cmdlet. The update corrects how Exchange serializes its metadata.
  • A critical remote code execution vulnerability (CVE-2019-1419) in all supported Windows versions related to OpenType font parsing in the Windows Adobe Type Manager Library. An attacker could exploit the bug either by having a user open a malicious document or go to a website embedded with specially crafted OpenType fonts.
  • Microsoft resolved nine vulnerabilities affecting the Hyper-V virtualization platform. CVE-2019-0719, CVE-2019-0721, CVE-2019-1389, CVE-2019-1397 and CVE-2019-1398 relate to critical remote code execution bugs. CVE-2019-0712, CVE-2019-1309, CVE-2019-1310 and CVE-2019-1399 are denial-of-service flaws rated important.

Microsoft shares information on Trusted Platform Module bug

Microsoft also issued an advisory (ADV190024) for a vulnerability (CVE-2019-16863) in the Trusted Platform Module (TPM) firmware. The company indicated there is no patch because the flaw is not in the Windows OS or a Microsoft application, but rather in certain TPM chipsets. Microsoft said users should contact their TPM manufacturer for further information.
TPM chips stop unauthorized modifications to hardware and use cryptographic keys to detect tampering in firmware and the operating system.

“Other software or services you are running might use this algorithm. Therefore, if your system is affected and requires the installation of TPM firmware updates, you might need to reenroll in security services you are running to remediate those affected services,” the advisory said.

The flaw affects TPM firmware based on the Trusted Computing Group specification family 2.0, according to Microsoft.

Microsoft releases more servicing stack updates

For the third month in a row, Microsoft released updates for the servicing stack for Windows client and server operating systems. Microsoft does not typically give a clear deadline for when a servicing stack update needs to be applied, but it has given as little as two months in some instances, Goettl said.

Servicing stack updates are not part of the cumulative updates for Windows but rather are installed separately.

Researchers say first BlueKeep exploit attempts underway

In security news beyond the November Patch Tuesday security updates, the first reports of the BlueKeep exploit targeting users began at the end of October when security researcher Kevin Beaumont spotted hacking attempts using the RDP flaw on his honeypots and reported the findings on his blog.

On May Patch Tuesday, Microsoft corrected the critical remote code execution flaw (CVE-2019-0708) dubbed BlueKeep that affects Windows 7 and Windows Server 2008/2008R2 systems. Due to the “wormable” nature of the vulnerability, many in IT felt BlueKeep might surpass the impact of the WannaCry outbreak. At one point there were more than a million public IPs running RDP that were vulnerable to a BlueKeep attack, which should serve as a wake-up call for IT to tighten up lax RDP practices, Goettl said.

“People should just be a little bit more intelligent about how they’re using RDP. You are opening a gateway into your network,” Goettl said. “There are people who have public-facing RDP that’s not behind a VPN, doesn’t require authentication. There are about four or five things people can do to better secure RDP services, especially when they’re exposing it to public IPs, but they’re just not doing it.”
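
The article does not spell out those four or five steps, but two of the most commonly cited ones can be handled from PowerShell; the sketch below requires Network Level Authentication for RDP and checks for the BlueKeep fix, with the caveat that KB4499175 applies to Windows 7 and Windows Server 2008 R2 and other operating systems use different update packages.

# Require Network Level Authentication for inbound RDP connections
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name 'UserAuthentication' -Value 1

# Confirm the CVE-2019-0708 (BlueKeep) update is installed on this Windows 7/Server 2008 R2 system
Get-HotFix -Id 'KB4499175' -ErrorAction SilentlyContinue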


Set up PowerShell script block logging for added security

PowerShell is an incredibly comprehensive and easy to use language. But administrators need to protect their organization from bad actors who use PowerShell for criminal purposes.

PowerShell’s extensive capabilities as a native tool in Windows make it tempting for an attacker to exploit the language. Increasingly, malicious software and bad actors are using PowerShell to either glue together different attack methods or run exploits entirely through PowerShell.

There are many methods and security best practices available to secure PowerShell, but one of the most valued is PowerShell script block logging. Script blocks are a collection of statements or expressions used as a single unit; in PowerShell, a script block is everything enclosed in curly braces.

Starting in Windows PowerShell v4.0, but significantly enhanced in Windows PowerShell v5.0, script block logging produces an audit trail of executed code. Windows PowerShell v5.0 introduced a logging engine that automatically deobfuscates code hidden with methods such as XOR, Base64 and ROT13. PowerShell also logs the original obfuscated code for comparison.

PowerShell script block logging helps with the postmortem analysis of events to give additional insights if a breach occurs. It also helps IT be more proactive with monitoring for malicious events. For example, if you set up Event Subscriptions in Windows, you can send events of interest to a centralized server for a closer look.

Set up a Windows system for logging

Two primary ways to configure script block logging on a Windows system are by either setting a registry value directly or by specifying the appropriate settings in a group policy object.

To configure script block logging via the registry, use the following code while logged in as an administrator:

New-Item -Path "HKLM:SOFTWAREWow6432NodePoliciesMicrosoftWindowsPowerShellScriptBlockLogging" -Force
Set-ItemProperty -Path "HKLM:SOFTWAREWow6432NodePoliciesMicrosoftWindowsPowerShellScriptBlockLogging" -Name "EnableScriptBlockLogging" -Value 1 -Force

You can set PowerShell logging settings within group policy, either on the local machine or through organizationwide policies.

Open the Local Group Policy Editor and navigate to Computer Configuration > Administrative Templates > Windows Components > Windows PowerShell > Turn on PowerShell Script Block Logging.

Set up PowerShell script block logging from the Local Group Policy Editor in Windows.

When you enable script block logging, the editor unlocks an additional option to log events via “Log script block invocation start / stop events” when a command, script block, function or script starts and stops. This helps trace when an event happened, especially for long-running background scripts. This option generates a substantial amount of additional data in your logs.
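
If you prefer the registry route for this setting as well, the invocation logging value lives under the same key used earlier; this is a minimal sketch that assumes the same Wow6432Node policy path.

# Also log script block invocation start/stop events (generates substantially more log data)
Set-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" -Name "EnableScriptBlockInvocationLogging" -Value 1 -Force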

PowerShell script block logging tracks executed scripts and commands run on the command line.

How to configure script block logging on non-Windows systems

PowerShell Core is the cross-platform version of PowerShell for use on Windows, Linux and macOS. To use script block logging on PowerShell Core, you define the configuration in the powershell.config.json file in the $PSHome directory, which is unique to each PowerShell installation.

From a PowerShell session, navigate to $PSHome and use the Get-ChildItem command to see if the powershell.config.json file exists. If not, create the file with this command:

sudo touch powershell.config.json

Modify the file using a tool such as the nano text editor and paste in the following configuration.

{
  "PowerShellPolicies": {
    "ScriptBlockLogging": {
      "EnableScriptBlockInvocationLogging": false,
      "EnableScriptBlockLogging": true
    }
  },
  "LogLevel": "verbose"
}

Test PowerShell script block logging

Testing the configuration is easy. From the command line, run the following:

PS /> { "log me!" }
"log me!"

Checking the logs on Windows

How do you know which entries to watch for? The main event ID is 4104, the ScriptBlockLogging entry whose information includes the user and domain, the logged date and time, the computer host and the script block text.

Open Event Viewer and navigate to the following log location: Applications and Services Logs > Microsoft > Windows > PowerShell > Operational.

Click on events until you find the one from the test that is listed as Event ID 4104. Filter the log for this event to make the search quicker.
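
You can also pull the 4104 entries directly from PowerShell instead of clicking through Event Viewer; a minimal example:

# Retrieve the most recent script block logging events from the Operational log
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-PowerShell/Operational'; Id = 4104 } -MaxEvents 10 |
    Format-List TimeCreated, Message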

Event 4104 in the Windows Event Viewer details PowerShell activity on a Windows machine.

On PowerShell Core on Windows, the log location is: Applications and Services Logs > PowerShellCore > Operational.

Log location on non-Windows systems

On Linux, PowerShell script block logging will log to syslog. The location will vary based on the distribution. For this tutorial, we use Ubuntu, which writes syslog to /var/log/syslog.

Run the following command to show the log entry; you must elevate with sudo in this example and on most typical systems:

sudo cat /var/log/syslog | grep "{ log me! }"

2019-08-20T19:40:08.070328-05:00 localhost powershell[9610]: (6.2.2:9:80) [ScriptBlock_Compile_Detail:ExecuteCommand.Create.Verbose] Creating Scriptblock text (1 of 1):#012{ "log me!" }#012#012ScriptBlock ID: 4d8d3cb4-a5ef-48aa-8339-38eea05c892b#012Path:

To set up a centralized server on Linux, things are a bit different since you’re using syslog by default. You can use rsyslog to ship your logs to a log aggregation service to track PowerShell activity from a central location.


Cloud adoption a catalyst for IT modernization in many orgs

One of the biggest changes for administrators in recent years is the cloud. Its presence requires administrators to migrate from their on-premises way of thinking.

The problem isn’t the cloud. After all, there should be less work if someone else looks after the server for you. The arrival of the cloud has brought to light some of the industry’s outdated methodologies, which is prompting this IT modernization movement. Practices in many IT shops were not as rigid or regimented before the cloud came along because external access was limited.

Changing times and new technologies spur IT modernization efforts

When organizations were exclusively on premises, it was easy enough to add finely controlled firewall rules to only allow certain connections in and out. Internal web-based applications did not need HTTPS — just plain HTTP worked fine. You did not have to muck around with certificates, which seem to always be difficult to comprehend. Anyone on your network was authorized to be there, so it didn’t matter if data was unencrypted. The risk, a lot of us told ourselves, wasn’t worth the effort, and the users would have no idea anyway.

You would find different ways to limit the threats to the organization. You could implement 802.1X, which only allowed authorized devices on the network. This reduced the chances of a breach because the attacker would need both physical access to the network and an approved device. Active Directory could be messy; IT had a relaxed attitude about account management and cleanup, which was fine as long as everyone could do their job.

The pre-cloud era allowed for a lot of untidiness and shortcuts, because the risk of these things affecting the business in a drastic way was smaller. Administrators who stepped into a new job would routinely inherit a mess from the last IT team. There was little incentive to clean things up; just keep those existing workloads running. Now that there is increased risk with exposing the company’s systems to the world via cloud, it’s no longer an option to keep doing things the same way just to get by.

One example of how the cloud forces IT practices to change is the default configuration when you use Microsoft’s Azure Active Directory. This product syncs every Active Directory object to the cloud unless you apply filtering. The official documentation states that this is the recommended configuration. Think about that: Every single overlooked, basic password that got leaked several years ago during the LinkedIn breach is now in the cloud for use by anyone in the world. Those accounts went from a forgotten mess pushed under the rug years ago to a ticking time bomb waiting for attackers to hit a successful login as they spin through their lists of millions of username and password combos.

Back on the HTTP/HTTPS side, users now want to work from home or anywhere they might have an internet connection. They also want to do it from any device, such as their personal laptop, mobile phone or tablet. Exposing internal websites was once — and still is in many scenarios — a case of poking a hole in the firewall and hoping for the best. With an unencrypted HTTP site, all data pushed to and from that endpoint, from anything the user sees to anything they enter, such as a username and password, is at risk. Your users could be working from a free McDonald’s Wi-Fi connection or at any airport in the world. It’s not hard for attackers to set up fake relay access points, listen to all the data and read anything that is not encrypted. Look up WiFi Pineapple for more information about the potential risks.

How to accommodate your users and tighten security

As you can see, it’s easy to end up in a high-risk situation if IT focuses on making users happy instead of company security. How do you make the transition to a safer environment? At a high level, there are several immediate actions to take:

  • Clean up Active Directory. Audit accounts, disable ones not in use and organize your organizational units so they are clear and logical. Implement an account management process from beginning to end (a minimal cleanup sketch follows this list).
  • Review your password policy. If you have no other protection, cycle your passwords regularly and enforce some level of complexity. Look at other methods for added protection, such as multifactor authentication (MFA), which Azure Active Directory provides and which can do away with password cycling. For more security, combine MFA with conditional access, so a user in your trusted network or using a trusted device doesn’t even need MFA. The choice is yours.
  • Review and report on account usage. When something is amiss with account usage, you should know as soon as possible to take corrective action. Technologies such as the Azure Active Directory identity protection feature issue alerts and remediate suspicious activity, such as a login from a location that is not typical for that account.
  • Implement HTTPS on all sites. You don’t have to buy a certificate for each individual site to enable HTTPS. Save money and generate them yourself if the site is only for trusted computers on which you can deploy the certificate chain. Another option is to buy a wildcard certificate to use everywhere. Once the certificate is deployed, you can expose the sites you want with Azure Active Directory Application Proxy rather than open ports in your firewall. This gives the added benefit of forcing an Azure Active Directory login to apply MFA and identity protection before the user gets to the internal site, regardless of the device and where they are physically located.
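
As a starting point for the Active Directory cleanup item above, here is a minimal sketch using the ActiveDirectory module; the 90-day threshold and the target organizational unit are assumptions to adjust for your environment.

# Find enabled user accounts that have not logged on in the last 90 days
$stale = Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -UsersOnly | Where-Object { $_.Enabled }

# Review the list, then disable the accounts and move them to a holding OU (hypothetical OU path)
$stale | Disable-ADAccount
$stale | Move-ADObject -TargetPath 'OU=Disabled Accounts,DC=contoso,DC=com'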

These are a few of the critical aspects to think about when changing your mindset from on-premises to cloud. This is a basic overview of the areas to give a closer look. There’s a lot more to consider, depending on the cloud services you plan to use.


Implement automated employee onboarding with PowerShell

One of the most common tasks help desk technicians and system administrators handle is provisioning all the resources to onboard new employees.

Depending on the organization, these tasks may include creating Active Directory user accounts, creating home folders, provisioning new Office 365 mailboxes and setting up a VoIP extension in the phone system. With a little PowerShell coding, you can put together an automated employee onboarding script that does a majority of this work in little to no time.

To automate this process, it’s essential to define all the tasks. Many companies have a document that outlines the steps to onboard a new employee, such as the following:

  • create an Active Directory user account;
  • create a user folder on a file share; and
  • assign a mobile device.

When building a PowerShell script, start by researching if the basics required for automation are available. For example, does the system you’re using to assign a mobile device have an API? If not, then it can’t be completely automated. Rather than bypass this step, you can still add something to your script that sends an email to the appropriate person to complete the setup.

For other tasks that PowerShell can automate, you can start by scaffolding out some code in your script editor.

Build the framework for the script

Start by adding some essential functions to encapsulate each task. The following code is an example of build functions for each of the tasks we want to automate.

param(
[Parameter()]
[ValidateNotNullOrEmpty()]
[string]$CsvFilePath
)

function New-CompanyAdUser {
[CmdletBinding()]
param
(

)

}

function New-CompanyUserFolder {
[CmdletBinding()]
param
(

)

}

function Register-CompanyMobileDevice {
[CmdletBinding()]
param
(

)

}

function Read-Employee {
[CmdletBinding()]
param
(

)

}

This isn’t our final code. This is just a brainstorming exercise.

Add the code to receive input

Notice the param block at the top and the Read-Employee function. This function receives any type of input, such as a CSV file or database. When we create a function, it’s easy to modify the code if the method changes.

For now, we are using a CSV file to make the Read-Employee script below. By default, this function takes the CSV file path when the script runs.

function Read-Employee {
[CmdletBinding()]
param
(
[Parameter()]
[ValidateNotNullOrEmpty()]
[string]$CsvFilePath = $CsvFilePath
)

Import-Csv -Path $CsvFilePath

}

Add a call to Read-Employee below this function.

We have a CSV file from human resources that looks like this:

FirstName,LastName,Department
Adam,Bertram,Accounting
Joe,Jones,HR

We’ll name the script New-Employee.ps1 and run it with the CsvFilePath parameter.

./New-Employee.ps1 -CsvFilePath './Employees.csv'

Developing the functions

Next, fill in the other functions. This is just an example but should give you a good idea of how you could build your code for the specifics your automated employee onboarding script should have. You can find more details on the creation of the New-CompanyAdUser function in this blog post.

param(
[Parameter()]
[ValidateNotNullOrEmpty()]
[string]$CsvFilePath
)

function New-CompanyAdUser {
[CmdletBinding()]
param
(
[Parameter(Mandatory)]
[ValidateNotNullOrEmpty()]
[pscustomobject]$EmployeeRecord
)

## Generate a random password (GeneratePassword requires the System.Web assembly, which is not loaded by default)
Add-Type -AssemblyName System.Web
$password = [System.Web.Security.Membership]::GeneratePassword((Get-Random -Minimum 20 -Maximum 32), 3)
$secPw = ConvertTo-SecureString -String $password -AsPlainText -Force

## Generate a first initial/last name username
$userName = "$($EmployeeRecord.FirstName.Substring(0,1))$($EmployeeRecord.LastName))"

## Create the user
$NewUserParameters = @{
GivenName = $EmployeeRecord.FirstName
Surname = $EmployeeRecord.LastName
Name = $userName
AccountPassword = $secPw
}
New-AdUser @NewUserParameters

## Add the user to the department group
Add-AdGroupMember -Identity $EmployeeRecord.Department -Members $userName
}

function New-CompanyUserFolder {
[CmdletBinding()]
param
(
[Parameter(Mandatory)]
[ValidateNotNullOrEmpty()]
[pscustomobject]$EmployeeRecord
)

$fileServer = 'FS1'

$null = New-Item -Path "\\$fileServer\Users\$($EmployeeRecord.FirstName)$($EmployeeRecord.LastName)" -ItemType Directory

}

function Register-CompanyMobileDevice {
[CmdletBinding()]
param
(
[Parameter(Mandatory)]
[ValidateNotNullOrEmpty()]
[pscustomobject]$EmployeeRecord
)

## Send an email for now. If we ever can automate this, we'll do it here.
$sendMailParams = @{
'From' = '[email protected]'
'To' = '[email protected]'
'Subject' = 'A new mobile device needs to be registered'
'Body' = "Employee: $($EmployeeRecord.FirstName) $($EmployeeRecord.LastName)"
'SMTPServer' = 'smtpserver.something.local'
'SMTPPort' = '587'
}

Send-MailMessage @sendMailParams

}

function Read-Employee {
[CmdletBinding()]
param
(
[Parameter()]
[ValidateNotNullOrEmpty()]
[string]$CsvFilePath = $CsvFilePath
)

Import-Csv -Path $CsvFilePath

}

Read-Employee

Calling the functions

Once you build the functions, pass each of the employee records returned from Read-Employee to each function, as shown below.

$functions = 'New-CompanyAdUser','New-CompanyUserFolder','Register-CompanyMobileDevice'
foreach ($employee in (Read-Employee)) {
foreach ($function in $functions) {
& $function -EmployeeRecord $employee
}
}

By standardizing the functions to take a single parameter, EmployeeRecord, which corresponds to a row in the CSV file, you can define the functions you want to call in an array and loop over each of them.

Click here to download the code used in this article on GitHub.


Know your Office 365 backup options — just in case

Exchange administrators who migrate their email to Office 365 reduce their infrastructure responsibilities, but they must not ignore areas related to disaster recovery, security, compliance and email availability.

Different businesses rely on different applications for their day-to-day operations. Healthcare companies use medical records systems to treat patients, while a manufacturing plant needs its ERP system to track production. But generally speaking, most businesses, regardless of their vertical, rely on email to communicate with their co-workers and customers. If the messaging platform goes down for any amount of time, users and the business suffer. A move to Microsoft’s cloud-based collaboration platform introduces new administrative challenges, such as determining whether the organization needs an Office 365 backup product.

IT pros tasked with all things related to Exchange Server administration — managing multiple email services, including system uptime; mailbox recoverability; system performance; maintenance; user setups; and general reactive system issues — will have to adjust when they move to Office 365. Many of the responsibilities related to system performance, maintenance and uptime become the responsibility of Microsoft. Unfortunately, not all of these outsourced activities meet the expectations of Exchange administrators. Some of them will resort to alternative methods to ensure their systems have the right protections to avoid serious disasters.

To keep on-premises Exchange running with high uptime, Exchange admins rely on setting up the environment with adequate redundancies, such as virtualization with high availability, clustering and proper backup if a recovery is required. In a hosted Exchange model with Office 365, email administrators rely heavily on the hosting provider to manage those redundancies and ensure system uptime. However, despite the promised service-level agreements (SLAs) by Microsoft, there are still some gaps that Exchange administrators must plan for to get the same level of system availability and data protection they previously experienced with their legacy on-premises Exchange platform.

Hosted email in Exchange Online, which can be purchased as a stand-alone service or as part of Office 365, has certainly attracted many companies. Microsoft did not provide exact numbers in its most recent quarterly report, but it is estimated to be around 180 million Office 365 commercial seats. Despite the popularity of the platform, one would assume Microsoft would offer an Office 365 backup option at minimum for the email service. Microsoft does, but not in the way Exchange administrators know backup and disaster recovery.

Microsoft does not have backups for Exchange Online

Microsoft provides some level of recoverability with mailboxes stored in Exchange Online. If a user loses email, then the Exchange administrator can restore deleted email by restoring an entire mailbox with PowerShell or through the Outlook recycle bin.

The Undo-SoftDeletedMailbox PowerShell command recovers the deleted mailbox, but there are some limitations. The command is only useful when a significant number of folders have been deleted from a mailbox and the recovery attempt occurs within 30 days. After 30 days, the content is not recoverable.
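
A minimal sketch of that recovery path in Exchange Online PowerShell looks like the following; the mailbox identity is hypothetical, and the command only succeeds within the 30-day window described above.

# List mailboxes that were removed but are still within the retention window
Get-Mailbox -SoftDeletedMailbox | Select-Object DisplayName, WhenSoftDeleted

# Restore a specific soft-deleted mailbox (hypothetical identity)
Undo-SoftDeletedMailbox -SoftDeletedObject 'jsmith@contoso.com'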

Due to this limited backup functionality, many administrators look to third-party Office 365 backup vendors such as SkyKick, BitTitan, Datto and Veeam to expand their backup and recovery capabilities beyond the 30 days that Microsoft offers. At the moment, this is the only way for Exchange administrators to satisfy their organization’s backup and disaster recovery requirements.

Microsoft promises 99.9% uptime with email

No cloud provider is immune to outages and Microsoft is no different. Despite instances of service loss, Microsoft guarantees at least 99.9% uptime for Office 365. This SLA translates into no more than nine hours of downtime per year.

For most IT executives, this guarantee does not absolve them of the need to plan for possible downtime. Administrators should investigate the costs and the technical abilities of an email continuity service from vendors, including Mimecast, Barracuda or TitanHQ, to avoid trouble from unplanned outages.

Email retention policies can go a long way for sensitive content

The ability to define different types of data access and retention policies is just as important as backup and disaster recovery for organizations with compliance requirements.

Groups that need to prevent accidental email deletion will need to work with the Office 365 administrator to set up the appropriate on-hold policies or archiving configuration to protect that content. These are native features in Exchange Online that administrators must become familiar with to ensure they can meet the different legal requirements of the various groups in their organization.

Define backup retention policies to meet business needs

For most backup offerings for on-premises Exchange, storage is always a concern for administrators. Since it is generally the dictating factor behind the retention period of email backup, Exchange admins have to keep disk space in mind when they determine the best backup scheme for their organization. Hourly, daily, weekly, monthly and quarterly backup schedules are influenced by the amount of available storage.

Office 365 backup products for email from vendors such as SkyKick, Dropsuite, Acronis and Datto ease the concerns related to storage space. This gives the administrator a way to develop the best protection scheme for their company without the added worry of wondering when to purchase additional storage hardware to accommodate these backups.


Lessons learned from PowerShell Summit 2019

Most Windows administrators have at least dabbled with PowerShell to get started on their automation journey, but for more advanced practices, it helps to attend a conference, such as the PowerShell Summit.

For the second straight year, I attended the PowerShell + DevOps Global Summit in Bellevue, Wash., earlier this year. As an avid PowerShell user and evangelist, I greatly enjoy being around fellow community members to talk shop about how we use PowerShell in our jobs as well as catch the deep dive sessions.

This year was a bit different for me as I also presented a session, “Completely Automate Managing Windows Software … Forever,” to explain how I use Chocolatey with PowerShell to automate the deployment of third-party software in my full-time position.

There’s a lot of value in the sessions at PowerShell Summit. Administrators and developers who rely on PowerShell get a chance to learn something new and build their skill set. The videos from the sessions are on YouTube, but if you don’t attend in person you will miss out on the impromptu hallway discussions. These gatherings can be a great way to meet a lot of veteran IT professionals, community leads and even PowerShell team members. Jeffrey Snover, the inventor of PowerShell, was also in attendance.

In this article, I will cover my experiences at this year’s PowerShell Summit and some of the lessons learned during the week.

AWS Serverless Computing

Serverless computing is a very hot topic for many organizations that want to cut costs and reduce the work it takes to support a Windows deployment. With serverless computing, there is no need to manage a Windows Server machine and all its requisite setup and maintenance work. You can use an API to run PowerShell, and it will scale automatically.

Andrew Pearce, a senior systems development engineer at AWS, presented a session entitled “Unleash Your PowerShell With AWS Lambda and Serverless Computing.” Pearce’s talk covered how to use Amazon’s event-driven computing with PowerShell Core.

I have not tried any sort of serverless computing, but it didn’t take much to see its potential and advantages during the demonstration. Pearce explained that a PowerShell function can run from an event, such as when an image is placed in an AWS Simple Storage Service bucket, to convert the image to multiple resolutions depending on the organization’s need. Another possibility is to run a PowerShell function in response to an IoT event, such as someone ringing a doorbell.

Simple Hierarchy in PowerShell

Simple Hierarchy in PowerShell (SHiPS) is an area in PowerShell that looks interesting, but one that I have not had much experience with. The concept behind SHiPS is similar to traversing a file system in PowerShell; with SHiPS, you can create a provider that presents data as a hierarchical file system.

Providers have been a part of PowerShell since version 1.0 and give access to data and components not easily reached from the command line, such as the Windows certificate store. One common example is the PowerCLI datastore provider, which lets users access a vSphere datastore from the command line.

You can see what providers you have on a system with the Get-PSProvider command.

The Get-PSProvider cmdlet lists the providers available on the system.

Another familiar use of a PowerShell provider is the Windows registry, which PowerShell can navigate like a traditional file system.

Glenn Sarti of Puppet gave a session entitled “How to Become a SHiPS Wright – Building With SHiPS.” Attempting to write your own provider from scratch is a complex undertaking that ordinarily requires advanced programming skill and familiarity with the PowerShell software development kit. SHiPS attempts to bypass this complexity and make provider development easier by letting you write provider code in PowerShell. Sarti explained that SHiPS reduces the amount of code needed to write a module from thousands of lines to less than 20 in some instances.

In his session, Sarti showed how to use SHiPS to create an example provider using the PowerShell Summit agenda and speakers as data. Watching this session sparked an idea to create a provider that presents a Chocolatey repository as if it were a file system.

PowerShell Remoting Internals

In this deep dive session, Paul Higinbotham, a member of the PowerShell team, covered how PowerShell remoting works. Higinbotham explained PowerShell’s five different transports to remoting, or protocols, it can use to run remote commands on another system.

In Windows, the most popular is WinRM since PowerShell is most often used on Windows. For PowerShell Core, OpenSSH is the best option for cross-platform use since it is available on both Windows and Linux. The advantage here is you can run PowerShell Core scripts from Windows to Linux and vice versa. Since I work in a mostly Linux environment, using Secure Shell (SSH) and PowerShell together makes a lot of sense.
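
A quick example of what SSH-based remoting looks like in practice, assuming PowerShell Core and OpenSSH are configured on both ends and that the host and user names are placeholders:

# Start an interactive PowerShell Core session over SSH instead of WinRM
Enter-PSSession -HostName linuxbox01 -UserName admin

# Or run a script block on the remote machine and return the results
Invoke-Command -HostName linuxbox01 -UserName admin -ScriptBlock { Get-Process | Select-Object -First 5 }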

This session taught me about the existence of interprocess communication remoting in PowerShell Core. This is accomplished with the Enter-PSHostProcess command, and the main perk is the ability to debug scripts on a remote system from your local machine.
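
A minimal sketch of that workflow, with a hypothetical process ID, looks like this; when the target process is on another machine, you would enter a remoting session to it first.

# List other PowerShell host processes you can attach to
Get-PSHostProcessInfo

# Attach to one of them, then pick a runspace and start debugging it
Enter-PSHostProcess -Id 4816
Get-Runspace
Debug-Runspace -Id 1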

Toward the end of the presentation, Higinbotham shared a great diagram of the remoting architecture and went over what each component does during a remoting session.


How to automate patch management in Windows

Patch Tuesday comes every month like clockwork for IT administrators. For routine jobs such as patch management in Windows, it’s essential to use automation to make this chore more tolerable.

There are many products to help you deploy Windows patches to your systems, but they’re usually expensive. If you can’t afford these offerings, another option is to roll your own automated Windows patching routine. Using PowerShell, IT teams can test, deploy and verify Windows updates across hundreds of machines using nothing but some PowerShell kung fu and some prebuilt modules.

Prerequisites for automated patch management in Windows

To follow along, you should have the following prerequisites set up:

  • Windows PowerShell 5.1 on a client;
  • PowerShell Remoting available on the remote systems to patch;
  • logged in or have access to an account with local administrator permissions on the remote systems;
  • an Active Directory environment;
  • the Active Directory PowerShell module on your client; and
  • a Windows Server Update Services (WSUS) server installed and set up to manage your systems.

Set up a test environment

As most administrators know, you never push out patches directly to your production systems, which means you need to set up a test environment. You should configure this with a sampling of the operating systems and configurations of all systems that receive patches.

To determine what you have in your inventory, use the following script. It queries all Active Directory computers in the domain and groups them by the operating system.

$computerCount = 2
$properties = @( 
    @{Name='OperatingSystem';Expression={$_.Name}},
    @{Name='TotalCount';Expression={$_.Group.Count}},
    @{Name='TestComputers';Expression={$_.Group | Select-Object -ExpandProperty Name -First $computerCount }} 
) 
$testGroups = Get-ADComputer -Filter * -Properties OperatingSystem | Group-Object -Property OperatingSystem | Select-Object -Property $properties
$testGroups

When the script runs, it groups the output by the type of machines and how many there are.

OperatingSystem                TotalCount TestComputers
---------------                ---------- -------------
Windows Server 2016 Datacenter          3 {SRDC01, SCRIPTRUNNER01}

Now that you know what operating systems you have, you can either convert the physical machines to virtual ones or perhaps build new virtual machines in your hypervisor of choice. You can do that with PowerShell, but that is outside of the scope of this article. This tutorial will continue on the assumption you executed the conversion and are ready to proceed.

Deploying Windows updates

Once you’ve got your test VMs set up, check to see if there are new patches available. You will need to use the WSUS server to find this information.
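
One way to pull that list is with the UpdateServices module that installs alongside the WSUS administration tools; a minimal sketch, run on the WSUS server itself, with a hypothetical target group name:

# Load the WSUS cmdlets and list critical updates that clients still need but are not yet approved
Import-Module UpdateServices
Get-WsusUpdate -Classification Critical -Approval Unapproved -Status Needed

# Optionally approve those updates for a WSUS computer group used for testing
Get-WsusUpdate -Classification Critical -Approval Unapproved -Status Needed |
    Approve-WsusUpdate -Action Install -TargetGroupName 'Patch Test'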

When you’ve discovered the Microsoft Knowledge Base (KB) IDs of all patches to test, you can deploy these patches using the PSWindowsUpdate module. To download and install, use this command:

Install-Module PSWindowsUpdate

Next, deploy the patches, but first, you’ll need to ensure you’ve got the appropriate firewall port exceptions for the Windows Firewall enabled. Here’s a quick PowerShell command to enable it on remote systems:

New-NetFirewallRule -DisplayName "Allow PSWindowsUpdate" -Direction Inbound -Program "%SystemRoot%\System32\dllhost.exe" -RemoteAddress Any -Action Allow -LocalPort 'RPC' -Protocol TCP

Next, you can run a quick test by running a simple command such as Get-WUHistory to see if it returns an error or if it returns a patch history. If it’s the latter, then you can proceed.

Now it’s time to deploy the Windows patches to the test groups. For this tutorial, deploy KB ID KB4091664. Start by copying the PSWindowsUpdate module to the remote computers, and then initiate the install. Also, schedule a reboot during the maintenance window. In this instance, that’s a time two hours from now.

foreach ($computer in $testGroups.TestComputers) {
    Copy-Item -Path "$Env:PROGRAMFILES\WindowsPowerShell\Modules\PSWindowsUpdate" -Destination "\\$computer\c$\Program Files\WindowsPowerShell\Modules" -Recurse
    Install-WindowsUpdate -ComputerName $computer -KBArticleIds 'KB4091664' -Schedule (Get-Date).AddHours(2)
}

X ComputerName Result     KB          Size Title
- ------------ ------     --          ---- -----
1 scriptrun... Accepted   KB4091664    1MB 2018-09 Update for Windows Server 2016 for x64-based Systems (KB4091664)

This script starts the patch installation on each computer. To monitor the progress, you can use the Get-WUHistory command.

Get-WUHistory -ComputerName scriptrunner01 -Last 1
ComputerName Operationname  Result     Date                Title
------------ -------------  ------     ----                -----
scriptrun... Installation   InProgress 4/4/2019 9:31:21 PM 2018-09 Update for Windows Server 2016 for x64-based Systems (KB4091664)

Dive deeper into the PSWindowsUpdate module

This article just covers the basics of rolling out an automated way to handle patch management in Windows with PowerShell. PSWindowsUpdate is a great time-saver with extensive functionality. It’s worth investigating the help in this PowerShell module to see how you can customize it based on your needs.


How Windows Admin Center stacks up to other management tools

Microsoft took a lot of administrators by surprise when it released Windows Admin Center, a new GUI-based management tool, last year. But is it mature enough to replace third-party offerings that handle some of the same tasks?

Windows Admin Center is a web-based management environment for Windows Server 2012 and up that exposes roughly 50% of the capabilities of the traditional Microsoft Management Console-based GUI environment. Most common services — DNS, Dynamic Host Configuration Protocol, Event Viewer, file sharing and even Hyper-V — are available within the Windows Admin Center, which can be installed on a workstation with a self-hosted web server built in, or on a traditional Windows Server machine using IIS.
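
If you want to stand up the gateway without clicking through the installer, the Windows Admin Center MSI supports a quiet install; this sketch uses the SME_PORT and SSL_CERTIFICATE_OPTION properties documented by Microsoft, with the download path and port chosen as assumptions.

# Download the current Windows Admin Center build and install it silently on port 443 with a self-signed certificate
Invoke-WebRequest -Uri 'https://aka.ms/WACDownload' -OutFile "$env:TEMP\WindowsAdminCenter.msi"
msiexec.exe /i "$env:TEMP\WindowsAdminCenter.msi" /qn /L*v "$env:TEMP\wac-install.log" SME_PORT=443 SSL_CERTIFICATE_OPTION=generate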

It also covers several Azure management scenarios, including managing Azure virtual machines when you link your cloud subscription to the Windows Admin Center instance you use.

Among its many features, the Windows Admin Center dashboard provides an overview of the selected Windows machine, including the current state of the CPU and memory.

There are a number of draws for Windows Admin Center. It’s free and designed to be developed out of band, or shipped as a web download, rather than included in the Windows Server product. So, Microsoft can update it more frequently than the core OS.

Microsoft said, over time, most of the Windows administrative GUI tools will move to Windows Admin Center. It makes sense to spin up an instance of it on a management workstation, an old server or even a lightweight VM on your virtualization infrastructure. Windows Admin Center is a tool you will need to get familiar with even if you have a larger, third-party OS management tool.

How does Windows Admin Center compare with similar products on the market? Here’s a look at the pros and cons of each.

Goverlan Reach

Goverlan Reach is a remote systems management and administration suite that covers virtually any aspect of a Windows system configurable via Windows Management Instrumentation. Goverlan is a fat client (a normal Windows application, not a web app), so it runs on a regular workstation. Goverlan provides one-stop shopping for Windows administration in a reasonably well-laid-out interface. There is no Azure support.

For the extra money, you get a decent engine that allows you to automate certain IT processes and create a runbook of typical actions you would take on a system. You also get built-in session capturing and control without needing to connect to each desktop separately, as well as more visibility into software updates and patch management for not only Windows, but also major third-party software such as Chrome, Firefox and Adobe Reader.

Goverlan Reach has three editions. The Standard version is $29 per month and offers remote control functions. The Professional version costs $69 per month and includes Active Directory management and software deployment. The Enterprise version with all the advanced features costs $129 per month and includes compliance and more advanced automation abilities.

Editor’s note: Goverlan paid the writer to develop content marketing materials for its product in 2012 and 2013, but there is no ongoing relationship.

PRTG Network Monitor

Paessler’s PRTG Network Monitor tracks the uptime, health, disk space and performance of servers and devices on your network, so you can proactively respond to issues and prevent downtime.


PRTG monitors mail servers, web servers, database servers, file servers and others. It has sensors built in for the attendant protocols of each kind of server. You can build your own sensors to monitor key aspects of homegrown applications. PRTG logs all this monitoring information for analysis to build a baseline performance profile to develop ways to improve stability and performance on your network.

When looking at how PRTG stacks up against Windows Admin Center, it’s only really comparable from a monitoring perspective. The Network Monitor product offers little from a configuration standpoint. While you could install the alerting software and associated agents on Azure virtual machines in the cloud, there’s no real native cloud support; it treats the cloud virtual machines simply as another endpoint. 

It’s also a paid-for product, starting at $1,600 for 500 sensors and going all the way up to $60,000 for unlimited sensors. It does offer value and is perhaps the best monitoring suite out there from an ease-of-use standpoint, but most shops would most likely choose it in addition to Windows Admin Center, not in lieu of it.

SolarWinds

SolarWinds has quite a few products under its systems management umbrella, including server and application monitoring; virtualization administration; storage resource monitoring; configuration and performance monitoring; log analysis; access rights auditing; and up/down monitoring for networks, servers and applications. While there is some ability to administer various portions of Windows with the Access Rights Manager or Virtualization Manager, these SolarWinds products are very heavily tilted toward monitoring, not administration.

The SolarWinds modules all start with list prices anywhere from $1,500 to $3,500, so you quickly start incurring a substantial expense to purchase the modules needed to administer all the different detailed areas of your Windows infrastructure. While these products are surely more full-featured than Windows Admin Center, the delta might not be worth $3,000 to your organization. For my money, PRTG becomes a better value for the money if monitoring is your goal.

Nagios

Nagios has a suite of tools to monitor infrastructure, from individual systems to protocols and applications, along with database monitoring, log monitoring and, perhaps important in today’s cloud world, bandwidth monitoring.

Nagios has long been available as an open source tool that’s very powerful, and the free version, Nagios Core, certainly has a place in any moderately complex infrastructure. The commercial versions of Nagios XI — $1,995 for standard and $3,495 for enterprise — have lots of shine and polish, but lack any sort of interface to administer systems.

The price is right, but its features still lag behind

There is clearly a place for Windows Admin Center in every Windows installation, given it is free, very functional (although there are some bugs that will get worked out over time) and gives you a vendor-supported way of both monitoring and administering Windows.

However, Windows Admin Center lacks quite a bit of monitoring prowess and also doesn’t address all potential areas of Windows administration. There is no clear-cut winner out of all the profiled tools in this article. If anything, Windows Admin Center should be thought of as an additional tool to use in conjunction with some of these other products.
