Category Archives: Expert advice on Windows-based systems and hardware

How to install the Windows Server 2019 VPN

Many organizations rely on a virtual private network, particularly those with a large number of remote workers who need access to resources.

While there are numerous vendors selling their VPN products in the IT market, Windows administrators also have the option to use the built-in VPN that comes with Windows Server. One of the benefits of using Windows Server 2019 VPN technology is that there is no additional cost to your organization once you purchase the license.

Another perk of using a Windows Server 2019 VPN is that integrating the VPN with the server operating system reduces the number of infrastructure components that can break. An organization that uses a third-party VPN product gives its IT staff an additional hoop to jump through if remote users can’t connect to the VPN and lose access to the network resources they need to do their jobs.

One relatively new feature in Windows Server 2019 VPN functionality is the Always On VPN, which some users in various message boards and blogs have speculated will eventually replace DirectAccess, which remains supported in Windows Server 2019. Microsoft cites several advantages of Always On VPN, including granular app- and traffic-based rules to restrict network access, support for both RSA and elliptic curve cryptography algorithms, and native Extensible Authentication Protocol support to enable the use of a wider variety of advanced authentication methods.

Microsoft documentation recommends that organizations currently using DirectAccess evaluate Always On VPN functionality before migrating their remote access processes.

The following transcript of the video tutorial by contributor Brien Posey explains how to install the Windows Server 2019 VPN role.

In this video, I want to show you how to configure Windows Server 2019 to act as a VPN server.

Right now, I’m logged into a domain-joined Windows Server 2019 machine and I have Server Manager open, so let’s go ahead and get started.

The first thing that I’m going to do is click on Manage and then I’ll click on Add Roles and Features.

This is going to launch the Add Roles and Features wizard.

I’ll go ahead and click Next on the Before you begin screen.

For the installation type, I’m going to choose Role-based or feature-based installation and click Next. From there I’m going to make sure that my local server is selected. I’ll click Next.

Now I’m prompted to choose the server role that I want to deploy. You’ll notice that right here we have Remote Access. I’ll go ahead and select that now. Incidentally, in the past, this was listed as Routing and Remote Access, but now it’s just listed as Remote Access. I’ll go ahead and click Next.

I don’t need to install any additional features, so I’ll click Next again, and I’ll click Next [again].

Now I’m prompted to choose the Role Services that I want to install. In this case, my goal is to turn the server into a VPN server, so I’m going to choose DirectAccess and VPN (RAS).

There are some additional features that are going to need to be installed to meet the various dependencies, so I’ll click Add Features and then I’ll click Next. I’ll click Next again, and I’ll click Next [again].

I’m taken to a confirmation screen where I can make sure that all of the necessary components are listed. Everything seems to be fine here, so I’ll click Install and the installation process begins.

So, after a few minutes the installation process completes. I’ll go ahead and close this out and then I’ll click on the Notifications icon. We can see that some post-deployment configuration is required. I’m going to click on the Open the Getting Started Wizard link.

I’m taken into the Configure Remote Access wizard and you’ll notice that we have three choices here: Deploy both DirectAccess and VPN, Deploy DirectAccess Only and Deploy VPN Only. I’m going to opt to Deploy VPN Only, so I’ll click on that option.

I’m taken into the Routing and Remote Access console. Here you can see our VPN server. The red icon indicates that it hasn’t yet been configured. I’m going to right-click on the VPN server and choose the Configure and Enable Routing and Remote Access option. This is going to open up the Routing and Remote Access Server Setup Wizard. I’ll go ahead and click Next.

I’m asked how I want to configure the server. You’ll notice that the very first option on the list is Remote access dial-up or VPN. That’s the option that I want to use, so I’m just going to click Next since it’s already selected.

I’m prompted to choose my connections that I want to use. Rather than using dial-up, I’m just going to use VPN, so I’ll select the VPN checkbox and click Next.

The next thing that I have to do is tell Windows which interface connects to the internet. In my case it’s this first interface, so I’m going to select that and click Next.

I have to choose how I want IP addresses to be assigned to remote clients. I want those addresses to be assigned automatically, so I’m going to make sure Automatically is selected and click Next.

The next prompt asks me if I want to use a RADIUS server for authentication. I don’t have a RADIUS server in my own organization, so I’m going to choose the option No, use Routing and Remote Access to authenticate connection requests instead. That’s selected by default, so I can simply click Next.

I’m taken to a summary screen where I have the chance to review all of the settings that I’ve enabled. If I scroll through this, everything appears to be correct. I’ll go ahead and click Finish.

You can see that the Routing and Remote Access service is starting and so now my VPN server has been enabled.
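
If you would rather script the deployment than click through the wizard, the role installation and VPN-only configuration can also be done in PowerShell. The following is a minimal sketch, assuming a default Windows Server 2019 installation; the Install-RemoteAccess step mirrors the wizard’s Deploy VPN Only choice:

# Install the Remote Access role along with the VPN role service
Install-WindowsFeature -Name RemoteAccess, DirectAccess-VPN -IncludeManagementTools

# Configure a VPN-only deployment, the equivalent of choosing Deploy VPN Only in the wizard
Install-RemoteAccess -VpnType Vpn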


AWS Outposts vs. Azure Stack vs. HCI

Giants Amazon and Microsoft offer cloud products and services that compete in areas usually reserved for the strengths that traditional hyper-converged infrastructure platforms bring to the enterprise IT table. These include hybrid cloud offerings AWS Outposts, which Amazon made generally available late last year, and Azure Stack from Microsoft.

An integrated hardware and software offering, Azure Stack is designed to deliver Microsoft Azure public cloud services so enterprises can construct hybrid clouds in a local data center. It delivers IaaS and PaaS for organizations developing web apps. By sharing its code, APIs and management portal with Microsoft Azure, Azure Stack provides a common platform to address hybrid cloud issues, such as maintaining consistency between cloud and on-premises environments. Azure Stack is for those who want the benefits of a cloud-like platform but must keep certain data private due to regulations or some other constraint.

AWS Outposts is Amazon’s on-premises version of its IaaS offering. Amazon targets AWS Outposts at those who want to run workloads on Amazon Web Services, but instead of in the cloud, do so inside their own data centers to better meet regulatory requirements and, for example, to reduce latency.

Let’s delve deeper into AWS Outposts vs. Azure Stack to better see how they compete with each other and your typical hyper-converged infrastructure (HCI) deployment.

[Figure: hybrid cloud storage use cases]

What is AWS Outposts?

AWS Outposts is Amazon’s acknowledgment that most enterprise class organizations prefer hybrid cloud to a public cloud-only model. Amazon generally has acted solely as a hyperscale public cloud provider, leaving its customers’ data center hardware needs for other vendors to handle. With AWS Outposts, however, Amazon is — for the first time — making its own appliances available for on-premises use.

AWS Outposts customers can run AWS on premises. They can also extend their AWS virtual private clouds into their on-premises environments, so a single virtual private cloud can contain both cloud and data center resources. That way, workloads with low-latency or geographical requirements can remain on premises while other workloads run in the Amazon cloud. Because Outposts is essentially an on-premises extension of the Amazon cloud, it also aims to ease the migration of workloads between the data center and the cloud.

What is Microsoft Azure Stack?

Although initially marketed as simply a way to host Azure services on premises, Azure Stack has evolved into a portfolio of products. The three products that make up the Azure Stack portfolio include Azure Stack Edge, Azure Stack Hub and Azure Stack HCI.

Azure Stack Edge is a cloud-managed appliance that enables you to run managed virtual machine (VM) and container workloads on premises. While this can also be done with Windows Server, the benefit to using Azure Stack Edge is workloads can be managed with a common tool set, whether they’re running on premises or in the cloud.

Azure Stack Hub is used for running cloud applications on premises. It’s mostly for situations in which data sovereignty is required or where connectivity isn’t available.

As its name implies, Azure Stack HCI is a version of Azure Stack that runs on HCI hardware.

AWS Outposts vs. Azure Stack vs. HCI

To appreciate how AWS Outposts competes with traditional HCI, consider common HCI use cases. HCI is often used as a virtualization platform. While AWS Outposts will presumably be able to host Elastic Compute Cloud virtual machine instances, the bigger news is that Amazon is preparing to release a VMware-specific version of Outposts in 2020. The VMware Cloud on AWS Outposts will allow a managed VMware software-defined data center to run on the Outposts infrastructure.

Organizations are also increasingly using HCI as a disaster recovery platform. While Amazon isn’t marketing Outposts as a DR tool, the fact that Outposts acts as a gateway between on-premises services and services running in the Amazon cloud means the platform will likely be well positioned as a DR enabler.

Many organizations have adopted hyper-converged systems as a platform for running VMs and containers. Azure Stack Edge may end up displacing some of those HCIs if an organization is already hosting VMs and containers in the Azure cloud. As for Azure Stack Hub, it seems unlikely that it will directly compete with HCI, except possibly in some specific branch office scenarios.

The member of the Azure Stack portfolio that’s most likely to compete with traditional hyper-convergence is Azure Stack HCI. It’s designed to run scalable VMs and provide those VMs with connectivity to Azure cloud services. These systems are being marketed for use in branch offices and with high-performance workloads.

Unlike first-generation HCI systems, Azure Stack HCI will provide scalability for both compute and storage. This could make it a viable replacement for traditional HCI platforms.

In summary, when it comes to AWS Outposts vs. Azure Stack or standard hyper-convergence, all three platforms have their merits, without any one being clearly superior to the others. If an organization is trying to choose between the three, then my advice would be to choose the platform that does the best job of meshing with the existing infrastructure and the organization’s operational requirements. If the organization already has a significant AWS or Azure footprint, then Outposts or Azure Stack would probably be a better fit, respectively. Otherwise, traditional HCI is probably going to entail less of a learning curve and may also end up being less expensive.


On-premises server monitoring tools meet business needs, budget

Although the market has shifted and more vendors are providing cloud-based monitoring, there are still a wide range of feature-rich server monitoring tools for organizations that must keep their workloads on site for security and compliance reasons.  

Here we examine open source and commercial on-premises server monitoring tools from eight vendors. Although these products broadly achieve the same IT goals, they differ in their approach, complexity of setup — including the ongoing aspects of maintenance and licensing — and cost. 

Cacti

Cacti is an open source network monitoring and graphing front-end application for RRDtool, an industry-standard open source data logging tool. RRDtool is the data collection portion of the product, while Cacti handles network graphing for the data that’s collected. Since both Cacti and RRDtool are open source, they may be practical options for organizations that are on a budget. Cacti support is community-driven.

Cacti can be ideal for organizations that already have RRDtool in place and want to expand on what it can display graphically. For organizations that don’t have RRDtool installed, or aren’t familiar with Linux commands or tools, both Cacti and RRDtool could be a bit of a challenge to install, as they don’t include a simple wizard or agents. This should be familiar territory for Linux administrators, but may require additional effort for Windows admins. Note that Cacti is a graphing product and isn’t really an alerting or remediation product. 

ManageEngine Applications Manager

The ManageEngine system is part of an extensive line of server monitoring tools that includes application-specific tools as well as cloud and mobile device management. The application monitoring framework enables organizations to purchase agents for various vendors’ products, such as Oracle and SAP, as well as custom application-specific tools. These server monitoring tools enable admins to perform cradle-to-grave monitoring, which can help them troubleshoot and resolve application server issues before they impact end-user performance. ManageEngine platform strengths include its licensing model and the large number of agents available. Although the monitoring license per device is all-inclusive for the interfaces or sensors needed per device, the agents are sold individually.

Thirty-day trials are available for many of the more than 100 agents. Licensing costs range from less than $1,000 for 25 monitors and one user to more than $7,000 for 250 monitors with one user and an additional $245 per user. Support costs are often rolled into the cost of the monitors. This can be ideal for organizations that want to make a smaller initial investment and grow over time.

Microsoft System Center Operations Manager

The product monitors servers, enterprise infrastructure and applications, such as Exchange and SQL Server, and works with both Windows and Linux clients. Microsoft System Center features include configuration management, orchestration, VM management and data protection. System Center’s support for third-party applications isn’t as expansive as its support for native Microsoft applications. System Center is based on core licensing to match the Server 2016 and later licensing models.

The base price for Microsoft System Center Operations Manager starts at $3,600, assuming two CPUs and 16 cores total and can be expanded with core pack licenses. With Microsoft licensing, the larger the environment in terms of CPU cores, the more a customer site can expect to pay. While Microsoft offers a 180-day trial of System Center, this version is designed for the larger Hyper-V environments. Support is dependent on the contract the organization selects.  

Nagios Core

Nagios Core is free open source software that provides metrics to monitor server and network performance. Nagios can help organizations provide increased server, services, process and application availability. While Nagios Core comes with a graphical front end, the scope of what it can monitor is somewhat limited. But admins can deploy additional community-provided front ends that offer more views and additional functionality. Nagios Core natively installs and operates on Linux systems and Unix variants.

For additional features and functionality, the commercial Nagios XI product offers true dashboards, reporting, GUI configuration and enhanced notifications. Pricing for this commercial version starts at less than $7,000 for 500 nodes, with an additional $1,500 per enterprise for reporting and capacity planning tools. In addition to agents for OSes, users can also add network monitoring for a single point of service. Free 60-day trials and community support are available for the products that work with the free Nagios Core download.

Opsview

Opsview system monitoring software includes on-premises agents as well as agents from all the major cloud vendors. While the free version provides 25 hosts to monitor, the product’s main benefit is that it can support both SMBs and the enterprise. Pricing for a comprehensive offering that includes 300 hosts, reporting, multiple collectors and network analyzer is less than $20,000 a year, depending on the agents selected.  

Enterprise packages are available via custom quote. The vendor offers both on-premises and cloud variations. The list of agents Opsview can monitor is one of the most expansive of any of the products, bridging cloud, application, web and infrastructure. Opsview also offers a dedicated mobile application. Support for most packages is 24/7 and includes customer portals and a knowledgebase.

Paessler PRTG Network Monitor

PRTG can monitor from the infrastructure to the application stack. The licensing model for PRTG Network Monitor follows a sensor model format over a node, core or host model. This means a traditional host might have more than 20 sensors monitoring anything from CPU to bandwidth. Services range from networking and bandwidth monitoring to other more application-specific services such as low Microsoft OneDrive or Dropbox drive space. A fully functional 30-day demo is available and pricing ranges from less than $6,000 for 2,500 sensors to less than $15,000 for an unlimited number of sensors. Support is email-based.

SolarWinds Server and Application Monitor

SolarWinds offers more than 1,000 monitoring templates for various applications and systems, such as Active Directory, as well as several virtualization platforms and cloud-based applications. It also provides dedicated virtualization, networking, databases and security monitoring products. In addition to standard performance metrics, SolarWinds provides application response templates to help admins with troubleshooting. A free 30-day trial is available. Pricing for 500 nodes is $73,995 and includes a year of maintenance.  

Zabbix

This free, open source, enterprise-scale monitoring product includes an impressive number of agents that an admin can download. Although most features aren’t point and click, the dashboards are similar to other open source platforms and are more than adequate. Given the free cost of entry and the sheer number of agents, this could be an ideal product for organizations that have the time and Linux experience to bring it online. Support is community-based and additional support can be purchased from a reseller.

The bottom line on server monitoring tools

The products examined here differ slightly in size, scope and licensing model. Outside of the open source products, many commercial server monitoring tools are licensed by node or agent type. It’s important that IT buyers understand all the possible options when getting quotes, as they can be difficult to understand.

Pricing varies widely, as do the features of the dashboards of the various server monitoring tools. Ensure the staff is comfortable with the dashboard and alerting functionality of each system as well as mobile ability and notifications. If an organization chooses an open source platform, keep in mind that the installation could require more effort if the staff isn’t Linux savvy.  

The dashboards for the open source monitors typically aren’t as graphical as the paid products, but that’s part of the tradeoff with open source. Many of the commercial products are cloud-ready or have that ability, so even if an organization doesn’t plan to monitor its servers in the cloud today, they can take advantage of this technology in the future. 


How to navigate a ransomware recovery process

If your defenses and backups fail despite your best efforts, your ransomware recovery effort can take one of several paths to restore normalcy to your organization.

Ransomware is bad enough. Don’t rush to bring systems and workloads back online and cause additional problems. The first item on your agenda is to take inventory of what still functions and what needs repairs. This has to be done quickly, but without mistakes. Management will want to know what needs to be done, but you can’t give a report until you have a full understanding. While you don’t need to break down every single server, you will need to have everything categorized. Think Active Directory, file servers, backups, networking infrastructure, email and communication, and production servers to start.

Take stock of the situation

The list of affected systems and VMs won’t be comprehensive. You have to start with the machines that are a priority, and in this case that does not mean production servers. If Active Directory is down, then it’s a safe bet most of your production servers — and the IT infrastructure — won’t be running correctly even if they weren’t directly affected.

To start the ransomware recovery effort, check your backups before anything else. Too many folks have deleted encrypted VMs only to find the malware also wiped out their backup systems, going from bad to worse. Mistakes happen when you rush.

A somewhat easy path of restoring servers does exist if your backups are intact, current and operational. The restoration process needs to be tested before you delete any VMs. Rather than removing affected machines, try relocating them to lower-tier storage, external storage or even local storage on a host. Your goal is to get the encrypted VMs out of the way to give yourself space to work, then try the restores and get the VMs running before you remove their encrypted counterpart.

It might be time to make difficult choices

If the attack corrupted your backup system or the ransomware recovery effort failed, then someone above your pay grade will have to make some decisions. You will have to have a few difficult conversations, partly because the responsibility for the backups — and their reliability — rested on you. It’s possible it’s not entirely your fault for different reasons, such as not getting proper funding, but that will have to be a conversation for a later time. At the moment, it’s time to make a decision: pay the ransom, rebuild the systems or file a report.

Reporting requires the involvement of senior management and the company legal team. If you work for a government entity or public company, then you might have very specific guidelines that you must follow for legal reasons. If you work for a private company, then you still have possible legal issues with your customers about what you can and cannot disclose. No matter what you say, it will not be taken well. You want to be honest with your customers, but you also need to be mindful and limit how much data you share publicly.

The other aspect of reporting involves the authorities. Your organization might not even have been the intended target if you were hit by an older ransomware variant. If that’s the case, it’s possible there might be a decryption tool. It’s a long shot, but something worth checking before you rebuild from scratch.

While distasteful, paying the ransom is also an option. You need to weigh how much it will cost to rebuild and recover against handing over the ransom. It’s not an easy call to make because a payment does not come with any guarantees.

Most companies that pay the ransom typically don’t disclose that they paid or that they were even attacked. I suspect most organizations get their data unlocked, otherwise the ransomware business model would collapse.

The challenge with rebuilding is the effort involved. Relatively few companies have people who fully understand how every aspect of their environment works. Many IT infrastructures are the combined result of in-house experts and outside consultants. People install systems and take that knowledge with them when they leave. Their replacements learn how to keep these systems online, but that is very different from installing or building them from scratch. Repairing Active Directory is a challenge, but rebuilding an Active Directory with thousands of users, groups and permissions from documentation — if any exists — is next to impossible unless you have a lot of time and expertise.

Recovering from a ransomware attack is not an easy task, because not every situation is identical. If your defenses and backup recovery fail, the reconstruction effort will not be easy or cheap. You will either have to pay the ransom or spend money in overtime and consultants to rebuild mission-critical systems. Chances are your customers will find out what is happening during this recovery process, so you’ll have to have a communication plan and a single point of contact for the sake of consistency.

Ransomware isn’t something just for the IT department to handle; the decisions and the road to recovery will involve several stakeholders and real costs. Plan ahead and map out your steps to avoid rushing into bad choices that can’t be reversed.


PowerShell tutorials capture attention of admins in 2019

As 2019 reaches its end, it’s time to look back at the tips and tutorials published this year that mattered the most to the Windows Server audience.

Microsoft has redoubled its efforts to make PowerShell the overarching management tool for workloads no matter where they reside. Interest in automation and PowerShell tutorials that explain how to streamline everyday tasks continue to resonate with readers. In this top-five compilation of the 2019 tips that picked up the most page views, nearly all the articles focus on PowerShell, from learning advanced text manipulation techniques to plumbing the features in the newer, open source version initially dubbed PowerShell Core. But the article that claimed the top spot indicates many administrators have their eyes on a relatively new way to manage resources in their organization.

5. Windows Compatibility module expands PowerShell Core reach

The first PowerShell Core release, version 6.0, arrived in January 2018, but it was a step back in many ways for administrators who had been used to the last Windows PowerShell version, 5.1. This new PowerShell version, developed to also run on the other major operating system platforms of Linux and macOS, lost a fair amount of functionality due to the switch from the Windows-based .NET Framework to the cross-platform .NET Core. The end result was a fair number of cmdlets administrators needed to do their jobs did not run on PowerShell Core.

With any project of this size, there will be growing pains. Administrators can continue to use Windows PowerShell, which Jeffrey Snover said will always be supported by Microsoft and should serve IT workers faithfully for many years to come. But to ease this transition, the PowerShell team released a Windows Compatibility module in late 2018 to close the functionality gap between the Windows and open source versions of PowerShell. This tip digs into the background of the module and how to use it on PowerShell Core to work with some previously incompatible cmdlets.
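
As a rough sketch of how the module is used in practice — assuming the module name WindowsCompatibility from the PowerShell Gallery and ServerManager as the example Windows PowerShell-only module — the workflow looks like this:

# Install the compatibility module from the PowerShell Gallery
Install-Module -Name WindowsCompatibility -Scope CurrentUser

# Load a Windows PowerShell-only module into the PowerShell Core session
Import-WinModule -Name ServerManager

# Cmdlets from the imported module can now run from PowerShell Core
Get-WindowsFeature -Name Web-Server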

4. How to use the PowerShell pending reboot module

Among the many perks of PowerShell is its extensibility. Administrative functions that were once out of reach — or laborious to accomplish — can magically appear once you download a module from the PowerShell Gallery. Install a module and you get several new cmdlets to make your administrative life just a bit easier.

For example, after Patch Tuesday rolls around and you’ve applied Microsoft’s updates to all your systems, the patching process generally is not complete — and the system not fully protected from the latest threats — until the machine reboots. A Microsoft field engineer developed a pending reboot module that detects if a Windows system has not rebooted. These insights can help you see which users might require a nudge to restart their machines to make sure your patching efforts don’t go for naught. This tip explains how to install and use the pending reboot cmdlet.
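
As a quick illustration — assuming the module is the PendingReboot module published in the PowerShell Gallery — installing it and checking a machine looks something like this:

# Install the module from the PowerShell Gallery
Install-Module -Name PendingReboot -Scope CurrentUser

# Check one or more computers for a pending reboot
Test-PendingReboot -ComputerName Server01 -Detailed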

3. How to configure SSL on IIS with PowerShell    

Among its many uses in the enterprise, Windows Server also functions as a web server. Microsoft shops can use the Internet Information Services role in Windows Server to serve up content and host web applications for use across the internet or in a company’s private intranet.

To keep threat actors from sniffing out traffic between your IIS web server and the clients, add HTTPS to encrypt data transmissions to and from the server. HTTPS works in tandem with Transport Layer Security (TLS), the more secure successor to Secure Sockets Layer (SSL). Most in IT still refer to the certificates that facilitate the encrypted transmissions between servers and clients as SSL certificates. This tip explains how to use PowerShell to deploy a self-signed TLS/SSL certificate to an IIS website, which can come in handy if you spin up a lot of websites in your organization.
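
As a condensed sketch of the approach — the site name and hostname here are hypothetical — the process boils down to creating a certificate, adding an HTTPS binding and attaching the certificate to it:

Import-Module WebAdministration

# Create a self-signed certificate in the local machine store (hypothetical hostname)
$cert = New-SelfSignedCertificate -DnsName "intranet.contoso.com" -CertStoreLocation "Cert:\LocalMachine\My"

# Add an HTTPS binding to the site and attach the certificate to it
New-WebBinding -Name "Default Web Site" -Protocol https -Port 443
$binding = Get-WebBinding -Name "Default Web Site" -Protocol https
$binding.AddSslCertificate($cert.Thumbprint, "My")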

2. Hone your PowerShell text manipulation skills

It might seem like a basic ability, but learning how to read text from and write text to files using PowerShell opens up more advanced avenues of automation. There are several cmdlets tailored for use with text files to perform several tasks, including importing text into a file that can then be manipulated for other uses. It’s helpful to know that while text might look the same, Windows PowerShell and PowerShell Core have different ways of dealing with encoding. This tip covers some of the finer details of working with text in PowerShell to broaden your automation horizons.
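
A small example of the kind of text handling the tip covers — the file paths are hypothetical — reads a log, filters it and writes the result with an explicit encoding, which is where Windows PowerShell and PowerShell Core behave differently:

# Read a log file into an array of strings, one element per line
$lines = Get-Content -Path "C:\Temp\app.log"

# Keep only the error lines and write them out with explicit UTF-8 encoding
$lines | Where-Object { $_ -match 'ERROR' } |
    Set-Content -Path "C:\Temp\errors.log" -Encoding UTF8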

1. Azure AD Premium P1 vs. P2: Which is right for you?

Most organizations require some way to manage their resources — from user accounts to physical devices, such as printers — and Active Directory has been the tool for the job for many years. Based on Windows Server, the venerable on-premises identity and access management platform handles the allocation and permissions process to ensure users get the right permissions to access what they need.

Microsoft unveiled its cloud-based successor to Active Directory, calling it Azure Active Directory, in 2013. While similar, the two products are not a straight swap. If you use Microsoft’s cloud, either for running VMs in Azure or using the collaboration apps in Office 365, then you’ll use Azure Active Directory to manage resources on those platforms. This tip digs into some of the permutations between the two higher-end editions of Azure Active Directory to help you decide which one might work best for your organization.


Using wsusscn2.cab to find missing Windows updates

Keeping your Windows Server and Windows desktop systems updated can be tricky, and finding missing patches in conventional ways might not be reliable.

There are a few reasons why important security patches might not get installed. They could be mistakenly declined in Windows Server Update Services or get overlooked in environments that lack an internet connection.

Microsoft provides a Windows Update offline scan file, also known as wsusscn2.cab, to help you check Windows systems for missing updates. The CAB file contains information about most patches for Windows and Microsoft applications distributed through Windows Update.

The challenge with the wsusscn2.cab file is its size. It weighs in at around 650 MB, and distributing it to all the servers to perform a scan can be tricky and time-consuming. This tutorial explains how to avoid those issues and run the scan on all of your servers in a secure and timely manner by using IIS for file transfer instead of SMB or PowerShell sessions.

Requirements for offline scanning

There are some simple requirements to use this tutorial:

  • a server or PC running Windows Server 2012 or newer or Windows 10;
  • a domain account with local administrator on the servers you want to scan; and
  • PowerShell remoting enabled on the servers you want to scan.

Step 1. Install IIS

First, we need a web server we can use to distribute the wsusscn2.cab file. There are several ways to copy the file, but they all have different drawbacks.

For example, we could distribute the wsusscn2.cab file with a regular file share, but that requires a double-hop. You could also copy the wsusscn2.cab file over a PowerShell session, but that causes a lot of overhead and is extremely slow for large files. An easier and more secure way to distribute the file is through HTTP and IIS.

Installing on Windows Server

Start PowerShell as admin and type the following to install IIS:

Install-WindowsFeature -name Web-Server -IncludeManagementTools

Installing on Windows 10

Start PowerShell as an admin and type the following to install IIS:

Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebServer

The IIS role should be installed. The default site will point to the C:\inetpub\wwwroot folder.

We can now proceed to download wsusscn2.cab from Microsoft.

Step 2. Download wsusscn2.cab

The link for this file can be tricky to find. You can either download it from Microsoft manually and save it to the wwwroot folder or run the following script as admin on the IIS server:

# Default Site path, change if necessary
$IISFolderPath = "C:\inetpub\wwwroot"

# Download wsusscn2.cab
Start-BitsTransfer -Source "http://go.microsoft.com/fwlink/?linkid=74689" -Destination "$IISFolderPath\wsusscn2.cab"

The script downloads the file to the wwwroot folder. We can verify the download by browsing to http://<servername>/wsusscn2.cab.

You also need to get the hash value of wsusscn2.cab to verify it. After saving it, run the following PowerShell command to check the file hash:

(Get-FileHash C:\inetpub\wwwroot\wsusscn2.cab).Hash

31997CD01B8790CA68A02F3A351F812A38639FA49FEC7346E28F7153A8ABBA05

Step 3. Run the check on a server

Next, you can use a PowerShell script to download the wsusscn2.cab file and scan for missing updates on a PC or server. The script runs on Windows Server 2008 or newer to avoid compatibility issues. To do this in a secure and effective manner over HTTP, we get the file hash of the downloaded wsusscn2.cab file and compare it with the file hash of the CAB file on the IIS server.

We can also use the file hash to see when Microsoft releases a new version of wsusscn2.cab.

Copy and save the following script as Get-MissingUpdates.ps1:

Param(
    [parameter(mandatory)]
    [string]$FileHash,

    [parameter(mandatory)]
    [string]$Wsusscn2Url
)


Function Get-Hash($Path){
    
    $Stream = New-Object System.IO.FileStream($Path,[System.IO.FileMode]::Open) 
    
    $StringBuilder = New-Object System.Text.StringBuilder 
    $HashCreate = [System.Security.Cryptography.HashAlgorithm]::Create("SHA256").ComputeHash($Stream)
    $HashCreate | Foreach {
        $StringBuilder.Append($_.ToString("x2")) | Out-Null
    }
    $Stream.Close() 
    $StringBuilder.ToString() 
}

$DataFolder = "$env:ProgramData\WSUS Offline Catalog"
$CabPath = "$DataFolder\wsusscn2.cab"

# Create download dir
mkdir $DataFolder -Force | Out-Null

# Check if cab exists
$CabExists = Test-Path $CabPath


# Compare hashes to determine whether a fresh download is needed
if($CabExists){
    Write-Verbose "Comparing hashes of wsusscn2.cab"

    $HashMismatch = $FileHash -ne (Get-Hash -Path $CabPath)

    if($HashMismatch){
        Write-Warning "Filehash of $CabPath did not match $($FileHash) - downloading"
        Remove-Item $CabPath -Force
    }
    Else{
        Write-Verbose "Hashes matched"
    }
}

# Download wsusscn2.cab if it doesn't exist or the hashes mismatch
if(!$CabExists -or $HashMismatch){
    Write-Verbose "Downloading wsusscn2.cab"
    # Works on Windows Server 2008 as well
    (New-Object System.Net.WebClient).DownloadFile($Wsusscn2Url, $CabPath)

    if($FileHash -ne (Get-Hash -Path $CabPath)){
        Throw "$CabPath did not match $($FileHash)"
    }

}

Write-Verbose "Checking digital signature of wsusscn2.cab"

$CertificateIssuer = "CN=Microsoft Code Signing PCA, O=Microsoft Corporation, L=Redmond, S=Washington, C=US"
$Signature = Get-AuthenticodeSignature -FilePath $CabPath
$SignatureOk = $Signature.SignerCertificate.Issuer -eq $CertificateIssuer -and $Signature.Status -eq "Valid"


If(!$SignatureOk){
    Throw "Signature of wsusscn2.cab is invalid!"
}


Write-Verbose "Creating Windows Update session"
$UpdateSession = New-Object -ComObject Microsoft.Update.Session
$UpdateServiceManager  = New-Object -ComObject Microsoft.Update.ServiceManager 

$UpdateService = $UpdateServiceManager.AddScanPackageService("Offline Sync Service", $CabPath, 1) 

Write-Verbose "Creating Windows Update Searcher"
$UpdateSearcher = $UpdateSession.CreateUpdateSearcher()  
$UpdateSearcher.ServerSelection = 3
$UpdateSearcher.ServiceID = $UpdateService.ServiceID.ToString()
 
Write-Verbose "Searching for missing updates"
$SearchResult = $UpdateSearcher.Search("IsInstalled=0")

$Updates = $SearchResult.Updates

$UpdateSummary = [PSCustomObject]@{

    ComputerName = $env:COMPUTERNAME    
    MissingUpdatesCount = $Updates.Count
    Vulnerabilities = $Updates | Foreach {
        $_.CveIDs
    }
    MissingUpdates = $Updates | Select Title, MsrcSeverity, @{Name="KBArticleIDs";Expression={$_.KBArticleIDs}}
}

Return $UpdateSummary

Run the script on one of the servers or computers to check for missing updates. To do this, copy the script to the machine and run the script with the URL to the wsusscn2.cab file on the IIS server and the hash value from step two:

PS51> .\Get-MissingUpdates.ps1 -Wsusscn2Url "http://<servername>/wsusscn2.cab" -FileHash 31997CD01B8790CA68A02F3A351F812A38639FA49FEC7346E28F7153A8ABBA05

If there are missing updates, you should see output similar to the following:

ComputerName   MissingUpdatesCount Vulnerabilities                  MissingUpdates
------------   ------------------- ---------------                  --------------
UNSECURESERVER                  14 {CVE-2006-4685, CVE-2006-4686,   {@{Title=MSXML 6.0 RTM Security Updat...
                                   CVE-2019-1079, CVE-2019-1079...}

If the machine is not missing updates, then you should see this type of output:

ComputerName MissingUpdatesCount Vulnerabilities MissingUpdates
------------ ------------------- --------------- --------------
SECURESERVER                   0

The script gives a summary of the number of missing updates, what those updates are and the vulnerabilities they patch.

This process is a great deal faster than searching for missing updates online. But this manual method is not efficient when checking a fleet of servers, so let’s learn how to run the script on all systems and collect the output.

Step 4. Run the scanning script on multiple servers at once

The easiest way to collect missing updates from all servers with PowerShell is with a PowerShell job. The PowerShell jobs run in parallel on all computers, and you can fetch the results.

On a PC or server, save the file from the previous step to the C drive — or another directory of your choice — and run the following as a user with admin permissions on your systems:

# The servers you want to collect missing updates from
$Computers = @(
        'server1',
        'server2',
        'server3'
)

# These are the arguments that will be sent to the remote servers
$RemoteArgs = @(
    # File hash from step 2
    "31997CD01B8790CA68A02F3A351F812A38639FA49FEC7346E28F7153A8ABBA05",
    "http://$env:COMPUTERNAME/wsusscn2.cab"
)

$Params = @{
    ComputerName = $Computers
    ArgumentList = $RemoteArgs
    AsJob        = $True
    # Filepath to the script on the server/computer you are running this command on
    FilePath = "C:\Scripts\Get-MissingUpdates.ps1"
    # Maximum number of active jobs
    ThrottleLimit = 20
}

$Job = Invoke-Command @Params

# Wait for all jobs to finish
$Job | Wait-Job

# Collect Results from the jobs
$Results = $Job | Receive-Job

# Show results
$Results

This runs the Get-MissingUpdates.ps1 script on all servers in the $Computers variable in parallel to save time and make it easier to collect the results.

You should run these PowerShell jobs regularly to catch servers with a malfunctioning Windows Update and to be sure important updates get installed.
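
One way to run the check regularly is a scheduled job on the management machine. The following is a minimal sketch using the PSScheduledJob module that ships with Windows PowerShell; the wrapper script path is hypothetical and would contain the collection code from this step:

# Run the collection script every morning at 6:00 (hypothetical path)
$Trigger = New-JobTrigger -Daily -At "6:00 AM"
Register-ScheduledJob -Name "MissingUpdateScan" -Trigger $Trigger -ScriptBlock {
    & "C:\Scripts\Collect-MissingUpdates.ps1"
}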


Are containers on Windows the right choice for you?

It’s nearly the end of the road for Windows Server 2008/2008 R2. Some of the obvious migration choices are a newer version of Windows Server or moving the workload into Azure. But does a move to containers on Windows make sense?

After Jan. 14, 2020, Microsoft ends extended support for the Windows Server 2008 and 2008 R2 OSes, which also means no more security updates unless one enrolls in the Extended Security Update program. While Microsoft prefers that its customers move to the Azure cloud platform, another choice is to use containers on Windows.

Understand the two different virtualization technologies

If you are thinking about containerizing Windows Server 2008/2008 R2 workloads, then you need to consider the ways containers differ from a VM. The most basic difference is a container is much lighter than a VM. Whereas each VM has its own OS, containers share a base OS image. The container generally includes application binaries and anything else necessary to run the containerized application.

Containers share a common kernel, which has advantages and disadvantages. One advantage is containers can be extremely small in size. It is quite common for a container to be less than 100 MB, which enables them to be brought online very quickly. The low overhead of containers makes it possible to run far more containers than VMs on the host server.

That shared kernel cuts both ways, however. If the kernel fails, then all the containers that depend on it will also fail. Similarly, a poorly written application can destabilize the kernel, leading to problems with other containers on the system.

VMs vs. Docker

As a Windows administrator considering containerizing legacy Windows Server workloads, you need to consider the fundamental difference between VMs and containers. While containers do have their place, they are a poor choice for applications with high security requirements due to the shared kernel or for applications with a history of sporadic stability issues.

Another major consideration with containers is storage. Early on, containers were used almost exclusively for stateless workloads because containers could not store data persistently. Unlike a VM, shutting down a container deletes all data within the container.

Container technology has evolved to support persistent storage through the use of data volumes. Even so, it can be difficult to work with data volumes. Applications that have complex storage requirements usually aren’t a good fit for containerization. For example, database applications tend to be poor candidates for containerization due to their complex storage configurations.

If you are used to managing physical or virtual Windows Server machines, you might think of setting up persistent storage as a one-time, migration-specific task. But containers are designed to be completely portable: a containerized application can move from a development and test environment to a production server or to a cloud host without the need to repackage the application. Setting up complex storage dependencies can undermine that portability, so an organization needs to consider whether a newly containerized application will ever need to move to another location.
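
To make the portability tradeoff concrete, here is a minimal sketch of how persistent storage typically works with Windows containers under Docker; the volume and image names are only examples:

# Create a named volume that lives independently of any container
docker volume create app-data

# Mount the volume into a Windows container; files written to C:\data survive container deletion
docker run -d -v app-data:C:\data mcr.microsoft.com/windows/servercore:ltsc2019 powershell -Command "Start-Sleep -Seconds 3600"

The volume itself still lives on a specific host, which is exactly the kind of dependency that has to be planned for if the container later moves.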

What applications are suited for containers?

As part of the decision-making process related to using containers on Windows, it is worth considering what types of applications are best suited for this type of deployment. Almost any application can be containerized, but the ideal candidate is a stateless application with varying scalability requirements. For example, a front-end web application is often an excellent choice for a containerized deployment for a few reasons. First, web applications tend to be stateless. Data is usually saved on a back-end database that is separate from the front-end application. Second, container platforms work well to meet an application’s scalability requirements. If a web application sees a usage spike, additional containers can instantly spin up to handle the demand. When the spike ebbs, it’s just a matter of deleting the containers.

Before migrating any production workloads to containers, the IT staff needs to develop the necessary expertise to deploy and manage containers. While container management is not usually overly difficult, it is completely different from VM management. Windows Server 2019 supports the use of Hyper-V containers, but you cannot use Hyper-V Manager to create, delete and migrate containers in the same way that you would perform these actions on a VM.

Containers are a product of the open source world and are therefore managed from the command line using Linux-style commands that are likely to be completely foreign to many Windows administrators. There are GUI-based container management tools, such as Kubernetes, but even these tools require some time and effort to understand. As such, having the proper training is essential to a successful container deployment.

Despite their growing popularity, containers are not an ideal fit for every workload. While some Windows Server workloads are good candidates for containerization, other workloads are better suited as VMs. As a general rule, organizations should avoid containerizing workloads that have complex, persistent storage requirements or require strict kernel isolation.


For VMware, DSC provides ESXi host and resource management

PowerShell Desired State Configuration has been a favorite among Windows infrastructure engineers for years, and the advent of the VMware DSC module means users who already use DSC to manage Windows servers can use it to manage VMware, too. As VMware has continued to develop the module, it has increased the numbers of vSphere components the tool can manage, including VMware Update Manager.

DSC has been the configuration management tool of choice for Windows since it was released. No other tool offers such a wide array of capabilities to manage a Windows OS in code instead of through a GUI.

VMware also uses PowerShell technology to manage vSphere. The vendor officially states that PowerCLI, its PowerShell module, is the best automation tool it offers. So, it only makes sense that VMware would eventually incorporate DSC so that its existing PowerShell customers can manage their assets in code.

Why use DSC?

Managing a machine through configuration as code is not new, especially in the world of DevOps. You can write a server’s desired state in code, which ensures you can quickly resolve any drift in configuration by applying that configuration frequently.

In vSphere, ESXi hosts, in particular, are the prime candidates for this type of management. An ESXi host’s configurations do not change often, and when they do happen to change, admins must personally make that change. This means any change in the DSC configuration will apply to the hosts.

You can use this tool to manage a number of vSphere components, such as VMware Update Manager and the vSphere Standard Switch.

How the LCM works

In DSC, Local Configuration Manager (LCM) makes up the brains of a node. It takes in the configuration file and then parses and applies the change locally.

ESXi and vCenter do not have LCM, so in the context of vSphere, you must use an LCM proxy, which runs as a Windows machine with PowerShell v5.1 and PowerCLI 10.1.1.

Installing the module

Installing the module is simple, as the DSC module is part of PowerShell Gallery. It only takes a single cmdlet to install the module on your LCM proxy:

C:\> Install-Module -Name VMware.vSphereDSC

Updating the module when VMware releases new versions is also a simple task with the standard Update-Module cmdlet:

C:\> Update-Module vmware.vspheredsc

Resources

DSC ties a resource to a particular area of a system it can manage. The DSC module vmware.vspheredsc, for example, can manage various aspects of vSphere, such as the following:

C:\Users\dan> Get-DscResource -Module vmware.vspheredsc | Select Name

Name
----
Cluster
Datacenter
DatacenterFolder
DrsCluster
Folder
HACluster
PowerCLISettings
vCenterSettings
vCenterStatistics
VMHostAccount
VMHostDnsSettings
VMHostNtpSettings
VMHostSatpClaimRule
VMHostService
VMHostSettings
VMHostSyslog
VMHostTpsSettings
VMHostVss
VMHostVssBridge
VMHostVssSecurity
VMHostVssShaping
VMHostVssTeaming

Many of these resources are associated with ESXi hosts. You can manage settings such as accounts, Network Time Protocol and services through DSC. For clusters, you can manage settings such as HAEnabled, the Distributed Resource Scheduler (DRS) automation level and DRS distribution. You can view the syntax and manageable properties of a resource with the Get-DscResource cmdlet:

C:\> Get-DscResource -Name Cluster -Module vmware.vspheredsc -Syntax
Cluster [String] #ResourceName
{
[DependsOn = [String[]]]
[PsDscRunAsCredential = [PSCredential]]
Server = [String]
Credential = [PSCredential]
Name = [String]
Location = [String]
DatacenterName = [String]
DatacenterLocation = [String]
Ensure = [String]
[HAEnabled = [Boolean]]
[HAAdmissionControlEnabled = [Boolean]]
[HAFailoverLevel = [Int32]]
[HAIsolationResponse = [String]]
[HARestartPriority = [String]]
[DrsEnabled = [Boolean]]
[DrsAutomationLevel = [String]]
[DrsMigrationThreshold = [Int32]]
[DrsDistribution = [Int32]]
[MemoryLoadBalancing = [Int32]]
[CPUOverCommitment = [Int32]]
}
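
Putting a resource to work follows the usual DSC pattern: compile a configuration into a MOF file, then apply it from the LCM proxy machine. The sketch below uses the VMHostNtpSettings resource; the host and vCenter names are hypothetical, and allowing plaintext credentials is only appropriate for a lab:

Configuration VMHostNtp {
    param([PSCredential]$Credential)

    Import-DscResource -ModuleName VMware.vSphereDSC

    Node "localhost" {
        # Point a hypothetical ESXi host at NTP servers via a hypothetical vCenter
        VMHostNtpSettings "Ntp" {
            Name = "esxi01.lab.local"
            Server = "vcenter.lab.local"
            Credential = $Credential
            NtpServer = @("0.pool.ntp.org", "1.pool.ntp.org")
            NtpServicePolicy = "automatic"
        }
    }
}

# Lab-only: allow the credential to be embedded in the compiled MOF
$cd = @{ AllNodes = @(@{ NodeName = "localhost"; PSDscAllowPlainTextPassword = $true }) }

VMHostNtp -Credential (Get-Credential) -ConfigurationData $cd -OutputPath C:\DSC
Start-DscConfiguration -Path C:\DSC -Wait -Verbose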

With the capabilities of DSC now available to VMware admins, as well as Windows admins, they can control a variety of server variables through code and make vSphere and vCenter automation easy and accessible. They can apply broad changes across an entire infrastructure of hosts and ensure consistent configuration.


How to install and test Windows Server 2019 IIS

In this video, I want to show you how to install Internet Information Services, or IIS, and prepare it for use.

I’m logged into a domain-joined Windows Server 2019 machine and I’ve got the Server Manager open. To install IIS, click on Manage and choose the Add Roles and Features option. This launches the Add Roles and Features wizard. Click Next on the welcome screen and choose role-based or feature-based installation for the installation type and click Next.

Make sure that My Server is selected and click Next. I’m prompted to choose the roles that I want to deploy. We have an option for Web Server (IIS). That’s the option I’m going to select. When I do that, I’m prompted to install some dependency features, so I’m going to click on Add Features and I’ll click Next.

I’m taken to the features screen. All the dependency features that I need are already being installed, so I don’t need to select anything else. I’ll click Next, Next again, Next again on the Role Services — although if you do need to install any additional role services to service the IIS role, this is where you would do it. You can always enable these features later on, so I’ll go ahead and click Next.

I’m taken to the Confirmation screen and I can review my configuration selections. Everything looks good here, so I’ll click install and IIS is being installed.

Testing Windows Server 2019 IIS

The next thing that I want to do is test IIS to make sure that it’s functional. I’m going to go ahead and close this out and then go to local server. I’m going to go to IE Enhanced Security Configuration. I’m temporarily going to turn this off just so that I can test IIS. I’ll click OK and I’ll close Server Manager.

The next thing that I want to do is find this machine’s IP address, so I’m going to right-click on the Start button and go to Run and type CMD to open a command prompt window, and then from there, I’m going to type ipconfig.

Here I have the server’s IP address, so now I can open up an Internet Explorer window and enter this IP address and Internet Information Services should respond. I’ve entered the IP address, then I press enter and I’m taken to the Internet Information Services screen. IIS is working at this point.

I’ll go ahead and close this out. If this were a real-world deployment, one of the next things that you would probably want to do is begin uploading some of the content that you’re going to use on your website so that you can begin testing it on this server.

I’ll go ahead and open up File Explorer and I’ll go to This PC, the C drive, the inetpub folder and the wwwroot subfolder. This is where you would copy all of the files for your website. You can configure IIS to use a different folder, but this is the one used by default for IIS content. You can see the files right here that make up the page that you saw a moment ago.

How to work with the Windows Server 2019 IIS bindings

Let’s take a look at a couple of the configuration options for IIS. I’m going to go ahead and open up Server Manager and what I’m going to do now is click on Tools, and then I’m going to choose the Internet Information Services (IIS) Manager. The main thing that I wanted to show you within the IIS Manager is the bindings section. The bindings allow traffic to be directed to a specific website, so you can see that, right now, we’re looking at the start page and, right here, is a listing for my IIS server.

I’m going to go ahead and expand this out and I’m going to expand the site’s container and, here, you can see the default website. This is the site that I’ve shown you just a moment ago, and then if we look over here on the Actions menu, you can see that we have a link for Bindings. When I open up the Bindings option, you can see by default we’re binding all HTTP traffic to port 80 on all IP addresses for the server.

We can edit [the site bindings] if I select [the site] and click on it. You can see that we can select a specific IP address. If the server had multiple IP addresses associated with it, we could link a different IP address to each site. We could also change the port that’s associated with a particular website. For example, if I wanted to bind this particular website to port 8080, I could do that by changing the port number. Generally, you want HTTP traffic to flow on port 80. The other thing that you can do here is to assign a hostname to the site, for example www.contoso.com or something to that effect.

The other thing that I want to show you in here is how to associate HTTPS traffic with a site. Typically, you’re going to have to have a certificate to make that happen, but assuming that that’s already in place, you click on Add and then you would change the type to HTTPS and then you can choose an IP address; you can enter a hostname; and then you would select your SSL certificate for the site.

You’ll notice that the port number is set to 443, which is the default port that’s normally used for HTTPS traffic. So, that’s how you install IIS and how you configure the bindings for a website.
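
The same binding changes can be scripted. As a brief sketch with the WebAdministration module — the hostname is hypothetical — adding and reviewing a binding looks like this:

Import-Module WebAdministration

# Add an HTTP binding on port 8080 with a host header (hypothetical hostname)
New-WebBinding -Name "Default Web Site" -Protocol http -Port 8080 -HostHeader "www.contoso.com"

# List the site's bindings to confirm the change
Get-WebBinding -Name "Default Web Site"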


December Patch Tuesday resolves Windows zero-day

Administrators got an early holiday present with a fairly light patching workload on December Patch Tuesday, but they will have one Windows zero-day to wrap up as soon as possible.

Microsoft corrected 36 vulnerabilities on December Patch Tuesday in Microsoft Windows, Internet Explorer, Microsoft Office and Microsoft Office Services and Web Apps, SQL Server, Visual Studio and Skype for Business.

The Win32k elevation of privilege vulnerability (CVE-2019-1458) is rated as important and is being actively exploited in the wild. This Windows zero-day, discovered by Kaspersky Lab researchers, affects most supported versions of Microsoft’s operating system on both the client and server side. An attacker who can authenticate to the system could run malicious code in kernel mode to take control of it, and could then perform a range of tasks, including creating new accounts with full user rights and installing programs.

Administrators who base their patching priority on a combination of vendor severity and the Common Vulnerability Scoring System (CVSS) score might miss these types of vulnerabilities if they don’t account for additional factors. CVE-2019-1458 has a CVSS score of 7.8.

“If you’re not patching vulnerabilities rated important and above with a CVSS score lower than 8.0, then you could miss things being actively exploited,” said Chris Goettl, director of product management and security at Ivanti, a security and IT management vendor based in South Jordan, Utah.

Multiple zero-day bugs this year met the same criteria as this most recent Windows zero-day, so companies need to make sure they examine additional metadata with the vulnerabilities as they formulate their patching prioritization, Goettl said.
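As a concrete illustration, prioritization logic that catches a case like CVE-2019-1458 has to treat active exploitation as its own trigger instead of relying on the CVSS score alone. The filter below is a minimal sketch: $vulns and its CVE, Severity, CvssScore and ActivelyExploited fields are hypothetical stand-ins for whatever your scanner or vulnerability feed provides.

    # Flag anything with a high CVSS score -- or anything rated
    # important or above that is already being exploited.
    $priority = $vulns | Where-Object {
        $_.CvssScore -ge 8.0 -or
        ($_.Severity -in 'Important', 'Critical' -and $_.ActivelyExploited)
    }
    $priority | Sort-Object CvssScore -Descending |
        Format-Table CVE, Severity, CvssScore, ActivelyExploited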

Microsoft closes multiple Git-Visual Studio flaws

Microsoft resolved several security issues related to Git functionality and Microsoft Visual Studio. The company corrected six bugs (CVE-2019-1349, CVE-2019-1350, CVE-2019-1351, CVE-2019-1352, CVE-2019-1354 and CVE-2019-1387), most of which are remote code execution flaws rated critical. Companies that use Visual Studio for software development and connect to Git repositories will want to apply the December Patch Tuesday updates in short order.

Goettl said an enterprising attacker could do some investigative work to gather email addresses for developers in an organization, then construct a spear-phishing campaign to direct developers to a malicious repository that appears legitimate. Then the attacker could gain administrative rights to modify code in the organization’s development environment. While this scenario is theoretical, Goettl said, it’s not that far-fetched.

“Vendors are becoming a significant target as a way to attack many companies,” he said.

Goettl cited a recent incident in which about 400 dentist offices were hit with ransomware through a vendor that handled data backups for the offices. Threat actors have learned that hitting multiple targets at once is a more effective and lucrative option than the piecemeal approach used in the early days of ransomware, he said.

“If I’m an attacker and I find a vertical that I want to go after, such as a vendor for a bunch of health care providers, and I get as much intel as I can about them, their developers and their development platform and any information about their repositories, then I could put together a valid spear-phishing attack,” Goettl said. “And if I can get into their code base, then I can construct an attack that hits all of their customers and makes for a more painful and more profitable ransomware scenario.”

Microsoft issues more servicing stack updates

Microsoft issued an advisory (ADV990001) related to servicing stack updates for Windows 7, Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012. The first three of those operating systems reach the end of extended support on the next Patch Tuesday, Jan. 14, 2020. Companies that have not migrated off these legacy systems will need to sign up for extended security updates (ESU) to keep receiving security fixes after the end-of-life (EOL) date passes.

“My guess is that these servicing stack updates for these older platforms are preparing for that switchover so the systems can check and get continued updates if you’ve got that [ESU] key,” Goettl said.
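One practical follow-up is to confirm the servicing stack update actually landed before the cutover. Here is a minimal sketch, where $ssuId is a placeholder for the KB number that ADV990001 lists for your operating system:

    # Substitute the servicing stack update KB for your OS here.
    $ssuId = 'KB0000000'  # placeholder, not a real KB number
    $ssu = Get-HotFix -Id $ssuId -ErrorAction SilentlyContinue
    if ($ssu) {
        "Servicing stack update installed on $($ssu.InstalledOn)"
    } else {
        'Servicing stack update not found; install it before the ESU switchover.'
    }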

A recent survey from Ivanti indicates many organizations have lagged in their Windows 7 migration efforts for several reasons, such as lack of time and lack of application support. In the survey of more than 500 IT professionals published in October, 39% of the respondents indicated they would not have completed migrations off of Windows 7 before the EOL date.

Microsoft makes unusual move with bug for unsupported OS 

In an unusual twist, Microsoft released information for a Remote Desktop Protocol information disclosure vulnerability (CVE-2019-1489) rated important for Windows XP — an unsupported Windows operating system — but did not provide a patch.

“This one was kind of odd,” said Goettl, who noted Microsoft gave the CVE an exploitability assessment of 0, which typically means it is an actively exploited vulnerability, but Microsoft modified the designation to read “0 – unknown.”

“Microsoft took the time to create a CVE advisory for Windows XP. We can assume there was a reason to trigger that activity,” Goettl said. “There is no update available, so people really need to get off XP unless they have an absolute necessity to keep it around.”

An attacker could exploit this flaw by connecting remotely to an XP system and running a specially crafted program. The Windows XP operating system went out of mainstream support in April 2009 and left extended support in April 2014.

Other security updates of note for December Patch Tuesday include:

  • A fix for a spoofing flaw (CVE-2019-1490) rated important for Skype for Business Server 2019, cumulative update 2. In the attack scenario, a user would have to click on a malicious link to a server that has been exploited. The threat actor could then launch cross-site scripting attacks on affected systems and run code in the security context of the exploited user. Microsoft’s update closes the loophole by properly sanitizing web requests.
  • A patch for a Win32k graphics remote code execution vulnerability (CVE-2019-1468) rated critical for supported Windows client and server operating systems related to improper handling of specially crafted embedded fonts. An attacker who exploits this bug — by getting a user to click on a link to a malicious site or open a specially crafted document — can take control of a system and run tasks based on the privilege level of the affected user. The update corrects how the Windows font library handles embedded fonts.
  • A fix for a Hyper-V remote code execution vulnerability (CVE-2019-1471) rated critical on Windows 10 and Windows Server 2019 that could allow an attacker to run a malicious application on a guest operating system to force the Hyper-V host to run arbitrary code. The update corrects the user input validation process.
  • A correction for a cross-site scripting vulnerability (CVE-2019-1332) rated important in the Microsoft SQL Server Reporting Services (SSRS) feature. To trigger the exploit, an authenticated user would need to click on a malicious link to an affected SSRS server. The attacker could then perform a range of tasks from deleting content to running malicious code. The patch corrects SSRS URL sanitization.
  • A patch for an information disclosure vulnerability (CVE-2019-1400) rated important in Microsoft Access related to a failure to handle objects in memory properly. The attacker would need to be authenticated on the system to run a malicious application to gather information on the user’s system.
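Administrators who want to pull the full December list programmatically can use Microsoft’s Security Update Guide CVRF API. The sketch below comes with caveats: the response shape is an assumption based on the published CVRF schema, and older versions of the API required a registered api-key header.

    # Pull the December 2019 security release from the CVRF API.
    $uri = 'https://api.msrc.microsoft.com/cvrf/v2.0/cvrf/2019-Dec'
    $doc = Invoke-RestMethod -Uri $uri -Headers @{ Accept = 'application/json' }

    # List each CVE ID with its title.
    $doc.Vulnerability | ForEach-Object { '{0}  {1}' -f $_.CVE, $_.Title.Value }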

Adobe and Google also release patches

Google updated its Chrome web browser on Tuesday to version 79, resolving 51 vulnerabilities.

Adobe released updates for Adobe Acrobat Reader, Flash Player, Photoshop, Brackets and ColdFusion. Administrators will want to patch Acrobat Reader to close 21 vulnerabilities, 14 of which are rated critical. The company released an update for Flash Player but not for security reasons.

“Adobe is really winding down their focus on Adobe Flash,” Goettl said. “I think it’s safe to say that rather than it just no longer being vulnerable, Adobe is putting so little effort into it that it’s not getting attention anymore. People should be focused on getting Flash Player out of their environments.”
