Category Archives: Expert advice on Windows based systems and hardware

Microsoft patches two Windows zero-days in July Patch Tuesday

The July 2019 Patch Tuesday release included fixes for 77 vulnerabilities, two of which were Windows zero-days that were actively exploited in the wild.

The two Windows zero-days are both local escalation-of-privilege flaws that cannot be used alone to perform an attack. One zero-day, CVE-2019-0880, is a flaw in how splwow64.exe handles certain calls. The issue affects Windows 8.1, Windows 10 and Windows Server 2012, 2016 and 2019.

“This vulnerability by itself does not allow arbitrary code execution; however, it could allow arbitrary code to be run if the attacker uses it in combination with another vulnerability that is capable of leveraging the elevated privileges when code execution is attempted,” according to Microsoft.

The other Windows zero-day the vendor patched was CVE-2019-1132, a flaw in which the Win32k component improperly handles objects in memory. This issue affects Windows 7 and Windows Server 2008.

“To exploit this vulnerability, an attacker would first have to log on to the system,” Microsoft noted. “An attacker could then run a specially crafted application that could exploit the vulnerability and take control of an affected system.”

This zero-day was reported to Microsoft by ESET. Anton Cherepanov, senior malware researcher for ESET, detailed a highly targeted attack in Eastern Europe and recommended upgrading systems as the best remediation against attacks.

“The exploit only works against older versions of Windows, because since Windows 8 a user process is not allowed to map the NULL page. Microsoft back-ported this mitigation to Windows 7 for x64-based systems,” Cherepanov wrote in a blog post. “People who still use Windows 7 for 32-bit systems Service Pack 1 should consider updating to newer operating systems, since extended support of Windows 7 Service Pack 1 ends on January 14th, 2020. Which means that Windows 7 users won’t receive critical security updates. Thus, vulnerabilities like this one will stay unpatched forever.”

Other patches

Beyond the two Windows zero-days patched this month, there were six vulnerabilities patched that had been publicly disclosed, but no attacks were seen in the wild. The disclosures could potentially aid attackers in exploiting the issues faster, so enterprises should prioritize the following:

  • CVE-2018-15664, a Docker flaw in the Azure Kubernetes Service;
  • CVE-2019-0962, an Azure Automation escalation-of-privilege flaw;
  • CVE-2019-0865, a denial-of-service flaw in SymCrypt;
  • CVE-2019-0887, a remote code execution (RCE) flaw in Remote Desktop Services;
  • CVE-2019-1068, an RCE flaw in Microsoft SQL Server; and
  • CVE-2019-1129, a Windows escalation-of-privilege flaw.

The Patch Tuesday release also included 15 vulnerabilities rated critical by Microsoft. Some standout patches in that group included CVE-2019-0785, a DHCP Server RCE issue, and four RCE issues affecting Microsoft browsers, which Trend Micro labeled as noteworthy — CVE-2019-1004, CVE-2019-1063, CVE-2019-1104 and CVE-2019-1107.


How to deal with the on-premises vs. cloud challenge

For some administrators, the cloud is not a novelty. It’s critical to their organization. Then, there’s you, the lone on-premises holdout.

With all the hype about cloud and Microsoft’s strong push to get IT to use Azure for services and workloads, it might seem like you are the only one in favor of remaining in the data center in the great on-premises vs. cloud debate. The truth is the cloud isn’t meant for everything. While it’s difficult to find a workload not supported by the cloud, that doesn’t mean everything needs to move there.

Few people like change, and a move to the cloud is a big adjustment. You can’t stop your primary vendors from switching their allegiance to the cloud, so you will need to be flexible to face this new reality. Take a look around at your options as more vendors narrow their focus away from the data center and on-premises management.

Is the cloud a good fit for your organization?

The question is: Should it be done? All too often, it’s a matter of money. For example, it’s possible to take a large-capacity file server in the hundreds of terabytes and place it in Azure. Microsoft’s cloud can easily support this workload, but can your wallet?

Once you get over the sticker shock, think about it. If you’re storing frequently used data, it might make business sense to put that file server in Azure. However, if this is a traditional file server with mostly stale data, then is it really worth the price tag as opposed to using on-premises hardware?

When you run the numbers on what it takes to put a file server in Azure, the costs can add up.

Part of the on-premises vs. cloud dilemma is that you have to weigh the financial costs as well as the tangible benefits and drawbacks. The people factor also enters into the calculation of what makes sense as an operational expense rather than a capital expense. Too often, admins find themselves in a situation where management sees one side of this formula and wants to make the cloud leap, while the admins must look at the reality and explain both the pros and the cons, the latter of which no one wants to hear.

The cloud question also goes deeper than the Capex vs. Opex argument for the admins. With so much focus on the cloud, what happens to those environments that simply don’t or can’t move? It’s not only a question of what this means today, but also what’s in store for them tomorrow.

As vendors move on, the walls close in

With the focus for most software vendors on cloud and cloud-related technology, the move away from the data center should be a warning sign for admins who can't move to the cloud. The applications and tools you use will change to focus on the organizations working in the cloud, with less development of features that benefit the on-premises data center.

One of the most critical aspects of this shift will be your monitoring tools. As cloud gains prominence, it will get harder to find tools that will continue to support local Windows Server installations over cloud-based ones. We already see this trend with log aggregation tools that used to be available as on-site installs that are now almost all SaaS-based offerings. This is just the start.

If a tool moves from on premises to the cloud but retains the ability to monitor data center resources, that is an important distinction to remember. That means you might have a workable option to keep production workloads on the ground and work with the cloud as needed or as your tools make that transition.

As time goes on, an evaluation process might be in order. If your familiar tools are moving to the cloud without support for on-premises workloads, the options might be limited. Should you pick up new tools and then invest the time to install and train the staff how to use them? It can be done, but do you really want to?

While not ideal, another viable option is to take no action; the install you have works, and as long as you don’t upgrade, everything will be fine. The problem with remaining static is getting left behind. The base OSes will change, and the applications will get updated. But, if your tools can no longer monitor them, what good are they? You also introduce a significant security risk when you don’t update software. Staying put isn’t a good long-term strategy.

With the cloud migration will come other choices

The same challenges you face with your tools also apply to your traditional on-premises applications. Longtime stalwarts, such as Exchange Server, still offer a local installation, but it’s clear that Microsoft’s focus for messaging and collaboration is its Office 365 suite.

The harsh reality is more software vendors will continue on the cloud path, which they see as the new profit centers. Offerings for on-premises applications will continue to dwindle. However, there is some hope. As the larger vendors move to the cloud, it opens up an opportunity in the market for third-party tools and applications that might not have been on your radar until now. These products might not be as feature-rich as an offering from the larger vendors, but they might tick most of the checkboxes for your requirements.


How to manage Windows with Puppet

IT pros have long aligned themselves with either Linux or Windows, but it has grown increasingly common for organizations to seek the best of both worlds.

For traditional Windows-only shops, the thought of managing Windows systems with a server-side tool made for Linux may be unappealing, but Puppet has increased Windows Server support over the years and offers capabilities that System Center Configuration Manager and Desired State Configuration do not.

Use existing Puppet infrastructure

Many organizations use Puppet to manage Linux systems and SCCM to manage Windows Servers. SCCM works well for managing workstations, but admins could manage Windows more easily with Puppet code. For example, admins can easily audit a system configuration by looking at code manifests.

Admins manage Windows with Puppet agents installed on Puppet nodes. They use modules and manifests to deploy node configurations. If admins manage both Linux and Windows systems with Puppet, it provides a one-stop shop for all IT operations.
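
As a sketch of what onboarding a Windows node can look like, the Puppet agent MSI supports unattended installation from PowerShell. The download URL and master hostname below are placeholders, not values from this article:

# Download and silently install the Puppet agent MSI
# (the URL and PUPPET_MASTER_SERVER value are placeholders; substitute your own)
$msi = "$env:TEMP\puppet-agent-x64-latest.msi"
Invoke-WebRequest -Uri 'https://downloads.puppet.com/windows/puppet-agent-x64-latest.msi' -OutFile $msi
Start-Process msiexec.exe -Wait -ArgumentList "/qn /norestart /i `"$msi`" PUPPET_MASTER_SERVER=puppet.example.com"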

Combine Puppet and DSC for greater support

Admins need basic knowledge of Linux to use a Puppet master service. They do not need to have a Puppet master because they can write manifests on nodes and apply them, but that is likely not a scalable option. For purely Windows-based shops, training in both Linux and Puppet will make taking the Puppet plunge easier. It requires more time to set up and configure Windows systems in Puppet the same way they would be configured in SCCM. Admins should design the code before users start writing and deploying Puppet manifests or DevOps teams add CI/CD pipelines.

DSC is one of the first areas admins look to when they manage Windows with Puppet code. The modules are written in C# or PowerShell. DSC has no native monitoring GUI, which makes getting an overall view of a machine's configuration complex. In its enterprise version, Puppet has native support for web-based reporting. Admins can also use a free open source option, such as Foreman.

Due to the number of community modules available on the PowerShell Gallery, DSC receives the most Windows support for code-based management, but admins can combine Puppet with DSC to get complete coverage for Windows management. Puppet contains native modules and a DSC module with PowerShell DSC modules built in. Admins may also use the dsc_lite module, which can use almost any DSC module available in Puppet. The dsc_lite modules are maintained outside of Puppet completely.

How to use Puppet to disable services

Administrators can use Puppet to run and disable services. Using native Puppet support without a DSC Puppet module, admins could write a manifest to always have the Netlogon, BITS and W3SVC services running when a Puppet run completes. Place the name of each Windows service in the Puppet array $svc_name.

$svc_name = ['netlogon','BITS','W3SVC']

service { $svc_name:
  ensure => 'running',
}

In the next example, the Puppet DSC module ensures that the web server Windows feature is installed on the node and reboots if a pending reboot is required.

dsc_windowsfeature { 'webserverfeature':
  dsc_ensure => 'present',
  dsc_name   => 'Web-Server',
}

reboot { 'dsc_reboot':
  message => 'Puppet needs to reboot now',
  when    => 'pending',
  onlyif  => 'pending_dsc_reboot',
}


Try these PowerShell networking commands to stay connected

While it would be nice if they did, servers don’t magically stay online on their own.

Servers go offline for a lot of reasons; it’s your job to find a way to determine network connectivity to these servers quickly and easily. You can use PowerShell networking commands, such as the Test-Connection and Test-NetConnection cmdlets to help.

The problem with ping

For quite some time, system administrators used ping to test network connectivity. This little utility sends an Internet Control Message Protocol (ICMP) echo request to an endpoint and listens for an ICMP reply.

The ping utility runs a fairly simple test to check for a response from a host.

Because ping only tests ICMP, its effectiveness as a full connectivity test is limited. Another caveat: The Windows firewall blocks ICMP requests by default. If the ICMP request doesn't reach the server in question, you'll get a false negative, which makes the ping results misleading.

The Test-Connection cmdlet offers a deeper look

We need a better way to test server network connectivity, so let's use PowerShell instead of ping. The Test-Connection cmdlet also sends ICMP packets, but it uses Windows Management Instrumentation, which gives us more granular results. While ping returns text-based output, the Test-Connection cmdlet returns a Win32_PingStatus object that contains a lot of useful information.
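
For instance, you can pull fields from the returned object that plain text output doesn't expose. A quick sketch:

# Select a few Win32_PingStatus properties from a single echo request
Test-Connection -ComputerName 'www.google.com' -Count 1 |
    Select-Object Address, IPV4Address, ResponseTime, BufferSize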

The Test-Connection command has a few different parameters you can use to tailor your query to your liking, such as changing the buffer size and defining the number of seconds between the pings. The output is the same but the request is a little different.

Test-Connection www.google.com -Count 2 -BufferSize 128 -Delay 3

You can use Test-Connection to check on remote computers and ping a remote computer as well, provided you have access to those machines. The command below connects to the SRV1 and SRV2 computers and sends ICMP requests from those computers to www.google.com:

Test-Connection -Source 'SRV2', 'SRV1' -ComputerName 'www.google.com'

Source Destination IPV4Address   IPV6Address Bytes Time(ms)
------ ----------- -----------   ----------- ----- --------
SRV2   google.com  172.217.7.174             32    5
SRV2   google.com  172.217.7.174             32    5
SRV2   google.com  172.217.7.174             32    6
SRV2   google.com  172.217.7.174             32    5
SRV1   google.com  172.217.7.174             32    5
SRV1   google.com  172.217.7.174             32    5
SRV1   google.com  172.217.7.174             32    5
SRV1   google.com  172.217.7.174             32    5

If the output is too verbose, and you just want a simple result, use the Quiet parameter.

Test-Connection -ComputerName google.com -Quiet
True
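
Because the Quiet parameter returns a simple Boolean, it lends itself to sweeping a whole server list. A minimal sketch, with hypothetical server names:

# Warn about any servers that fail a single ping (server names are placeholders)
$servers = 'SRV1', 'SRV2', 'SRV3'
$servers | Where-Object { -not (Test-Connection -ComputerName $_ -Count 1 -Quiet) } |
    ForEach-Object { Write-Warning "$_ is not responding" }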

For more advanced network checks, try the Test-NetConnection cmdlet

If simple ICMP requests aren’t enough to test network connectivity, PowerShell also provides the Test-NetConnection cmdlet. This cmdlet is the successor to Test-Connection and goes beyond ICMP to check network connectivity.

For basic use, Test-NetConnection just needs a value for the ComputerName parameter and will mimic Test-Connection’s behavior.

Test-NetConnection -ComputerName www.google.com

ComputerName : www.google.com
RemoteAddress : 172.217.9.68
InterfaceAlias : Ethernet 2
SourceAddress : X.X.X.X
PingSucceeded : True
PingReplyDetails (RTT) : 34 ms

Test-NetConnection has advanced capabilities and can test for open ports. The example below will check to see if port 80 is open:

Test-NetConnection -ComputerName www.google.com -Port 80

ComputerName : google.com
RemoteAddress : 172.217.5.238
RemotePort : 80
InterfaceAlias : Ethernet 2
SourceAddress : X.X.X.X
TcpTestSucceeded : True

The boolean TcpTestSucceeded returns True to indicate port 80 is open.
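
The same check extends to several ports in one pass. A short sketch, with arbitrary port choices:

# Test a few common ports against one host
foreach ($port in 80, 443, 3389) {
    Test-NetConnection -ComputerName 'www.google.com' -Port $port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}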

We can also use the TraceRoute parameter with the Test-NetConnection cmdlet to check the progress of packets to the destination address.

Test-NetConnection -ComputerName google.com -TraceRoute

ComputerName : google.com
RemoteAddress : 172.217.5.238
InterfaceAlias : Ethernet 2
SourceAddress : X.X.X.X
PingSucceeded : True
PingReplyDetails (RTT) : 44 ms
TraceRoute : 192.168.86.1
192.168.0.1
142.254.146.117
74.128.4.113
65.29.30.36
65.189.140.166
66.109.6.66
66.109.6.30
107.14.17.204
216.6.87.149
72.14.198.28
108.170.240.97
216.239.54.125
172.217.5.238

If you dig into the help for the Test-NetConnection cmdlet, you’ll find it has quite a few parameters to test many different situations.


What are the steps for an Exchange certificate renewal?

An expired Exchange certificate can bring your messaging platform to a halt, but it’s easy enough to check and replace the expired certificate.

When mail stops flowing, Outlook access breaks and the Exchange Management Console/Shell gives errors, it might be time to see if an Exchange certificate renewal is in order.

Exchange creates a self-signed certificate by default during installation and enables it for services that include Simple Mail Transfer Protocol (SMTP) and Internet Information Services (IIS). Many companies do not allow access to Outlook on the web, so mail is only accessible internally. This limits the Exchange Server capabilities, as Microsoft designed it to be accessible from anywhere on any device.

For companies that choose to limit Exchange’s functionality, the IT staff often opts to use the default certificate, which has a five-year life span. In five years, IT might forget about the Exchange certificate renewal until they receive countdown emails warning that it will expire. If nobody sees these emails and the certificate expires, then problems will start, as Exchange services that require a valid certificate might not work.

To check a certificate’s status, run the following PowerShell command:

Get-ExchangeCertificate | fl
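
To catch an expiring certificate before it causes an outage, you can filter on the NotAfter property. This is a sketch, and the 30-day window is an arbitrary choice:

# List certificates that expire within the next 30 days
Get-ExchangeCertificate |
    Where-Object { $_.NotAfter -le (Get-Date).AddDays(30) } |
    Format-List Thumbprint, Services, Subject, NotAfter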

Assign a new certificate for Exchange 2010

If Exchange breaks due to an expired certificate, you might want to push for a quick fix by requesting a certificate from an internal certificate authority. This won't work because the certificate authority will not sign the certificate.

This is typically when the trouble starts: help desk tickets flood in and panic sets in. You might try to adjust the settings in IIS, but this can break Exchange. However, the fix is simple.

Run the New-ExchangeCertificate command to initiate the Exchange certificate renewal process. This PowerShell cmdlet will create a new self-signed certificate for Exchange 2010. The command prompts you to replace the existing certificate. Click Yes to proceed.

Execute the PowerShell New-ExchangeCertificate cmdlet to build a new self-signed certificate for Exchange 2010.

Next, assign the services from the old certificate to the new one and perform an IISReset from an elevated command prompt to get Exchange services running again.
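
A minimal sketch of that step, assuming the certificate just created is the one with the latest expiration date:

# Enable the newest certificate for SMTP and IIS, then restart IIS
# (assumes the new self-signed certificate has the latest NotAfter value)
$cert = Get-ExchangeCertificate | Sort-Object NotAfter -Descending | Select-Object -First 1
Enable-ExchangeCertificate -Thumbprint $cert.Thumbprint -Services 'SMTP,IIS'
iisreset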

Finally, ensure the bindings in IIS are set to use the new certificate.

Explore the Cubic congestion control provider for Windows

Administrators may not be familiar with the Cubic congestion control provider, but Microsoft’s move to make this the default setting in the Windows networking stack means IT will need to learn how it works and how to manage it.

When Microsoft released Windows Server version 1709 in its Semi-Annual Channel, the company introduced a number of features, such as support for data deduplication in the Resilient File System and support for virtual network encryption.

Microsoft also made the Cubic algorithm the default congestion control provider for that version of Windows Server. The most recent preview builds of Windows 10 and Windows Server 2019 (Long-Term Servicing Channel) also enable Cubic by default.

Microsoft added Cubic to Windows Server 2016, as well, but it calls this implementation an experimental feature. Due to this disclaimer, administrators should learn how to manage Cubic if unexpected behavior occurs.

Why Cubic matters in today’s data centers

Congestion control mechanisms improve performance by monitoring packet loss and latency and making adjustments accordingly. TCP/IP initially limits the size of the congestion window and then gradually increases the window size over time. This process stops when the maximum receive window size is reached or packet loss occurs. However, this method hasn't aged well with the advent of high-bandwidth networks.

For the last several years, Windows has used Compound TCP as its standard congestion control provider. Compound TCP increases the size of the receive window and the volume of data sent.

Cubic, which has been the default congestion provider for Linux since 2006, is a protocol that improves traffic flow by keeping track of congestion events and dynamically adjusting the congestion window.

A Microsoft blog on the networking features in Windows Server 2019 said Cubic performs better over a high-speed, long-distance network because it accelerates to optimal speed more quickly than Compound TCP.

Enable and disable Cubic with netsh commands

Microsoft added Cubic to later builds of Windows Server 2016. You can use the following PowerShell command to see if Cubic is in your build:

Get-NetTCPSetting | Select-Object SettingName, CongestionProvider

Technically, Cubic is a TCP/IP add-on. Because PowerShell does not support Cubic yet, admins must enable it in Windows Server 2016 with the netsh command from an elevated command prompt.

Netsh uses the concepts of contexts and subcontexts to configure many aspects of Windows Server’s networking stack. A context is similar to a mode. For example, the netsh firewall command places netsh in a firewall context, which means that the utility will accept firewall-related commands.

Microsoft added Cubic-related functionality into the netsh interface context. The interface context — abbreviated as INT in some Microsoft documentation — provides commands to manage the TCP/IP protocol.

Prior to Windows Server 2012, admins could make global changes to the TCP/IP stack by referencing the desired setting directly. For example, if an administrator wanted to use the Compound TCP congestion control provider — which was the congestion control provider since Windows Vista and Windows Server 2008 — they could use the following command:

netsh int tcp set global congestionprovider=ctcp

Newer versions of Windows Server use netsh and the interface context, but Microsoft made some syntax changes in Windows Server 2012 that carried over to Windows Server 2016. Rather than setting values directly, Windows Server 2012 and Windows Server 2016 use supplemental templates.

In this example, we enable Cubic in Windows Server 2016:

netsh int tcp set supplemental template=internet congestionprovider=cubic

This command launches netsh, switches to the interface context, loads the Internet CongestionProvider template and sets the congestion control provider to Cubic. Similarly, we can switch from the Cubic provider to the default Compound congestion provider with the following command:

netsh int tcp set supplemental template=internet congestionprovider=compound
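
To confirm which congestion provider each template ended up with, you can list the supplemental template settings:

netsh int tcp show supplemental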

Microsoft shuts down zero-day exploit on September Patch Tuesday

Microsoft's September Patch Tuesday shut down a zero-day vulnerability disclosed on Twitter in August and addressed a denial-of-service flaw.

A security researcher identified by the Twitter handle SandboxEscaper shared a zero-day exploit in the Windows task scheduler on Aug. 27. Microsoft issued an advisory after SandboxEscaper uploaded proof-of-concept code on GitHub. The company fixed the ALPC elevation of privilege vulnerability (CVE-2018-8440) with its September Patch Tuesday security updates. A malicious actor could use the exploit to gain elevated privileges in unpatched Windows systems.

“[The attacker] can run arbitrary code in the context of local system, which pretty much means they own the box … that one’s a particularly nasty one,” said Chris Goettl, director of product management at Ivanti, based in South Jordan, Utah.

The vulnerability requires local access to a system, but the public availability of the code increased the risk. An attacker used the code to send targeted spam that, if successful, implemented a two-stage backdoor on a system.

“Once enough public information gets out, it may only be a very short period of time before an attack could be created,” Goettl said. “Get the Windows OS updates deployed as quickly as possible on this one.”

Microsoft addresses three more public disclosures

Administrators should prioritize patching three more public disclosures highlighted in September Patch Tuesday.

Microsoft resolved a denial-of-service vulnerability (CVE-2018-8409) with ASP.NET Core applications. An attacker could cause a denial of service with a specially crafted request to the application. Microsoft fixed the framework’s web request handling abilities, but developers also must build the update into the vulnerable application in .NET Core and ASP.NET Core.

Chris Goettl of Ivanti

A remote code execution vulnerability (CVE-2018-8457) in the Microsoft Scripting Engine opens the door to a phishing attack, where an attacker uses a specially crafted image file to compromise a system and execute arbitrary code. A user could also trigger the attack if they open a specially constructed Office document.

“Phishing is not a true barrier; it’s more of a statistical challenge,” Goettl said. “If I get enough people targeted, somebody’s going to open it.”

This exploit is rated critical for Windows desktop systems using Internet Explorer 11 or Microsoft Edge. Organizations that practice least privilege principles can mitigate the impact of this exploit.

Another critical remote code execution vulnerability in Windows (CVE-2018-8475) allows an attacker to send a specially crafted image file to a user, who would trigger the exploit if they open the file.

September Patch Tuesday issues 17 critical updates

September Patch Tuesday addressed more than 60 vulnerabilities, 17 of them rated critical, with a large number focused on browser and scripting engine flaws.

“Compared to last month, it’s a pretty mild month. The OS and browser updates are definitely in need of attention,” Goettl said.

Microsoft closed two critical remote code execution flaws (CVE-2018-0965 and CVE-2018-8439) in Hyper-V and corrected how the Microsoft hypervisor validates guest operating system user input. On an unpatched system, an attacker could run a specially crafted application on a guest operating system to force the Hyper-V host to execute arbitrary code.

Microsoft also released an advisory (ADV180022) for administrators to protect Windows systems from a denial-of-service vulnerability named “FragmentSmack” (CVE-2018-5391). An attacker can use this exploit to target the IP stack with eight-byte IP fragments, withholding the last fragment to trigger full CPU utilization and force systems to become unresponsive.

Microsoft also released an update to a Microsoft Exchange 2010 remote code execution vulnerability (CVE-2018-8154) first addressed on May Patch Tuesday. The fix corrects the faulty update that could break functionality with Outlook on the web or the Exchange Control Panel. 

“This might catch people by surprise if they are not looking closely at all the CVEs this month,” Goettl said.

PowerShell commands to copy files: Basic to advanced methods

Copying files between folders, drives and machines is a common administrative task that PowerShell can simplify. Administrators who understand the parameters associated with the Copy-Item commands and how they work together will get the most from the PowerShell commands to copy files.

PowerShell has providers — .NET programs that expose the data in a data store for viewing and manipulation — and a set of common cmdlets that work across providers. These include the *-Item, *-ItemProperty, *-Content, *-Path and *-Location cmdlets. Therefore, you can use the Copy-Item cmdlet to copy files, Registry keys and variables.

The example in the following command uses variable $a:

Copy-Item -Path variable:a -Destination variable:aa

When working with databases, administrators commonly use transactions — one or more commands treated as a unit — so the commands either all work or they all roll back. PowerShell transactions are only supported by the Registry provider, so the UseTransaction parameter on Copy-Item doesn’t do anything. The UseTransaction parameter is part of Windows PowerShell v2 through v5.1, but not in the open source PowerShell Core.

PowerShell has a number of aliases for its major cmdlets. Copy-Item uses three aliases.

Get-Alias -Definition copy-item

CommandType     Name                 Version    Source
-----------     ----                 -------    ------
Alias           copy -> Copy-Item
Alias           cp -> Copy-Item
Alias           cpi -> Copy-Item

These aliases exist only in Windows PowerShell; they were removed from PowerShell Core to prevent conflicts with the native Linux commands.

Ways to use PowerShell commands to copy files

To show how the various Copy-Item parameters work, create a test file with the following command:

Get-Process | Out-File -FilePath c:\test\p1.txt

Use this command to copy a file:

Copy-Item -Path C:\test\p1.txt -Destination C:\test2

The issue with this command is there is no indication if the operation succeeds or fails.

When working interactively, you can use the alias and positional parameters to reduce typing.

Copy C:\test\p1.txt C:\test2

While this works in scripts, it makes the code harder to understand and maintain.

To get feedback on the copy, we use the PassThru parameter:

Copy-Item -Path C:\test\p1.txt -Destination C:\test2 -PassThru

    Directory: C:\test2

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----       13/08/2018     11:01          40670 p1.txt

Or we can use the Verbose parameter:

Administrators can use the Verbose parameter to see detailed output when running PowerShell commands.

The Verbose parameter gives you information as the command executes, while PassThru shows you the result.

By default, PowerShell overwrites the file if a file with the same name exists in the target folder. If the file in the target directory is set to read-only, you’ll get an error.

Copy-Item -Path C:\test\p1.txt -Destination C:\test2

Copy-Item : Access to the path 'C:\test2\p1.txt' is denied.
At line:1 char:1
+ Copy-Item -Path C:\test\p1.txt -Destination C:\test2
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : PermissionDenied: (C:\test\p1.txt:FileInfo) [Copy-Item], UnauthorizedAccessException
    + FullyQualifiedErrorId : CopyFileInfoItemUnauthorizedAccessError,Microsoft.PowerShell.Commands.CopyItemCommand

You need to be a PowerShell Jedi to overcome this. Use the Force parameter:

Copy-Item -Path C:\test\p1.txt -Destination C:\test2 -Force

As part of the copy process, you can rename the file. You must include the new file name as part of the destination. For example, this code creates nine copies of the p1.txt file called p2.txt through p10.txt.

2..10 | foreach {
    $newname = "p$_.txt"
    Copy-Item -Path C:\test\p1.txt -Destination C:\test\$newname
}

PowerShell commands to copy multiple files

There are a few techniques to copy multiple files when using PowerShell.

Copy-Item -Path C:\test\*.txt -Destination C:\test2

Copy-Item -Path C:\test\* -Filter *.txt -Destination C:\test2

Copy-Item -Path C:\test\* -Include *.txt -Destination C:\test2

These commands copy all the .txt files from the test folder to the test2 folder, but you can also be more selective and only copy files with, for instance, a 6 in the name.

Copy-Item -Path C:\test\* -Include *6*.txt -Destination C:\test2 -PassThru

    Directory: C:\test2

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----       13/08/2018     11:01          40670 p6.txt
-a----       13/08/2018     11:01          40670 x6.txt

You can also exclude certain files from the copy operation. This command copies all the text files that start with the letter p unless there is a 7 in the name:

Copy-Item -Path C:\test\* -Filter p*.txt -Exclude *7*.txt -Destination C:\test2

Administrators can fine-tune the PowerShell commands to copy certain files from a folder and exclude others.

You can combine the Path, Filter, Include or Exclude parameters to define exactly what to copy. If you use Include and Exclude in the same call, PowerShell ignores Exclude. You can also supply an array of file names. The path is simplified if your working folder is the source folder for the copy.

Copy-Item -Path p1.txt,p3.txt,x5.txt -Destination C:\test2

The Path parameter accepts pipeline input.

Get-ChildItem -Path C:\test\p*.txt |
    where {(($_.BaseName).Substring(1,1) % 2 ) -eq 0} |
    Copy-Item -Destination C:\test2

PowerShell checks the p*.txt files in the c:\test folder to see if the second character is divisible by 2. If so, PowerShell copies the file to the C:\test2 folder.

If you end up with a folder or file name that contains wild-card characters, use the LiteralPath parameter instead of the Path parameter. LiteralPath treats all the characters as literals and ignores any possible wild-card implications.

To copy a folder and all its contents, use the Recurse parameter.

Copy-Item -Path c:\test -Destination c:\test2 -Recurse

The recursive copy will work its way through all the subfolders below the c:\test folder. PowerShell will then create a folder named test in the destination folder and copy the contents of c:\test into it.

When copying between machines, you can use UNC paths to bypass the local machine.

Copy-Item -Path \\server1\fs1\test\p1.txt -Destination \\server2\arctest

Another option is to use PowerShell commands to copy files over a remoting session.

$cred = Get-Credential -Credential W16ND01\Administrator

$s = New-PSSession -VMName W16ND01 -Credential $cred

In this case, we use PowerShell Direct to connect to the remote machine. You’ll need the Hyper-V module loaded to create the remoting session over the VMBus. Next, use PowerShell commands to copy files to the remote machine.

Copy-Item -Path c:\test -Destination c:\ -Recurse -ToSession $s

You can also copy from the remote machine.

Copy-Item -Path c:\test\p*.txt -Destination c:\test3 -FromSession $s

The ToSession and FromSession parameters control the direction of the copy and whether the source and destination are on the local machine or a remote one. You can’t use ToSession and FromSession in the same command.

Copy-Item doesn’t have any error checking or restart capabilities. For those features, you’ll need to write the code. Here is a starting point:

function Copy-FileSafer {
    [CmdletBinding()]
    param (
        [string]$path,
        [string]$destinationfolder
    )
    if (-not (Test-Path -Path $path)) {
        throw "File not found: $path"
    }
    $sourcefile = Split-Path -Path $path -Leaf
    $destinationfile = Join-Path -Path $destinationfolder -ChildPath $sourcefile
    $b4hash = Get-FileHash -Path $path
    try {
        Copy-Item -Path $path -Destination $destinationfolder -ErrorAction Stop
    }
    catch {
        throw "File copy failed"
    }
    finally {
        $afhash = Get-FileHash -Path $destinationfile
        if ($afhash.Hash -ne $b4hash.Hash) {
            throw "File corrupted during copy"
        }
        else {
            Write-Information -MessageData "File copied successfully" -InformationAction Continue
        }
    }
}

In this script, the file path for the source is tested and a hash of the file is calculated. The file copy occurs within a try-catch block to catch and report errors.

With additional coding, the script can recursively retry a certain number of times. After each copy attempt, the script can calculate the hash of the file and compare it to the original. If they match, all is well. If not, an error is reported.
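
As a sketch of that idea, a simple wrapper can retry the safe copy a few times before giving up; the attempt count and delay below are arbitrary choices:

# Retry Copy-FileSafer up to three times, pausing between attempts
$maxAttempts = 3
foreach ($attempt in 1..$maxAttempts) {
    try {
        Copy-FileSafer -path 'C:\test\p1.txt' -destinationfolder 'C:\test2'
        break
    }
    catch {
        if ($attempt -eq $maxAttempts) { throw }
        Start-Sleep -Seconds 5
    }
}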

Create and configure a shielded VM in Hyper-V

Creating a shielded VM to protect your data is a relatively straightforward process that consists of a few simple steps and PowerShell commands.

A shielded VM depends on a dedicated server separate from the Hyper-V host that runs the Host Guardian Service (HGS). The HGS server must not be domain-joined because it is going to take on the role of a special-purpose domain controller. To install HGS, open an administrative PowerShell window and run this command:

Install-WindowsFeature -Name HostGuardianServiceRole -Restart

Once the server reboots, create the required domain. Here, the password is [email protected] and the domain name is PoseyHGS.net. Create the domain by entering these commands:

$AdminPassword = ConvertTo-SecureString -AsPlainText '[email protected]' -Force

Install-HgsServer -HgsDomainName 'PoseyHGS.net' -SafeModeAdministratorPassword $AdminPassword -Restart

Figure A. This is how to install the Host Guardian Service server.

The next step in the process of creating and configuring a shielded VM is to create two certificates: an encryption certificate and a signing certificate. In production, you must use certificates from a trusted certificate authority. In a lab environment, you can use self-signed certificates, such as those used in the example below. To create these certificates, use the following commands:

$CertificatePassword = ConvertTo-SecureString -AsPlainText '[email protected]' -Force
$SigningCert = New-SelfSignedCertificate -DNSName "signing.poseyhgs.net"
Export-PfxCertificate -Cert $SigningCert -Password $CertificatePassword -FilePath 'c:\Certs\SigningCert.pfx'
$EncryptionCert = New-SelfSignedCertificate -DNSName "encryption.poseyhgs.net"
Export-PfxCertificate -Cert $EncryptionCert -Password $CertificatePassword -FilePath 'C:\certs\EncryptionCert.pfx'

Figure B. This is how to create the required certificates.

Now, it’s time to initialize the HGS server. To perform the initialization process, use the following command:

Initialize-HgsServer -HgsServiceName 'hgs' -SigningCertificatePath 'C:\certs\SigningCert.pfx' -SigningCertificatePassword $CertificatePassword -EncryptionCertificatePath 'C:\certs\EncryptionCert.pfx' -EncryptionCertificatePassword $CertificatePassword -TrustTpm

Figure C. This is what the initialization process looks like.

The last thing you need to do when provisioning the HGS server is to set up conditional domain name service (DNS) forwarding. To do so, use the following commands:

Add-DnsServerConditionalForwarderZone -Name "PoseyHGS.net" -ReplicationScope "Forest" -MasterServers

netdom trust PoseyHGS.net /domain:PoseyHGS.net /userD:PoseyHGS.net\Administrator /password: /add

In the process of creating and configuring a shielded VM, the next step is to add the guarded Hyper-V host to the Active Directory (AD) domain that you just created. You must create a global AD security group called GuardedHosts. You must also set up conditional DNS forwarding on the host so the host can find the domain controller.
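
A sketch of the group creation step, run on the domain controller with the ActiveDirectory module loaded:

# Create the global security group that will hold the guarded Hyper-V hosts
New-ADGroup -Name 'GuardedHosts' -GroupScope Global -GroupCategory Security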

Once all of that is complete, retrieve the security identifier (SID) for the GuardedHosts group, and then add that SID to the HGS attestation host group. From the domain controller, enter the following command to retrieve the group’s SID:

Get-ADGroup “GuardedHosts” | Select-Object SID

Once you know the SID, run this command on the HGS server:

Add-HgsAttestationHostGroup -Name “GuardedHosts” -Identifier “

Now, it’s time to create a code integrity policy on the Hyper-V server. To do so, enter the following commands:

New-CIPolicy -Level FilePublisher -Fallback Hash -FilePath 'C:\Policy\HWLCodeIntegrity.xml'

ConvertFrom-CIPolicy -XmlFilePath 'C:\Policy\HWLCodeIntegrity.xml' -BinaryFilePath 'C:\Policy\HWLCodeIntegrity.p7b'

Now, you must copy the P7B file you just created to the HGS server. From there, run this command:

Add-HgsAttestationCIPolicy -Path 'C:\HWLCodeIntegrity.p7b' -Name 'StdGuardHost'

Get-HgsServer

At this point, the server should display an attestation URL and a key protection URL. Be sure to make note of both of these URLs. Now, go back to the Hyper-V host and enter this command:

Set-HgsClientConfiguration -KeyProtectionServerUrl "" -AttestationServerUrl "

To wrap things up on the Hyper-V server, retrieve an XML file from the HGS server and import it. You must also define the host’s HGS guardian. Here are the commands to do so:

Invoke-WebRequest "/service/metadata/2014-07/metadata.xml" -OutFile 'C:\certs\metadata.xml'

Import-HgsGuardian -Path 'C:\certs\metadata.xml' -Name 'PoseyHGS' -AllowUntrustedRoot

Figure D. Shield a Hyper-V VM by selecting a single checkbox.

Once you import the host guardian into the Hyper-V server, you can use PowerShell to configure a shielded VM. However, you can also enable shielding directly through the Hyper-V Manager by selecting the Enable Shielding checkbox on the VM’s Settings screen, as shown in Figure D above.
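
As a sketch of the PowerShell route, using the guardian imported above and a hypothetical VM named TestVM:

# Build a key protector from the imported guardian and shield the VM
# (the VM name is a placeholder; the guardian name matches the Import-HgsGuardian step above)
$guardian = Get-HgsGuardian -Name 'PoseyHGS'
$kp = New-HgsKeyProtector -Owner $guardian -AllowUntrustedRoot
Set-VMKeyProtector -VMName 'TestVM' -KeyProtector $kp.RawData
Set-VMSecurityPolicy -VMName 'TestVM' -Shielded $true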

Microsoft Ignite 2018 conference coverage

Introduction

Microsoft continues to gain market momentum fueled in part by an internal culture shift and the growing popularity of the Azure cloud platform that powers the company’s popular Office 365 product.

When CEO Satya Nadella took the helm in 2014, he made a concerted effort to turn the company away from its proprietary background to win over developers and enterprises with cloud and DevOps ambitions.

To reinforce this new agenda, Microsoft acquired GitHub, the popular software development platform, for $7.5 billion in June and expanded its developer-friendly offerings in Azure — from Kubernetes management to a Linux-based distribution for use with IoT devices. But many in IT have long memories and don’t easily forget the company’s blunders, which can wipe away any measure of good faith at a moment’s notice.

PowerShell, the popular automation tool, continues to experience growing pains after Microsoft converted it to an open source project that runs on Linux and macOS systems. As Linux workloads on Azure continue to climb — around 40% of Azure’s VMs run on Linux according to some reports — and Microsoft releases Linux versions of on-premises software, PowerShell Core is one way Microsoft is addressing the needs of companies with mixed OS environments.

While this past year solidified Microsoft’s place in the cloud and open source arenas, Nadella wants the company to remain on the cutting edge and incorporate AI into every aspect of the business. The steady draw of income from its Azure product and Office 365 — more than 135 million users — as well as its digital transformation agenda, have proven successful so far. So what’s in store for 2019?

This Microsoft Ignite 2018 guide gives you a look at the company’s tactics over the past year along with news from the show to help IT pros and administrators prepare for what’s coming next on the Microsoft roadmap. 

1. Latest news on Microsoft

Recent news on Microsoft’s product and service developments

Stay current on Microsoft’s new products and updated offerings before and during the Microsoft Ignite 2018 show.

2. A closer look

Analyzing Microsoft’s moves in 2018

Take a deeper dive into Microsoft’s developments with machine learning, DevOps and the cloud with these articles.

3. Glossary

Definitions related to Microsoft products and technologies