Category Archives: Active Directory and Group Policy


Roll your own Windows patching tool with PowerShell


This PowerShell-based tutorial helps administrators build an automated routine that audits Windows machines, then applies missing patches to lighten this management task.



It’s a necessary but loathsome activity for just about every systems administrator: Windows patching.


Windows systems get patched via Microsoft Update. There are many Windows patching tools that help with this procedure.

Windows Server Update Services (WSUS) is free, but lacks some tooling for administrators who might want to fine-tune the process. System Center Configuration Manager enables administrators to build highly customized patching rollouts, but it requires some time to learn — and it can be a sizeable expense for some organizations. Regardless of your choice, each uses the built-in Windows Update Agent to connect to Microsoft to obtain new updates.

If a commercial Windows patching tool is too costly and the limitations of a free tool are too constraining, there is the option to create your own automated procedure. There are several advantages to this approach. The main perk is the flexibility to build a Windows patching system that matches the organization’s needs. But this method requires specific expertise and will take a significant amount of work to build.

To start construction of a Windows patching tool, it helps to think about the details behind the process before writing a single line of code; for example:

  • How do you target the systems that need patches?
  • What source do you use for the patches?
  • Which patches do you apply?
  • How do you deliver the patches and install them?

There are several ways to handle these tasks, but in this article, we will address those areas in this fashion:

  • Targeting: via Active Directory organizational unit (OU);
  • Patch source: Microsoft Update;
  • Patch type: all critical patches; and
  • Delivery: use PowerShell to remotely invoke the Windows Update Agent.

The administrator can configure all the options at the time of patching, except perhaps the patch source. The Windows Update Agent uses the registry — possibly through a group policy object — to determine if the updates will come from Microsoft Update or a local WSUS server. A Windows patching tool built on PowerShell will use the source set in the Windows Update agent.
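As a quick sanity check before patching, you can inspect which source the agent is configured to use. This is a minimal sketch that reads the policy registry key; the WUServer value is only present when a GPO or registry setting points the agent at a WSUS server:

```powershell
# Inspect where the Windows Update Agent is configured to pull updates from.
# If WUServer is absent from the policy key, updates come from Microsoft Update.
$policyKey = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
$wsus = (Get-ItemProperty -Path $policyKey -ErrorAction SilentlyContinue).WUServer

if ($wsus) {
    "Patch source: WSUS server at $wsus"
} else {
    "Patch source: Microsoft Update"
}
```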

To start, we will use a prebuilt PowerShell module I developed called WindowsUpdate. Download and install the module. To see a list of available commands, enter:

Get-Command -Module WindowsUpdate

Next, query a list of computers to update. For this article, we’ll use a single Active Directory OU, but the source could be anything: a database, a CSV file or an Excel spreadsheet, for example. We’ll use the Active Directory module included with Microsoft’s Remote Server Administration Tools package.

After installing that module, we can query AD computers with the Get-ADComputer cmdlet. To find all computers in a single OU, use the SearchScope and SearchBase parameters. With the command below, we find the computers in the Servers OU of the domain mylab.local and return their names:

$computersToPatch = Get-ADComputer -Filter * -SearchScope OneLevel -SearchBase 'OU=Servers,DC=mylab,DC=local' | Select-Object -ExpandProperty Name

Next, let’s target a machine. When I use a new tool, I usually retrieve the existing state of the machine first. I perform a Get operation as a test for the tool and assess the current patch state. The command below queries the first computer in our variable and finds all the available updates that are not installed. By default, it just checks for missing updates:

Get-WindowsUpdate -ComputerName $computersToPatch[0]

Once you’ve seen the output and you’re comfortable with the patches the tool will install, use the Install-WindowsUpdate command to force the Windows Update agent on the remote computer to download and install the missing updates.

Install-WindowsUpdate -ComputerName $computersToPatch[0] -ForceReboot

Notice we’ve chosen to force a reboot on the machine if needed. By default, Install-WindowsUpdate does not attempt to reboot the computer if an update requires it.

We can take things a step further and install updates on all the target computers. In PowerShell, we can use a ForEach loop to iterate through each computer name in the $computersToPatch array and run Install-WindowsUpdate against each one.

foreach ($computer in $computersToPatch) {
    Install-WindowsUpdate -ComputerName $computer -ForceReboot
}

The loop goes through each computer in the Servers OU, checks each for missing patches, installs them and reboots the machine to complete the update process.
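The loop above assumes every target is online. As a hedge against unreachable machines, a slightly more defensive sketch might ping each computer first and skip any that don't respond:

```powershell
foreach ($computer in $computersToPatch) {
    # Skip machines that are offline rather than letting the update call fail
    if (-not (Test-Connection -ComputerName $computer -Count 1 -Quiet)) {
        Write-Warning "$computer is unreachable; skipping."
        continue
    }
    Install-WindowsUpdate -ComputerName $computer -ForceReboot
}
```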

This basic demonstration shows what’s possible with a free PowerShell tool. Open up the code for these commands and give them a closer look to see where a few modifications might work better with your environment.




Monitor Active Directory replication via PowerShell

When Active Directory replication breaks down, administrators need to know quickly to prevent issues with the services and applications that Active Directory oversees.

It is important to monitor Active Directory replication to ensure the process remains healthy. Larger organizations that use Active Directory typically have several domain controllers that rely on replication to synchronize networked objects — users, security groups, contacts and other information — in the Active Directory database. Changes in the database can be made at any domain controller, which must then be duplicated to the other domain controllers in an Active Directory forest. If the changes are not synchronized to a particular domain controller — or all domain controllers — in an Active Directory site, users in that location might encounter problems.

For example, if an administrator applies a security policy setting via a Group Policy Object to all workstations, all domain controllers in a domain should pick up the GPO changes. If one domain controller in a particular location fails to receive this update, users in that area will not receive the security configuration.

Why does Active Directory replication break?

Active Directory replication can fail for several reasons. If network ports between the domain controllers are not open or if the connection object is missing from a domain controller, then the synchronization process generally stops working.

Since domain controllers rely on the domain name system, if their service records are missing, the domain controllers will not communicate with each other, which causes a replication failure.

Check Active Directory replication status manually

There are many ways to check the Active Directory replication status manually.

Administrators can run the following string using the command-line repadmin utility to show the replication errors in the Active Directory forest:
repadmin /replsum /bysrc /bydest /errorsonly

Administrators can also use the Get-ADReplicationPartnerMetadata PowerShell cmdlet to check the replication status, which is used in the script further in this article.
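For a quick manual spot check, you can run the cmdlet against a single domain controller and filter for failures. A minimal sketch (the name DC1 is a placeholder for one of your domain controllers):

```powershell
# Pull replication metadata for one domain controller and surface only the
# partners reporting a nonzero (failed) last replication result.
Get-ADReplicationPartnerMetadata -Target 'DC1' -Partition * |
    Where-Object { $_.LastReplicationResult -ne 0 } |
    Select-Object Server, Partner, Partition, LastReplicationResult, LastReplicationSuccess
```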

Use a script to check replication health

While larger organizations might have an enterprise tool, such as System Center Operations Manager, to monitor Active Directory, a PowerShell script can be a helpful supplement to alert administrators on the replication status. Because so much of a business relies on a properly functioning Active Directory system, it can’t hurt to implement this script and have it run every day via a scheduled task. If the script finds an error, it will send an alert via email.

The system must meet a few requirements before executing the script:

  • It runs on a computer that reaches all domain controllers.
  • It is recommended to use a computer that runs Windows Server 2012 R2 or a Windows 10 computer joined to a domain in the Active Directory forest.
  • The computer has the Active Directory PowerShell modules installed.

How does the script work?

The PowerShell script uses the Get-ADReplicationPartnerMetadata cmdlet, which connects to the primary domain controller (PDC) emulator in the Active Directory forest and then collects the replication metadata for each domain controller.

The script checks the value of the LastReplicationResult attribute for each domain controller entry. If the value of LastReplicationResult is nonzero for any domain controller, the script considers this a replication failure. If this error is found, the script executes the Send-MailMessage cmdlet to send the email with a copy of the report in a CSV file. The script stores the replication report in C:\Temp\ReplStatus.CSV.

The settings in the script should be modified to use the email address to send the message along with the subject line and message body.

PowerShell script to check replication status

The following PowerShell script helps admins monitor Active Directory for these replication errors and delivers the findings via email. Be sure to modify the email settings in the script.

$ResultFile = "C:\Temp\ReplStatus.CSV"
$ADForestName = "TechTarget.com"
$GetPDCNow = Get-ADForest $ADForestName | Select-Object -ExpandProperty RootDomain | Get-ADDomain | Select-Object -Property PDCEmulator
$GetPDCNowServer = $GetPDCNow.PDCEmulator
$FinalStatus = "Ok"

# Remove any report left over from a previous run so -Append does not accumulate stale rows
Remove-Item $ResultFile -ErrorAction SilentlyContinue

Get-ADReplicationPartnerMetadata -Target * -Partition * -EnumerationServer $GetPDCNowServer -Filter {(LastReplicationResult -ne "0")} | Select-Object LastReplicationAttempt, LastReplicationResult, LastReplicationSuccess, Partition, Partner, Server | Export-CSV "$ResultFile" -NoTypeInformation -Append -ErrorAction SilentlyContinue

$TotNow = Get-Content $ResultFile -ErrorAction SilentlyContinue
$TotCountNow = $TotNow.Count

IF ($TotCountNow -ge 2)
{
    $RCSV = Import-CSV $ResultFile
    ForEach ($AllItems in $RCSV)
    {
        IF ($AllItems.LastReplicationResult -eq "0")
        {
            $FinalStatus = "Ok"
            $TestStatus = "Passed"
            $TestText = "Active Directory replication is working."
        }
        else
        {
            $TestStatus = "Critical"
            $TestText = "Replication errors occurred. Active Directory domain controllers are causing replication errors."
            $FinalStatus = "NOTOK"
            break
        }
    }
}

$TestText

IF ($FinalStatus -eq "NOTOK")
{
    ## Since some replication errors were reported, start the email procedure here.

    ### START - Modify email parameters here
$message = @"
Active Directory Replication Status

Active Directory Forest: $ADForestName

Thank you,
PowerShell Script
"@

    $SMTPPasswordNow = "PasswordHere"
    $ThisUserName = "UserName"
    $SecurePassword = ConvertTo-SecureString -String $SMTPPasswordNow -AsPlainText -Force
    $ToEmailNow = "EmailAddressHere"
    $EmailSubject = "SubjectHere"
    $SMTPServerNow = "SMTPServerName"
    $SMTPSenderNow = "SMTPSenderName"
    $SMTPPortNow = "SMTPPortHere"
    ### END - Modify email parameters here

    $AttachmentFile = $ResultFile
    $creds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "$ThisUserName", $SecurePassword
    Send-MailMessage -Credential $creds -SmtpServer $SMTPServerNow -From $SMTPSenderNow -Port $SMTPPortNow -To $ToEmailNow -Subject $EmailSubject -Attachments $AttachmentFile -UseSsl -Body $message
}

When the script completes, it generates a file that details the replication errors.

Replication error report
The PowerShell script compiles the Active Directory replication errors in a CSV file and delivers those results via email.

Administrators can run this script automatically through the Task Scheduler. Since the script takes about 10 minutes to run, it might be best to set it to run at a time when it will have the least impact, such as midnight.
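As a sketch of that scheduled task, the built-in ScheduledTasks cmdlets can register a daily midnight run; the script path C:\Scripts\Check-ADReplication.ps1 is an assumption, so point it at wherever you saved the script:

```powershell
# Register a daily midnight run of the replication check script.
# The path below is a placeholder; adjust it to your environment.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
           -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Check-ADReplication.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At '12:00AM'
Register-ScheduledTask -TaskName 'AD Replication Check' -Action $action -Trigger $trigger
```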

What is Windows Server Core? – Definition from WhatIs.com

Windows Server Core is a minimal installation option for the Windows Server operating system (OS) that has no GUI and only includes the components required to perform server roles and run applications.

The smaller code base in Server Core reduces the amount of resources required to run the OS, takes up less disk space and lowers Server Core’s exposure to outside threats. Microsoft removed the GUI, which frees more RAM and compute resources on the server, to run more — or more demanding — workloads, which can benefit highly virtualized environments.

The full Windows Server 2016 RTM installation takes about 10 GB of disk space, while the Server Core installation takes up about 6 GB of disk space. With fewer processes and services running the OS, there is less chance that an attacker can use an unpatched exploit to enter the organization’s network. Server Core eases management overhead with fewer configuration options to limit the issues that occur when an administrator applies an incorrect setting.

Server Core management can challenge less technically adept IT pros. The lack of a GUI requires the administrator to have a high level of proficiency with PowerShell. An organization needs to perform a thorough test of workloads on Server Core to ensure there are no issues with remote management before a move to the production environment.

Server Core is available in both the Windows Server Semi-Annual Channel and Long-Term Servicing Channel releases. Microsoft supports Windows Server products in the Long-Term Servicing Channel with five years of mainstream support, five years of extended support and an option for six additional years through Microsoft’s Premium Assurance program. Microsoft supports Windows Server products in the Semi-Annual Channel for 18 months from each release.

Windows Server Core management

Because it has no GUI, administrators manage Server Core with either PowerShell or various remote tools, such as Remote Server Administration Tools (RSAT), Remote Desktop Services or Server Manager.

Microsoft developed a number of PowerShell cmdlets for various administrative tasks required to deploy and manage Server Core. A more advanced shop can build PowerShell scripts to automate complex workflows for frequently performed procedures. An administrator can use a remote PowerShell session to connect to the Server Core installation to execute the cmdlets.
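For example, a remote session to a Server Core host (the name CORE01 is a placeholder) looks the same as any other PowerShell remoting target:

```powershell
# Run a one-off command against a Server Core host...
Invoke-Command -ComputerName CORE01 -ScriptBlock { Get-WindowsFeature | Where-Object Installed }

# ...or open an interactive remote session for ad hoc administration.
Enter-PSSession -ComputerName CORE01
```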

How to configure Windows Server 2016 Server Core

RSAT consists of a number of tools — Microsoft Management Console snap-ins, PowerShell cmdlet modules and command-line utilities — to manage the roles and features for Server Core. RSAT runs on a Windows client machine.

An admin can also use the Microsoft Server Configuration Tool — known as sconfig.cmd — to handle the initial configuration of a Server Core installation. The utility can restart or shut down the server, adjust Windows Update settings, enable the Remote Desktop Protocol and rename the host.

Uses for Windows Server Core

Server Core deployments are ideal for enterprises that need to deploy and maintain a large number of servers. Microsoft recommends Server Core for servers that require minimal administration once deployed for specific infrastructure roles, such as domain controllers and certificate authorities.

Microsoft recommends Server Core in Windows Server 2016 for the following roles: Active Directory (AD) Certificate Services, AD Domain Services, AD Lightweight Directory Services, AD Rights Management Services, Dynamic Host Configuration Protocol server, Domain Name System server, File Services, Hyper-V, licensing server, print and document services, Remote Desktop Connection Broker, Routing and Remote Access service, streaming media services, web server, Windows Server Update Services and Volume Activation Services.

History of Windows Server Core

Microsoft introduced Server Core with the release of Windows Server 2008. This installation option removed features and services not required to run the most common server roles. This version had limitations that held back adoption by administrators. There was no option to switch between Server Core and the full GUI version; if further modifications required the GUI, the admin needed to reinstall the OS. This release did not support certain administrative features, such as PowerShell remoting.

In Windows Server 2012, Microsoft made Server Core the default option for installation. The admin could use PowerShell to switch back and forth between the GUI if it was needed to install a driver or to perform another task that required the graphical interface. Once the administrator finished that job, the GUI component could be removed. Microsoft added an integrated scripting environment to the Server Core interface.

In Windows Server 2016, Microsoft removed the ability to convert Server Core to a full Windows Server with the GUI — also known as Server with Desktop Experience. Users need to perform a new installation to get the GUI with Windows Server.

Disadvantages of Windows Server Core

The lack of a GUI in Server Core is one drawback for some IT departments if administrators are not comfortable using PowerShell and remote management. A problem with a system that runs Server Core could tie up an inexperienced technician who must research how to use cmdlets or an unfamiliar utility, when the issue could be resolved more quickly with access to the GUI.

Server Core supports a large number of server roles, but there are quite a few that are not compatible with this OS. Also, many third-party applications require a GUI and do not support Server Core.

Windows Server 2016 removed the ability to switch a Server Core installation to the full GUI version — also known as Server with Desktop Experience — which took away the flexibility preferred by some administrators.

Windows Server Core vs. Nano Server

Microsoft released the initial version of Nano Server in Windows Server 2016 RTM as a separate installation option, and originally promoted it as an even smaller server deployment than Server Core at around 400 MB on disk.

In June 2017, Microsoft decided to rework Nano Server from a minimal server deployment option for infrastructure roles to a container-only image in the Windows Server 2016 release version 1709. This move stripped Nano Server’s servicing stack and numerous other components required to run various server roles, such as DNS and file server. The company recommends organizations use Server Core as a host for virtual machines (VMs), containers and traditional infrastructure workloads. 

Use Azure Storage Explorer to manage Azure storage accounts

You might have used third-party tools to manage Azure storage accounts — including managing storage blobs, queues and table storages — and VM files in the past, but there’s another option. Microsoft developed an Azure storage management tool that can manage multiple Azure storage accounts, which helps increase productivity. Meet certain requirements before installing the tool, and you can realize other benefits of using Azure Storage Explorer, such as performing complete Azure storage operational tasks from your desktop in a few simple steps.

Azure Storage Explorer was released in June 2016. Although Azure Storage Explorer is still in preview, many organizations use it to manage Azure storage accounts efficiently. There are several previous versions of Azure Storage Explorer; the latest version that is reliable and in production use is 0.8.16.

Benefits of using Azure Storage Explorer

One of the main benefits of using Azure Storage Explorer is that you can perform Azure storage operations-related tasks — copy, delete, download, manage snapshots. You can also perform other storage-related tasks, such as copying blob containers, managing access policies configured for blob containers and setting public access levels, from a single GUI installed on your desktop machine.

Another benefit of using this tool is that if you have Azure storage accounts created in both Azure Classic and Azure Resource Manager modes, the tool allows you to manage Azure storage accounts for both modes.

You can also use Azure Storage Explorer to manage storage accounts from multiple Azure subscriptions. This helps you track storage sizes and accounts from a single UI rather than logging into the Azure portal to check the status of Azure storage for a different Azure subscription.

Azure Storage Emulator, which must be downloaded separately, allows you to test code and storage without an Azure storage account. Apart from managing storage accounts created on Azure, Azure Storage Explorer can connect to storage accounts hosted on sovereign clouds and Azure Stack.

Requirements and installing Azure Storage Explorer

Azure Storage Explorer requires minimal resources on the desktop and can be installed on Windows client, Windows Server, Mac and Linux platforms. Download the tool and follow the onscreen steps to install it; the process is quite simple. When you launch the tool for the first time, it asks you to connect to an Azure subscription, but you can cancel and add an Azure subscription at a later stage if you first want to explore the options available with the tool. For example, you might want to modify the proxy settings before a connection to Azure subscriptions can be established.

Configuring proxy settings

Azure Storage Explorer requires a working internet connection, and many production environments sit behind a proxy server. In that case, you’ll need to modify the proxy settings in Azure Storage Explorer by navigating to the Edit menu and then clicking Configure Proxy, as shown in Figure A below:

Azure Storage Explorer proxy server settings
Figure A. Launching the proxy server settings page

When you click on Configure Proxy, the tool will show you the Proxy Settings page as shown in Figure B below. From there, you can enter the proxy settings and then click on OK to save the settings.

Proxy setting configuration
Figure B. Configuring proxy settings in Azure Storage Explorer

When you configure proxy settings in Azure Storage Explorer, the tool doesn’t check whether the settings are correct. It just saves the settings. If you run into any connection issues, please make sure that the proxy settings are correct and that you have a reliable internet connection.

How to use Azure Storage Explorer

If you’ve worked with third-party Azure storage management tools, you’re already familiar with storage operational tasks, such as uploading VHDX files and working with blob containers, tables and queues. Azure Storage Explorer provides the same functionality, but the interface might differ from the third-party storage management tools you’ve worked with thus far.

The first step is to connect to an Azure account by clicking on the Manage Accounts icon and then clicking Add an Account. Once it is connected, Azure Storage Explorer will retrieve all the subscriptions associated with the Azure account. If you need to work with storage accounts in an Azure subscription, first select the subscription, and then click Apply. When you click Apply, Azure Storage Explorer retrieves all of the storage accounts hosted on the Azure subscription. Once storage accounts have been retrieved, you can work with blob containers, file shares, queues and tables from the left navigation pane as shown in Figure C below:

Storage accounts in Azure Storage Explorer
Figure C. Working with storage accounts in Azure Storage Explorer

If you have several Azure storage accounts, you can search for a particular storage account by typing in the search box at the top of the left pane, as shown in Figure C above. Azure Storage Explorer provides easy management of blob containers. You can perform most blob container-related tasks, including creating a blob, setting up public access for a blob and managing access policies for blobs. By default, a blob container has public access disabled. To enable public access for a blob container, right-click the blob container in the left navigation pane and then click Set Public Access Level… to display the Set Container Public Access Level page shown in Figure D below.

Blob container public access level
Figure D. Setting public access level for a blob container

Next Steps

Learn more about different Azure storage types

Navigate expanded Microsoft Azure features

Enhance cloud security with Azure Security Center

Azure DevTest Labs offers substitute for on-premises testing

Azure DevTest Labs brings a consistent development and test environment to cost-conscious enterprises. The service also gives admins the chance to explore Azure’s capabilities and determine other ways the cloud can assist the business.

A DevTest Lab in Azure puts a virtual machine in the cloud to verify developer code before it moves to the organization’s test environment. This practice unveils initial bugs before operations starts an assessment. DevTest Labs gives organizations a way to investigate the Microsoft cloud platform and its compute services, without incurring a large monthly cost. Look at Azure DevTest Labs as a way to augment internal tests — not replace them.

Part one of this two-part series explains Azure DevTest Labs and how to configure a VM for lab use. In part two, we examine the benefits of a testing cloud-based environment.

DevTest Labs offers a preliminary look at code behavior

After you create a lab with a server VM, connect to it using the same tools as you would in an on-premises environment: Visual Studio or Remote Desktop for Windows VMs and Secure Socket Shell (SSH) for Linux VMs. Development teams can push the code to an internal repository connected to the Azure environment and then deploy it to the DevTest Lab VM.

Use the DevTest Lab VM to check what happens to the code:

  • when no modifications have been made to infrastructure; and
  • if the application runs on different versions of an OS.

Windows Server VMs in Azure provide uniformity

An organization’s test environment often has stipulations, such as a requirement to mirror the production Windows Servers through the last patch cycle, which can hinder the development process. Azure DevTest Labs uncovers how applications behave on the latest Windows Server version. This prepares IT for any issues before the internal testing environment moves to that server OS version. IT also can use DevTest Labs to check new features of an OS before they roll it out to production.

DevTest Labs assists admins who want to study for a certification and need a home lab environment to practice and study. But building a home lab is expensive when you consider costs for storage, server hardware and software. Virtualized labs with VMware Workstation or Client Hyper-V reduce this cost, but it’s still expensive to buy a powerful laptop that can handle all the new technologies in a server OS.

Admins can stand up Windows Server 2016 in DevTest Labs to understand the capabilities of the OS and set up an automatic shutdown time. This gives employees access to capable systems for after-hours studying, and the business only pays for the time the lab runs.

Azure DevTest Labs doesn’t replace on-premises testing

Many organizations have replica environments that mirror production sites, which ensures any fixes and changes will function properly when they go live. Azure DevTest Labs should not replace an on-premises test environment.

Steps to produce an Azure DevTest Lab

Implement DevTest Labs to prevent testing delays; work can begin in DevTest Labs while the team refines what it needs from operations. And because Azure is built to scale, users can add resources with a few clicks. An on-premises environment does not have the same flexibility to grow on demand, which can slow the code development process.

Production apps don’t have to stay in Azure

Azure DevTest Labs also lets teams validate applications or configurations and then deploy them into the company’s data center. When the test phase of development passes, shut down the DevTest Lab until it is needed again.

In addition, IT teams can turn to DevTest Labs to showcase how the business can use Azure cloud. If the company wants to work with a German organization, for example, it must contend with heavy regulations about how data is handled and who owns it. Rather than build a data center in Germany, which could be cost-prohibitive, move some apps into an Azure region that covers the European Union or Germany. This is much less expensive because the business only pays for what it uses.

Still, regulatory issues can override all the good reasons to use Azure. If you’re unsure which regulatory requirements apply to your organization, review the compliance offerings that Microsoft publishes for Azure. You also can examine Microsoft’s audit reports to perform a risk assessment and see if Azure meets your company’s compliance needs.

Microsoft offers a 30-day free trial of DevTest Labs. It’s a great resource for development and testing, and provides an inexpensive learning environment for administrators who want to explore current and upcoming technologies.

Next Steps

Don’t let a test VM affect the production environment

Explore OpenStack’s capabilities with a virtual home lab

Use a Hyper-V lab for certification studies

Expect service providers to ease Azure Stack deployment

Microsoft is about to release Azure Stack, after two years and many bumps in the road. Despite the hoopla, it’s unclear just how many customers will be there to warmly greet the new arrival.

Microsoft has said that Azure Stack offers both infrastructure as a service (IaaS) and platform-as-a-service capabilities. As such, it brings the perks of the cloud service down into the data center. This might tempt businesses long frustrated with tangled, difficult-to-manage multicloud setups, said Mike Dorosh, an analyst at Gartner.

Dorosh said that, given the product’s complex licensing terms, he doubts many IT shops would opt for an Azure Stack deployment directly from a Microsoft hardware partner — at least initially. Dell EMC, Hewlett Packard Enterprise, Lenovo, Avanade and Huawei offer Azure Stack hardware bundles.

Microsoft designed Azure Stack deployment to be a simple process. Jeffrey Snover, a Microsoft technical fellow, said the installation should be quick and its complexity largely obscured by Microsoft and the hardware vendor. But Dorosh also said he predicts it will test businesses as they attempt to migrate and refactor existing apps and develop and deploy new apps onto Azure Stack.

“Then, the challenge becomes: You don’t have the skills and the tools and the knowledge or the staff to work it,” Dorosh said.

Other factors will likely slow initial adoption. Businesses that have recently invested in a private cloud or their infrastructure won’t replace these new investments with Azure Stack, Dorosh said. He also expects to hear concern about licensing and the speed of Microsoft’s updates.

Questions linger on Microsoft licensing

Azure Stack could confuse customers with its different fee models. Microsoft uses a consumption model for five Azure Stack services: Base virtual machine; Windows Server virtual machine; Azure Blob Storage; Azure Table and Queue Storage; and Azure App Service. Businesses can use existing licenses to reduce costs.

A company can subscribe to Azure Stack on a base VM charge of $0.008 per virtual CPU per hour or $6 per vCPU per month. Without a license, a Windows Server VM will cost $0.046 per vCPU per hour or $34 per vCPU per month. There are also options for when there is no public internet connection, called disconnected, and fixed-fee models. An IaaS package costs $144 per core per year, and adding an app service brings it to $400 per core per year.

Dorosh said he expects businesses to get better terms from Microsoft on Azure Stack deployment than with similar offerings, such as Azure Pack, because it will be bundled into the product. However, Microsoft must also streamline its licensing terms to avoid confusion. For example, if a service provider has a SQL database with multiple SQL licenses, it will need to translate those licenses to the Azure Stack model.

“[Microsoft used to say] it depends on where you bought it and which programs you bought it under,” Dorosh said. “But now, [customers] want to know, ‘Can I move my SQL license or not? Yes or no?'”

Customers must also make frequent updates to Azure Stack to continue to receive support. A company must apply a Microsoft update within six months, but service providers want Microsoft to push adopters to stay within two months of the regular patches, Dorosh said. Falling six months behind would leave both service providers and Azure Stack users at a disadvantage.

“The further you fall behind, the less Azure you are,” Dorosh said. “You’re no longer part of the Azure cloud family — you’re Azure-like.”

More Azure Stack coverage

  • One size won’t fit all for Azure Stack debut: Initially, Azure Stack will only be offered as a one-rack deployment. Microsoft said it might extend to multirack deployments by early 2018. For now, the one-rack deployment could dampen interest in Azure Stack at larger businesses that don’t want to extend hosting into the Azure public cloud.
  • Analysts say Azure Stack will outpace VMware on Amazon Web Services: Both Azure Stack and VMware Cloud on AWS are expected to hit the hybrid cloud technology market in September. Even though VMware Cloud on AWS targets the world’s largest cloud service provider, analysts expect Azure Stack to sell better. A leading reason is that many Azure Stack customers will be migrating data with one vendor — from a Microsoft-operated data center to the Azure public cloud — while VMware Cloud on AWS requires you to use technologies from different vendors.
  • Azure Stack architect addresses delay: When Microsoft first announced Azure Stack in May 2015, the plan was to release it by the end of 2016. The company then pushed the release to September 2017. Snover, the Azure Stack architect, told SearchWindowsServer in June that the code was not ready for the original launch date. “As much as possible, we are trying to be Azure-consistent,” he said, and the effort to convert Azure to work on premises required more time.
  • Azure Stack isn’t a steppingstone to public cloud: Microsoft anticipates its Azure Stack customers will be businesses that have a long-term plan for hybrid cloud deployment. Although you could use Azure Stack as a “migration path to the cloud,” as Julia White, Microsoft corporate vice president for Azure, put it, the software provider’s internal research suggests that won’t be the case: Eighty-four percent of customers have a hybrid cloud strategy, and 91% of them look at hybrid cloud computing as a long-term workflow. Microsoft expects companies with data sovereignty issues will look to Azure Stack as a way to get cloud computing while keeping data in-house.

Preserve your AD organizational unit with these commands

Active Directory gives users access to resources across the network. If a piece of this directory service gets deleted inadvertently during maintenance work, however, it can bring the company to its knees.

AD organizational units (OU) arrange systems, users and other AD OUs into a specific order. But the accidental removal of an AD organizational unit can cause a massive disruption. For example, if a sysadmin deletes the OU that holds certain user accounts, those workers can’t log in to their PCs. Until an administrator recovers the OU, productivity will suffer. Even though Active Directory has a Recycle Bin, a complete recovery can take several hours in a large organization.

A PowerShell script can quickly check that each AD organizational unit is protected.

Determine the protection status for one unit

To check the protection setting of a single AD organizational unit — for example, the ComputersOU unit — use the Identity parameter:

Get-ADOrganizationalUnit -Identity "OU=ComputersOU,DC=TechTarget,DC=Com" -Properties ProtectedFromAccidentalDeletion

The ProtectedFromAccidentalDeletion property will return a FALSE value if the AD organizational unit is not protected.
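For a quick single-domain check before running the full multi-domain report below, the same property can drive a one-liner that lists every unprotected OU:

```powershell
# List all OUs in the current domain that lack deletion protection
Get-ADOrganizationalUnit -Filter * -Properties ProtectedFromAccidentalDeletion |
    Where-Object { $_.ProtectedFromAccidentalDeletion -eq $false } |
    Select-Object Name, DistinguishedName
```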

Are all AD organizational units protected?

To identify the protection status of OUs in all AD domains, use the PowerShell script below. It collects all OUs, checks each one’s protection setting and then saves the results to a CSV file.

$ReportFile = "C:\Temp\OUProtectionStatus.CSV"

Remove-Item $ReportFile -ErrorAction SilentlyContinue

$ThisStr = "OU Name,OU Path,In AD Domain,Final Status"

Add-Content $ReportFile $ThisStr

$DomainList = "C:\Temp\DomainList.TXT"

ForEach ($DomName in Get-Content $DomainList)

{

    $AllOUs = Get-ADOrganizationalUnit -Server $DomName -Filter * -Properties ProtectedFromAccidentalDeletion | Where-Object {$_.ProtectedFromAccidentalDeletion -eq $false}

    $TotOUNow = $AllOUs.Count

    IF ($TotOUNow -ne 0)

    {

        ForEach ($Item in $AllOUs)

        {

            $FinalSTR = '"' + $Item.Name + '","' + $Item.DistinguishedName + '","' + $DomName + '",Not Ok'

            Add-Content $ReportFile $FinalSTR

        }

    }

}

The script generates a report file with the OU name, OU distinguished path, OU domain name and the OU protection-setting status.

Protection status results
Figure 1. A PowerShell script can check the protection settings for all AD organizational units and produce a CSV file with the results.

The script’s results indicate that the protection setting for UsersOU, ComputersOU, ServersOU and domain controllers is not enabled. The script collects the OU distinguished name to make it easier to locate the AD organizational unit and then enable the protection setting.

To turn on the protection for one or all AD organizational units in a domain, use the Set-ADOrganizationalUnit cmdlet.
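The cmdlet accepts the ProtectedFromAccidentalDeletion parameter directly, so the fix can be applied to a single OU or piped across every unprotected OU in the domain. The distinguished name below reuses the sample from earlier in this article:

```powershell
# Protect a single OU by its distinguished name
Set-ADOrganizationalUnit -Identity "OU=ComputersOU,DC=TechTarget,DC=Com" -ProtectedFromAccidentalDeletion $true

# Or sweep the current domain and protect every unprotected OU
Get-ADOrganizationalUnit -Filter * -Properties ProtectedFromAccidentalDeletion |
    Where-Object { -not $_.ProtectedFromAccidentalDeletion } |
    Set-ADOrganizationalUnit -ProtectedFromAccidentalDeletion $true
```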

Stand up infrastructure on a budget with Azure DevTest Labs

Many businesses expect IT teams to do more without giving them more money — and, sometimes, cutting an already small budget. But new projects mean test and development — an expensive endeavor in the data center. One way to alleviate this financial strain is to move those test and development workloads into the cloud.

Running a test environment in the data center is expensive, with high costs connected to hardware, software, power and cooling — not to mention all the time and effort IT spends keeping everything running and properly updated. Instead, administrators can turn to the cloud to develop and test applications in Microsoft’s Azure DevTest Labs. This enables companies to trade in hardware expenses and switch to a pay-per-use model. Other features in the service, such as the auto shutdown for VMs, can further control costs.
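As a rough sketch of the pay-per-use math (the hourly rate below is an assumption for illustration, not a published Azure price), auto shutdown is what keeps the bill proportional to actual use:

```powershell
# Hypothetical cost estimate for one lab VM on a 9-hour weekday schedule
$HourlyRate  = 0.10   # assumed $/hour for the chosen VM size
$HoursPerDay = 9      # 8 a.m. to 5 p.m., enforced by auto shutdown
$WorkDays    = 21     # approximate working days in a month

$MonthlyCost = $HourlyRate * $HoursPerDay * $WorkDays
'Estimated monthly cost: ${0:N2}' -f $MonthlyCost   # Estimated monthly cost: $18.90
```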

In this first part of a two-part series, we explain the merits of using a test bed in Azure and configuring a VM for lab use. In part two, we explore ways to manage the VM in DevTest Labs, as well as benefits gained when a workload moves out of the data center.

What is Azure DevTest Labs?

Many businesses maintain an on-premises test environment that emulates the production environment, which lets development teams test code before it is pushed into production. This also enables other teams within the app dev team to perform usability and integration testing.

But a test environment can have slight variations from the production side. It might not have key updates or patches, or it could run on different hardware or software. These disparities cause the application to fail when it hits the production environment. Azure DevTest Labs address these issues, enabling admins to build an infrastructure that is disposable and adaptable. If the test environment requires drastic changes, the team can remove it and build a new one with minimal effort. In contrast, a typical on-premises production setting generally cannot be offline for very long; the investment in hardware, software and other infrastructure requires lengthy deliberation before IT makes any changes.

The team can turn off DevTest Labs when the test period ends so that resources go away, and there are no costs until the service is needed again.

Creating another lab scenario to test a new feature removes the effort to twist and tweak an existing test environment to bring necessary components online, which can cause problems with other testing scenarios. An on-premises test environment requires sizable expense and effort to maintain and keep in sync with production. In contrast, admins can quickly configure a test setting in Azure DevTest Labs.

What are the benefits of Azure DevTest Labs?

The most noticeable benefits to DevTest Labs include:

  • Pay as you go pricing: The lab only incurs cost when a VM runs. If the VM is deallocated, there are no charges.
  • Specified shutdown: IT staff can configure DevTest Labs to shut down at a certain time and automatically disconnect users. Turning the service off — for example, shutting it down between 5 p.m. and 8 a.m. — saves money.
  • Role-based access: IT assigns certain access rights within the lab to ensure specific users only have access to the items they need.

How do I get started with Azure DevTest Labs?

To set up Azure DevTest Labs, you’ll need an Azure subscription. Sign up for a 30-day trial from the Microsoft Azure site. Go to the Azure Resource Management portal, and add the DevTest Labs configuration from the Azure Marketplace with these steps:

  • Select the New button at the top of the left column in the Azure portal. This will change the navigation pane to list available categories of services and the main blade to a blank screen. As you make selections, this will populate with related information.
  • In the search box, enter DevTest Labs, and press Enter.
  • In the blade that displays the search results, click on DevTest Labs. This will display more information about DevTest Labs and a Create button.
Install Azure DevTest Labs
Figure 1. Find the option to add the Azure DevTest Labs to your subscription from the Azure Marketplace.

Click the Create button. Azure will prompt you to enter configuration settings for the instance, such as:

  • The name of the lab: The text box shows a green checkmark if the value is acceptable.
  • The Azure subscription to use
  • The region where the DevTest Lab will reside: Pick a region closest to user(s) for better performance.
  • If auto shutdown should be enabled: This is enabled by default; all VMs in the lab will shut down at a specified time.

Enter values for these options; items marked with a star are required. Click Create, and Azure will provision the DevTest Labs instance. This typically takes a few minutes to gather the background services and objects needed to build the lab. Click the bell icon in the header area of the Azure portal screen to see the progress for this deployment.

DevTest Labs provisioning
Figure 2. Click the bell-shaped icon in the Azure portal to check the provisioning progress of the DevTest Labs instance.

Once Azure provisions the lab, you can add objects and resources to it. Each lab gets a resource group within Azure to keep all the items packaged. The resource group takes the name of the lab with some random characters appended, which keeps the resource group name unique and ties its resources to management through DevTest Labs.

To find the lab, select the option for DevTest Labs from the left navigation pane. For new users, it might be listed under More Services at the bottom. When the lab is located, scroll down to the Developer Tools section, and click the star icon next to the service name to pin DevTest Labs to the main navigation list.

Click DevTest Labs in the navigation list to open the DevTest Labs blade and list all the labs. Click on the name of the new lab: techTarget — for the purposes of this article.

Azure DevTest Labs environment
Figure 3. After Azure provisions the lab, the administrator can add compute and other resources.

This opens the blade for that lab. The administrator can populate the lab with compute and other resources. New users should check the Getting Started section to familiarize themselves with the service.

What components can we put in the lab?

DevTest Labs creates sandbox environments to test applications in development or to see how a feature in Windows Server performs before moving it to a production environment.

Administrators can add components to each lab, including:

  • VMs: Azure uses VMs from the Marketplace or uploaded images.
  • Claimable VMs: The IT department provides a pool of VMs for lab users to select.
  • Data disks: You can attach these disks to VMs to store data within a lab.
  • Formulas: Reusable code and automation objects are available to objects within the lab.
  • Secrets: These are values, such as passwords or keys, the lab needs. These reside in a secure key vault within the Azure subscription.

Administrators can modify configuration values and policies related to the lab, change the auto startup and auto shutdown times and specify machine sizes that users can create. To find more information on these items, select My virtual machines under MY LAB in the navigation list. Click Add at the top of the blade to insert a VM.

Add a new VM
Figure 4. Create a new VM with the Add button in the lab.

For the purposes of this article, select Windows Server 2016 Datacenter as the VM base image. The next blade shows the following items that are required to build the VM:

  • VM name: A unique name for the VM.
  • Username: The admin username for this VM — it cannot be administrator.
  • Disk type: Options include solid-state drive or hard disk drive — SSD provides better performance, but will raise the cost of operations slightly.
  • VM size: The number of CPU cores and amount of RAM — after selecting the one you want, click Select.
Configure the lab VM
Figure 5. Make selections to build the VM for the lab. The blades show the options and prices based on the size of the VM.

You can also select artifacts to install when the VM is created, and configure advanced options for the resource. Find more information about artifacts at Microsoft’s Azure documentation site.

For labs with more complex needs, advanced settings let administrators adjust the VM’s networking settings and set the VM as claimable.

When you finish the lab VM configuration, click Create. Azure will do its work, which will take some time to complete.

In the next installment of this article, we will look at VM management in Azure DevTest Labs and different testing scenarios within the service.

Next Steps

A Hyper-V lab can help with certification studies

Explore OpenStack’s capabilities with a virtual home lab

Keep a test VM from affecting the production environment


Automate Active Directory jobs with PowerShell scripts

Most IT professionals have some experience with Active Directory, whether they use it to create new users, reset passwords or generate child domains. Tools like Active Directory Users and Computers and Active Directory Administrative Center get the job done, but they’re based on a GUI and require a lot of manual manipulation.

Active Directory is suitable for automation — it’s an area where admins make constant, and often repetitive modifications, such as creating users, computers and organizational units. With the right tools in place, you can use PowerShell to automate Active Directory tasks and eliminate a lot of these recurring steps.

Install the AD module

There are a few steps to take before you can automate Active Directory. First, install the Remote Server Administration Tools package, which is specific to your OS version.

After the installation, enable the AD module. Go to Programs and Features in the Control Panel and follow this path: Remote Server Administration Tools > Role Administration Tools > AD DS and AD LDS Tools > Active Directory Module for Windows PowerShell.
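On Windows Server, the same module can be installed without a trip through the Control Panel, from an elevated PowerShell session (the feature name below applies to Windows Server; client editions use the RSAT package as described above):

```powershell
# Windows Server: add the RSAT AD PowerShell feature
Install-WindowsFeature -Name RSAT-AD-PowerShell

# Load the module (recent PowerShell versions auto-load it on first use)
Import-Module ActiveDirectory
```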

Once the AD module is enabled, open the PowerShell console and use the Get-Command cmdlet to check that every command is available to you.

PS> Get-Command -Module ActiveDirectory

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Cmdlet          Add-ADCentralAccessPolicyMember                    1.0.0.0    ActiveDirectory
Cmdlet          Add-ADComputerServiceAccount                       1.0.0.0    ActiveDirectory
Cmdlet          Add-ADDomainControllerPasswordReplicationPolicy    1.0.0.0    ActiveDirectory
Cmdlet          Add-ADFineGrainedPasswordPolicySubject             1.0.0.0    ActiveDirectory
….


Next, run the Update-Help command to download the latest documentation for each PowerShell command. Microsoft regularly updates the comprehensive PowerShell help system. Running the Update-Help command is a worthwhile step for administrators who are new to PowerShell, especially when exploring a new module.
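For example, to refresh the help content for just this module and then browse worked examples for a cmdlet:

```powershell
# Download updated help for the AD module only (run from an elevated session)
Update-Help -Module ActiveDirectory -Force

# Display usage examples for a specific cmdlet
Get-Help New-ADUser -Examples
```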

Now that the AD module is ready to go, there are a few common ways to automate Active Directory jobs.

How to find users

To adjust settings for a user, you need to find the user. There are several ways to do this in Active Directory, but the most common is with the Get-AdUser cmdlet. This cmdlet enables you to search based either on the name of the user or via a filter that locates several users at once. The following example uses a filter to find users with the first name Joe:

PS> Get-AdUser -Filter 'givenName -eq "Joe"'

If you know the user’s name, you could use the Identity parameter:

PS> Get-AdUser -Identity 'jjones'

Create new users

The New-AdUser cmdlet creates new users and lets you specify the majority of the attributes. For example, if you want to create a new user called David Jones with a password of p@$$w0rd10, use PowerShell’s splatting feature to package several parameters to pass them to the New-AdUser cmdlet.

$NewUserParameters = @{
    'GivenName' = 'David'
    'Surname' = 'Jones'
    'Name' = 'djones'
    'AccountPassword' = (ConvertTo-SecureString 'p@$$w0rd10' -AsPlainText -Force)
    'ChangePasswordAtLogon' = $true
}

New-AdUser @NewUserParameters

Add users to groups

Another common administrative task is to add new users to groups. This is easily done with the Add-AdGroupMember cmdlet. The example below adds the user David Jones to an Active Directory group called Accounting:

Add-AdGroupMember -Identity 'Accounting' -Members 'djones'

Automate creation of users

We can combine these commands when the human resources department provides a CSV file that lists new users to create in Active Directory. The CSV file might look like this:

"FirstName","LastName","UserName"
"Adam","Bertram","abertram"
"Joe","Jones","jjones"

To create these users, write a script that invokes the New-AdUser command for each user in the CSV file. Use the built-in Import-Csv command and a foreach loop in PowerShell to go through the file and give users the same password.

Import-Csv -Path C:\Employees.csv | foreach {
    $NewUserParameters = @{
        'GivenName' = $_.FirstName
        'Surname' = $_.LastName
        'Name' = $_.UserName
        'AccountPassword' = (ConvertTo-SecureString 'p@$$w0rd10' -AsPlainText -Force)
    }

    New-AdUser @NewUserParameters
}

These are a few basic examples of how an admin can automate Active Directory tasks with PowerShell. The Active Directory PowerShell module has many commands that enable admins to execute more complex jobs, such as permission delegation for groups. 

Next Steps

Use PowerShell to assign Office 365 licenses

Top PowerShell commands for admins

Test PowerShell scripts to code more efficiently
