Tag Archives: administrators

Plan your Exchange migration to Office 365 with confidence

Introduction

Choosing an Exchange migration to Office 365 is just the beginning of this process for administrators. Migrating all the content, troubleshooting the issues and then getting the settings just right in a new system can be overwhelming, especially with tricky legacy archives.

Even though it might appear that the Exchange migration to Office 365 is happening everywhere, transitioning to the cloud is not a black and white choice for every organization. On-premises servers still get the job done; however, Exchange Online offers a constant flow of new features and costs less in some cases. Administrators should also consider a hybrid deployment to get the benefits of both platforms.

Once you have determined the right configuration, you will have to choose how to transfer archived emails and public folders and which tools to use. Beyond relocating mailboxes, administrators have to keep content accessible and make security a priority during an Exchange migration to Office 365.

This guide simplifies the decision-making process and steers administrators away from common issues. More advanced tutorials share the reasons to keep certain data on premises and the tricks to set up the cloud service for optimal results.

1. Before the move

Plan your Exchange migration

Prepare for your move from Exchange Server to the cloud by understanding your deployment options and tools to smooth out any bumps in the road.

2. After the move

Working with Exchange Online

After you’ve made the switch to Office 365’s hosted email platform, these tools and practices will have your organization taking advantage of the new platform’s perks without delay.

3. Glossary

Definitions related to Exchange Server migration

Understand the terms related to moving Exchange mailboxes.

Kubernetes in Azure eases container deployment duties


With the growing popularity of containers in the enterprise, administrators require assistance to deploy and manage these workloads, particularly in the cloud.

When you consider the growing prevalence of Linux and containers both in Windows Server and in the Azure platform, it makes sense for administrators to get more familiar with how to work with Kubernetes in Azure.

Containers help developers streamline the coding process, while orchestrators give the IT staff a tool to deploy these applications in a cluster. One of the more popular tools, Kubernetes, automates the process of configuring container applications within and on top of Linux across public, private and hybrid clouds.

For companies that prefer to use Azure for container deployments, Microsoft developed the Azure Kubernetes Service (AKS), a hosted control plane, to give administrators an orchestration and cluster management tool for its cloud platform.

Why containers and why Kubernetes?

There are many advantages to containers. Because they share an operating system, containers are lighter than virtual machines (VMs). Patching containers is less onerous than it is for VMs; the administrator just swaps out the base image.

On the development side, containers are more convenient. Containers are not reliant on underlying infrastructure and file systems, so they can move from operating system to operating system without issue.

Kubernetes makes working with containers easier. Most organizations choose containers because they want to virtualize applications and produce them quickly, integrate them with continuous delivery and DevOps style work, and provide them isolation and security from each other.

For many people, Kubernetes represents a container platform where they can run apps, but it can do more than that. Kubernetes is a management environment that handles compute, networking and storage for containers.

Kubernetes acts as much like a PaaS provider as an IaaS provider, and it also deftly handles moving containers across different platforms. Kubernetes organizes clusters of Linux hosts that run containers, turns them off and on, moves them around hosts, configures them via declarative statements and automates provisioning.

Using Kubernetes in Azure

Clusters are sets of VMs designed to run containerized applications. A cluster consists of a master VM and agent nodes, the VMs that host the containers.

AKS limits the administrative workload that would be required to run this type of cluster on premises. AKS shares the container workload across the nodes in the cluster and redistributes resources when adding or removing nodes. Azure automatically upgrades and patches AKS.

Microsoft calls AKS self-healing, which means the platform will recover from infrastructure problems automatically. As with other cloud services, Microsoft charges only for the agent pool nodes that run.

Starting up Kubernetes in Azure

The simplest way to provision a new instance of an AKS cluster is to use Azure Cloud Shell, a browser-based command-line environment for working with Azure services and resources.

Azure Cloud Shell works like the Azure CLI, except it’s updated automatically and is available from a web browser. There are many service provider plug-ins enabled by default in the shell.

[Figure: Starting a PowerShell session in the Azure Cloud Shell]

Open Azure Cloud Shell at shell.azure.com. Choose PowerShell and sign in to the account with your Azure subscription. When the session starts, complete the provider registration with these commands:

az provider register -n Microsoft.Network
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Compute
az provider register -n Microsoft.ContainerService


How to create a Kubernetes cluster on Azure

Next, create a resource group, which will contain the Azure resources in the AKS cluster.

az group create --name AKSCluster --location centralus

Use the following command to create a cluster named AKSCluster1 that will live in the AKSCluster resource group with two associated nodes:

az aks create --resource-group AKSCluster --name AKSCluster1 --node-count 2 --generate-ssh-keys

Next, to use the Kubernetes command-line tool kubectl to control the cluster, get the necessary credentials:

az aks get-credentials --resource-group AKSCluster --name AKSCluster1

Next, use kubectl to list your nodes:

kubectl get nodes
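
Because AKS redistributes the container workload when nodes are added or removed, administrators can also resize the cluster from the same Cloud Shell session. The node count below is only an example.

az aks scale --resource-group AKSCluster --name AKSCluster1 --node-count 3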

Put the cluster into production with a manifest file

After setting up the cluster, load the applications. You’ll need a manifest file that dictates the cluster’s runtime configuration, the containers to run on the cluster and the services to use.

Developers can create this manifest file along with the appropriate container images and provide them to your operations team, who will import them into Kubernetes or clone them from GitHub and point the kubectl utility to the relevant manifest.
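
As a rough sketch of what that handoff looks like, the following PowerShell snippet writes a minimal deployment-and-service manifest to a file and points kubectl at it. The application name, image and port are hypothetical placeholders, not part of Microsoft's tutorial.

# Minimal sketch of a manifest (hypothetical names) applied from the same
# Cloud Shell PowerShell session used earlier.
@"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: demo-web
        image: nginx:1.15
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-web
spec:
  type: LoadBalancer
  selector:
    app: demo-web
  ports:
  - port: 80
"@ | Set-Content -Path ./demo-app.yaml

# Load the manifest into the cluster, then check the rollout and the
# public IP address assigned to the service.
kubectl apply -f ./demo-app.yaml
kubectl get deployment demo-web
kubectl get service demo-web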

To get more familiar with Kubernetes in Azure, Microsoft offers a tutorial to build a web app that lets people vote for either cats or dogs. The app runs on a couple of container images with a front-end service.

IDC, Cisco survey assesses future IT staffing needs

Network engineers, architects and administrators will be among the most critical job positions to fill if enterprises are to meet their digital transformation goals, according to an IDC survey tracking future IT staffing trends.

 The survey, sponsored by Cisco, zeroed in on the top 10 technology trends shaping IT hiring and 20 specific roles IT professionals should consider in terms of expanding their skills and training. IDC surveyed global IT hiring managers and examined an estimated 2 million IT job postings to assess current and future IT staffing needs.

The survey results showed digital transformation is increasing demand for skills in a number of key technology areas, driven by the growing number of network-connected devices, the adoption of cloud services and the rise in security threats.

Intersections provide hot jobs

IDC classified the intersections of where hot technologies and jobs meet as “significant IT opportunities” for current and future IT staffing, said Mark Leary, directing analyst at Cisco Services.

“From computing and networking resources to systems software resources, lots of the hot jobs function at these intersections and take advantage of automation, AI and machine learning.” Rather than eliminating IT staff jobs, many of these roles take advantage of those same technologies, he added.

Organizations are preparing for future IT staffing by filling vacant IT positions from within rather than hiring from outside the company, then sending staff to training, if needed, according to the survey.

But technology workers still should investigate where the biggest challenges exist and determine where they may be most valued, Leary said.

“Quite frankly, IT people have to have greater understanding of the business processes and of the innovation that’s going on within the lines of business and have much more of a customer focus.”

The internet of things illustrates the complexity of emerging digital systems. Any IoT implementation requires from 10 to 12 major technologies to come together successfully, and the IT organization is seen as the place where that expertise lies, Leary said.

IDC’s research found organizations place a high value on training and certifications. IDC found that 70% of IT leaders believe certifications are an indicator of a candidate’s qualifications and 82% of digital transformation executives believe certifications speed innovation and new ways to support the business.

Network influences future IT staffing

IDC’s results also reflect the changes going on within enterprise networking.

Digital transformation is raising the bar on networking staffs, specifically because it requires enterprises to focus on newer technologies, Leary said. The point of developing skills in network programming, for example, is to work with the capabilities of automation tools so they can access analytics and big data.

In 2015, only one in 15 Cisco-certified workers viewed network programming as critical to his or her job. By 2017, that figure had risen to one in four. “This isn’t something that’s evolutionary; it’s revolutionary,” Leary said.

While the traditional measure of success was to make sure the network was up and running with 99.999% availability, that goal is being replaced by network readiness, Leary said. “Now you need to know if your network is ready to absorb new applications or that new video stream or those new customers we just let on the network.”

Leary is involved with making sure Cisco training and certifications are relevant and matched to jobs and organizational needs, he said. “We’ve been through a series of enhancements for the network programmability training we offer, and we continually add things to it,” he added. Cisco also monitors customers to make sure they’re learning about the right technologies and tools rather than just deploying technologies faster.

To meet the new networking demands, Cisco is changing its CCNA, CCNP and CCIE certifications in two different ways, Leary said. “We’ve developed a lot of new content that focuses on cybersecurity, network programming, cloud interactions and such because the person who is working in networking is doing that,” he said. The other emphasis is to make sure networking staff understands the language of other groups, such as software developers.

Meltdown and Spectre bugs dominate January Patch Tuesday

Administrators have their work cut out for them on multiple fronts after a serious security flaw surfaced that affects most operating systems and devices.

The Meltdown and Spectre vulnerabilities affect most modern CPUs — from Intel-based server systems to ARM processors in mobile phones — and could allow an attacker to pull sensitive data from memory. Microsoft mitigated the flaws with several out-of-band patches last week, which have been folded into the January Patch Tuesday cumulative updates. Full protection from the exploits will require a more concerted effort from administrators, however.

Researchers only recently discovered the flaws that have existed for approximately 20 years. The Meltdown (CVE-2017-5754) and Spectre (CVE-2017-5753 and CVE-2017-5715) exploits target the CPU’s pre-fetch functionality that anticipates the feature or code the user might use, which puts relevant data and instructions into memory. A CPU exploit written in JavaScript from a malicious website could pull sensitive information from the memory of an unpatched system.

“You could leak cookies, session keys, credentials — information like that,” said Jimmy Graham, director of product management for Qualys Inc., based in Redwood City, Calif.

In other January Patch Tuesday releases, Microsoft updated the Edge and Internet Explorer browsers to reduce the threat from Meltdown and Spectre attacks. Aside from these CPU-related fixes, Microsoft issued patches for 56 other vulnerabilities with 16 rated as critical, including a zero-day exploit in Microsoft Office (CVE-2018-0802).

Microsoft’s attempt to address the CPU exploits had an adverse effect on some AMD systems, which could not boot after IT applied the patches. This issue prompted the company to pull those fixes until it produces a more reliable update.

Most major cloud providers claim they have closed this security gap, but administrators of on-premises systems will have to complete several deployment stages to fully protect their systems.

“This is a nasty one,” said Harjit Dhaliwal, a senior systems administrator in the higher education sector who handles patching for his environment. “This is not one of your normal vulnerabilities where you just have a patch and you’re done. Fixing this involves a Microsoft patch, registry entries and firmware updates.”

Administrators must ensure their antivirus product has been updated and has set the proper registry key; otherwise, they cannot apply the Meltdown and Spectre patches. Windows Server systems require a separate registry change to enable the protections from Microsoft’s Meltdown and Spectre patches. The IT staff must identify the devices under their purview and collect that information to gather any firmware updates from the vendor. Firmware updates will correct two exploits related to Spectre. Microsoft plugged the Meltdown vulnerability with code changes to the kernel.
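
For reference, a PowerShell sketch of those registry changes follows. The values reflect Microsoft's published guidance at the time, so verify them against the current advisory before applying anything in production.

# Anti-virus compatibility key: signals that the installed AV product is
# compatible, which allows Windows Update to offer the OS patch.
$qc = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat"
New-Item -Path $qc -Force | Out-Null
New-ItemProperty -Path $qc -Name "cadca5fe-87d3-4b96-b7fb-a231484277cc" -PropertyType DWord -Value 0 -Force | Out-Null

# Windows Server only: opt in to the Meltdown/Spectre protections after
# the January updates and any firmware updates are installed.
$mm = "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
New-ItemProperty -Path $mm -Name "FeatureSettingsOverride" -PropertyType DWord -Value 0 -Force | Out-Null
New-ItemProperty -Path $mm -Name "FeatureSettingsOverrideMask" -PropertyType DWord -Value 3 -Force | Out-Null

# A reboot is required for the settings to take effect.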

Dhaliwal manages approximately 5,000 Windows systems, ranging from laptops to Windows Server systems, with some models several years old. He is exploring a way to automate the firmware collection and deployment process, but certain security restrictions make this task even more challenging. His organization requires BitLocker on all systems, which must be disabled to apply a firmware update; otherwise, he could run into encryption key problems.

“This is not going to be an overnight process,” Dhaliwal said.

How expansive are Meltdown and Spectre?

Attacks that use Meltdown and Spectre exploit a flaw in how many CPUs implement address space layout randomization. The difference between the two vulnerabilities is the kind of memory that is exposed to the attacker. Exploits that use the flaws can expose data that resides in the system’s memory, such as login information from a password manager.

Microsoft noted Meltdown and Spectre exist in many processors — Intel, AMD and ARM — and other operating systems, including Google Android and Chrome, and Apple iOS and macOS.  Apple reportedly has closed the vulnerabilities in its mobile phones, while the status of Android patching varies depending on the OEM. Meltdown only affects Intel processors, and the Spectre exploit works with processors from Intel, AMD and ARM, according to researchers.

Virtualized workloads may require fine-tuning

Some administrators have confirmed early reports that the Meltdown and Spectre patches from Microsoft affect system performance.

 Dave Kawula, principal consultant at TriCon Elite Consulting, applied the updates to his Windows Server 2016 setup and ran the VM Fleet utility, which runs a stress test with virtualized workloads on Hyper-V and the Storage Spaces Direct pooled storage feature. The results were troubling, with preliminary tests showing a performance loss of about 35%, Kawula said.

 “As it stands, this is going to be a huge issue,” he said. “Administrators better rethink all their virtualization farms, because Meltdown and Spectre are throwing a wrench into all of our designs.”

Intel has been updating its BIOS code since the exploits were made public, and the company will likely refine its firmware to reduce the impact from the fix, Graham said.

For more information about the remaining security bulletins for January Patch Tuesday, visit Microsoft’s Security Update Guide.

Tom Walat is the site editor for SearchWindowsServer. Write to him at twalat@techtarget.com or follow him @TomWalatTT on Twitter.

How does Data Protection Manager 2016 save and restore data?

By default, System Center Data Protection Manager 2016 stores backups on the DPM server. But administrators have the flexibility to put those backups on storage that is located — and partitioned — elsewhere.

To get started, IT administrators install a DPM agent on every computer to protect, then add that machine to a protection group in DPM. A protection group is a collection of computers that all share the same protection settings or configurations, such as the group name, protection policy, disk target and replica method.

After the agent installation and configuration process, DPM produces a replica for every protection group member, which can include volumes, shares, folders, Exchange storage groups and SQL Server databases. System Center Data Protection Manager 2016 builds replicas in a provisioned storage pool.

After DPM generates the initial replicas, its agents track changes to the protected data and send that information to the DPM server. DPM will then use the change journal to update the file data replicas at the intervals specified by the configuration. During synchronization, any changes are sent to the DPM server, which applies them to the replica.

DPM also periodically checks the replica for consistency with block-level verification and corrects any problems in the replica. Administrators can set recovery points for a protection group member to create multiple recoverable versions for each backup.

Application data backups require additional planning

Application data protection can vary based on the application and the selected backup type. Administrators need to be aware that certain applications do not support every DPM backup type. For example, Microsoft Virtual Server and some SQL Server databases do not support incremental backups.

For a synchronization job, System Center Data Protection Manager 2016 tracks application data changes and moves them to the DPM server, similar to an incremental backup. Updates are combined with the base replica to form the complete backup.

For an express full backup job, System Center Data Protection Manager 2016 uses a complete Volume Shadow Copy Service snapshot, but transfers only changed blocks to the DPM server. Each full backup creates a recovery point for the application’s data.

Generally, incremental synchronizations are faster to back up but can take longer to restore. To balance the time needed to restore content, DPM periodically creates full backups that integrate any collected changes, which speeds up a recovery. DPM can support up to 64 recovery points for each member of a protection group. However, it can also support up to 448 full backups and 96 incremental backups for each full backup.

The DPM recovery process is straightforward regardless of the backup type or target. Administrators select the desired recovery point with the Recovery Wizard in the DPM Administrator Console. DPM will restore the data from that point to the desired target or destination. The Recovery Wizard will denote the location and availability of the backup media. If the backup media — such as tape — is not available, the restoration process will fail.

Prevent Exchange Server virtualization deployment woes

There are other measures administrators should take to keep the email flowing.

In my work as a consultant, I find many customers get a lot of incorrect information about virtualizing Exchange. These organizations often deploy Exchange on virtual hardware in ways that Microsoft does not support or recommend, which results in major performance issues. This tip will explain the proper way to deploy Exchange Server on virtual hardware and why it’s better to avoid cutting-edge hypervisor features.

When is Exchange Server virtualization the right choice?

The decision to virtualize a new Exchange deployment would be easy if the only concerns were technical. This choice gets difficult when politics enter the equation.

Email is one of the more visible services provided by an IT department. Apart from accounting systems, companies rely on email services more than other information technology. Problems with email availability can affect budgets, jobs — even careers.  

Some organizations spend a sizable portion of the IT department budget on the storage systems that run under the virtual platform. It may be a political necessity to use those expensive resources for high-visibility services such as messaging even when it is less expensive and overall a better technical answer to deploy Exchange on dedicated hardware. While I believe that the best Exchange deployment is almost always done on physical hardware — in accordance with the Preferred Architecture guidelines published by the Exchange engineering team — a customer’s requirements might steer the deployment to virtualized infrastructure.

How do I size my virtual Exchange servers?

Microsoft recommends sizing virtual Exchange servers the same way as physical Exchange servers. My recommendations for this procedure are:

  • Use the Exchange Server Role Requirements Calculator as if the intent was to build physical servers.
  • Take the results, and create virtual servers that are as close as possible to the results from the calculator.
  • Turn off any advanced virtualization features in the hypervisor.

Why should I adjust the hypervisor settings?

Some hypervisor vendors say that the X or Y feature in their product will help the performance or stability of virtualized Exchange. But keep in mind these companies want to sell a product. Some of those add-on offerings are beneficial, some are not. I have seen some of these vaunted features cause terrible problems in Exchange. In my experience, most stable Exchange Server deployments do not require any fancy virtualization features.

What virtualization features does Microsoft support?

Microsoft’s support statement for virtualization of Exchange 2016 is lengthy, but the essence is to make the Exchange VMs as close to physical servers as possible.

Microsoft does not support features that move a VM from one host to another unless the failover event results in cold boot of the Exchange Server. The company does not support features that allow resource sharing among multiple VMs of virtualized Exchange.

Where are the difficulties with Exchange Server virtualization?

The biggest problem with deploying Exchange on virtual servers is that it’s often impossible to follow the proper deployment procedures, specifically validating the storage IOPS of a new Exchange Server with Jetstress. This tool checks that the storage hardware delivers enough IOPS to Exchange for a smooth experience.

Generally, a virtual host will use shared storage for the VMs it hosts. Running Jetstress on a new Exchange VM on that storage setup will cause an outage for other servers and applications. Due to this shared arrangement, it is difficult to gauge whether the storage equipment for a virtualized Exchange Server will provide sufficient performance.  

While it’s an acceptable practice to run Exchange Server on virtual hardware, I find it often costs more money and performs worse than a physical deployment. That said, there are often circumstances outside of the control of an Exchange administrator that require the use of virtualization.

To avoid trouble, try not to veer too far from Microsoft’s guidelines. The farther you stray from the company’s recommendations, the more likely you are to have problems.

December Patch Tuesday closes year on a relatively calm note

Administrators were greeted with a subdued December Patch Tuesday, a quiet end to a year that started off tumultuously in early 2017.

Of the 32 unique Common Vulnerabilities and Exposures (CVEs) that Microsoft addressed, just three patches were directly related to Windows operating systems. While not a critical exploit, the patch for CVE-2017-11885, which affects Windows client and server operating systems, is where administrators should focus their attention.

The patch is for a Remote Procedure Call (RPC) vulnerability for machines with the Routing and Remote Access service (RRAS) enabled. RRAS is a Windows service that allows remote workers to use a virtual private network to access internal network resources, such as files and printers.

“Anyone who has RRAS enabled is going to want to deploy the patch and check other assets to make sure RRAS is not enabled on any devices that don’t use it actively to prevent the exploitation,” said Gill Langston, director of product management at Qualys Inc., based in Redwood City, Calif.
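
One way to spot stray RRAS installations is to query the Routing and Remote Access service across a list of servers. This is only a sketch; the server names are placeholders.

# Check whether the Routing and Remote Access service (RemoteAccess) is
# running on a set of Windows servers.
$servers = "SRV01", "SRV02", "SRV03"
Get-Service -Name RemoteAccess -ComputerName $servers -ErrorAction SilentlyContinue |
    Where-Object { $_.Status -eq "Running" } |
    Select-Object MachineName, Status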

The attacker triggers the exploit by running a specially crafted application against a Windows machine with RRAS enabled.

“Once the bad actor is on the endpoint, they can then install applications and run code,” Langston said. “They establish a foothold in the network, then see where they can spread. The more machines you have under your control, the more ability you have to move laterally within the organization.”

In addition, desktop administrators should roll out updates promptly to apply 19 critical fixes that affect the Internet Explorer and Edge browsers, Langston said.

“The big focus should be on browsers because of the scripting engine updates Microsoft seems to release every month,” he said. “These are all remote-code execution type vulnerabilities, so they’re all critical. That’s obviously a concern because that’s what people are using for browsing.”

Fix released for Windows Malware Protection Engine flaw

On Dec. 6, Microsoft sent out an update to affected Windows systems for a Windows Malware Protection Engine vulnerability (CVE-2017-11937). This emergency repair closed a security hole in Microsoft’s antimalware application, affecting systems on Windows 7, 8.1 and 10, and Windows Server 2016. Microsoft added this correction to the December Patch Tuesday updates.

“The fix happened behind the scenes … but it was recommended [for] administrators using any version of the Malware Protection Engine that it’s set to automatically update definitions and verify that they’re on version 1.1.14405.2, which is not vulnerable to the issue,” Langston said.
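
On a machine running Windows Defender, one quick way to confirm that version is with the Defender PowerShell module; treat this as a sketch and compare the result with the version Langston cites.

# Show the Malware Protection Engine version on a Windows Defender system.
(Get-MpComputerStatus).AMEngineVersion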

OSes that lack the update are susceptible to a remote-code execution exploit if the Windows Malware Protection Engine scans a specially crafted file, which would give the attacker a range of access to the system. That includes the ability to view and delete data and to create a new account with full user rights.

Other affected Microsoft products include Exchange Server 2013 and 2016, Microsoft Forefront Endpoint Protection, Microsoft Security Essentials, Windows Defender and Windows Intune Endpoint Protection.

“Microsoft uses the Forefront engine to scan incoming email on Exchange 2013 and Exchange 2016, so they were part of this issue,” Langston said.

Lessons learned from WannaCry

Microsoft in May surprised many in IT when the company released patches for unsupported Windows XP and Windows Server 2003 systems to stem the tide of WannaCry ransomware attacks. Microsoft had closed this exploit for supported Windows systems in March, but it took the unusual step of releasing updates for OSes that had reached end of life.

Many of the Windows malware threats from early 2017 spawned from exploits found in the Server Message Block (SMB) protocol, which is used to share files on the network. The fact that approximately 400,000 machines got bit by the ransomware bug showed how difficult it is for IT to keep up with patching demands.

“WannaCry woke people back up to how critical it is to focus on your patch cycles,” Langston said.

More than three months elapsed between March, when Microsoft first patched the SMB vulnerability that WannaCry exploited, and the arrival of the Petya ransomware, which used the same SMB exploit to compromise machines that remained unpatched. Some administrators might be lulled into a false sense of security by the cumulative update servicing model and delay the patching process, Langston said.

“They may delay because the next rollup will cover the updates they missed, but then that’s more time those machines are unprotected,” he said.

For more information about the remaining security bulletins for December Patch Tuesday, visit Microsoft’s Security Update Guide.

Tom Walat is the site editor for SearchWindowsServer. Write to him at twalat@techtarget.com or follow him @TomWalatTT on Twitter.

Monitor Active Directory replication via PowerShell

When Active Directory replication breaks down, administrators need to know quickly to prevent issues with the services and applications that Active Directory oversees.

It is important to monitor Active Directory replication to ensure the process remains healthy. Larger organizations that use Active Directory typically have several domain controllers that rely on replication to synchronize networked objects — users, security groups, contacts and other information — in the Active Directory database. Changes in the database can be made at any domain controller, which must then be duplicated to the other domain controllers in an Active Directory forest. If the changes are not synchronized to a particular domain controller — or all domain controllers — in an Active Directory site, users in that location might encounter problems.

For example, if an administrator applies a security policy setting via a Group Policy Object to all workstations, all domain controllers in a domain should pick up the GPO changes. If one domain controller in a particular location fails to receive this update, users in that area will not receive the security configuration.

Why does Active Directory replication break?

Active Directory replication can fail for several reasons. If network ports between the domain controllers are not open or if the connection object is missing from a domain controller, then the synchronization process generally stops working.

Since domain controllers rely on the domain name system, if their service records are missing, the domain controllers will not communicate with each other, which causes a replication failure.
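
A few quick checks can confirm whether these conditions are the cause; the domain name below is a placeholder for your own forest.

# Summarize replication and check DNS registration from any domain controller.
repadmin /showrepl *
dcdiag /test:dns /v
nslookup -type=SRV _ldap._tcp.dc._msdcs.example.com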

Check Active Directory replication status manually

There are many ways to check the Active Directory replication status manually.

Administrators can run the following command with the repadmin command-line utility to show the replication errors in the Active Directory forest:
repadmin /replsum /bysrc /bydest /errorsonly

Administrators can also use the Get-ADReplicationPartnerMetadata PowerShell cmdlet to check the replication status, which is what the script later in this article uses.

Use a script to check replication health

While larger organizations might have an enterprise tool, such as System Center Operations Manager, to monitor Active Directory, a PowerShell script can be a helpful supplement to alert administrators on the replication status. Because so much of a business relies on a properly functioning Active Directory system, it can’t hurt to implement this script and have it run every day via a scheduled task. If the script finds an error, it will send an alert via email.

The system must meet a few requirements before executing the script:

  • The script runs on a computer that can reach all domain controllers.
  • Microsoft recommends a computer running Windows Server 2012 R2 or a domain-joined Windows 10 machine in the Active Directory forest.
  • The computer must have the Active Directory PowerShell module installed.

How does the script work?

The PowerShell script uses the Get-ADReplicationPartnerMetadata cmdlet, which connects to a primary domain controller emulator in the Active Directory forest and then collects the replication metadata for each domain controller.

The script checks the value of the LastReplicationResult attribute for each domain controller entry. If the value of LastReplicationResult is nonzero for any domain controller, the script considers this a replication failure. When it finds an error, the script executes the Send-MailMessage cmdlet to send an email with the report attached as a CSV file. The script stores the replication report in C:\Temp\ReplStatus.csv.

Modify the email settings in the script to supply the destination address, subject line and message body.

PowerShell script to check replication status

The following PowerShell script helps admins monitor Active Directory for these replication errors and delivers the findings via email. Be sure to modify the email settings in the script.

$ResultFile = "C:\Temp\ReplStatus.csv"
$ADForestName = "TechTarget.com"

# Find the PDC emulator for the forest root domain; it serves as the enumeration server.
$GetPDCNow = Get-ADForest $ADForestName | Select-Object -ExpandProperty RootDomain | Get-ADDomain | Select-Object -Property PDCEmulator
$GetPDCNowServer = $GetPDCNow.PDCEmulator
$FinalStatus = "Ok"

# Export every replication partner entry that reports a failure (a nonzero result).
Get-ADReplicationPartnerMetadata -Target * -Partition * -EnumerationServer $GetPDCNowServer -Filter {(LastReplicationResult -ne "0")} | Select-Object LastReplicationAttempt, LastReplicationResult, LastReplicationSuccess, Partition, Partner, Server | Export-Csv "$ResultFile" -NoTypeInformation -Append -ErrorAction SilentlyContinue

$TotNow = Get-Content $ResultFile
$TotCountNow = $TotNow.Count

IF ($TotCountNow -ge 2)
{
    $AnyOneOk = "Yes"
    $RCSV = Import-Csv $ResultFile
    ForEach ($AllItems in $RCSV)
    {
        IF ($AllItems.LastReplicationResult -eq "0")
        {
            $FinalStatus = "Ok"
            $TestStatus = "Passed"
            $SumVal = ""
            $TestText = "Active Directory replication is working."
        }
        else
        {
            $AnyGap = "Yes"
            $SumVal = ""
            $TestStatus = "Critical"
            $TestText = "Replication errors occurred. Active Directory domain controllers are causing replication errors."
            $FinalStatus = "NOTOK"
            break
        }
    }
}

$TestText

IF ($FinalStatus -eq "NOTOK")
{
    ## Since some replication errors were reported, start the email procedure here...

    ### START - Modify email parameters here
    $message = @"
Active Directory Replication Status

Active Directory Forest: $ADForestName

Thank you,
PowerShell Script
"@

    $SMTPPasswordNow = "PasswordHere"
    $ThisUserName = "UserName"
    $MyClearTextPassword = $SMTPPasswordNow
    $SecurePassword = ConvertTo-SecureString -String $MyClearTextPassword -AsPlainText -Force
    $ToEmailNow = "EmailAddressHere"
    $EmailSubject = "SubjectHere"
    $SMTPUseSSLOrNot = "Yes"
    $SMTPServerNow = "SMTPServerName"
    $SMTPSenderNow = "SMTPSenderName"
    $SMTPPortNow = "SMTPPortHere"
    ### END - Modify email parameters here

    $AttachmentFile = $ResultFile
    $creds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "$ThisUserName", $SecurePassword

    Send-MailMessage -Credential $creds -SmtpServer $SMTPServerNow -From $SMTPSenderNow -Port $SMTPPortNow -To $ToEmailNow -Subject $EmailSubject -Attachments $AttachmentFile -UseSsl -Body $message
}

When the script completes, it generates a file that details the replication errors.

[Figure: Replication error report. The PowerShell script compiles the Active Directory replication errors in a CSV file and delivers those results via email.]

Administrators can run this script automatically through the Task Scheduler. Since the script takes about 10 minutes to run, it might be best to set it to run at a time when it will have the least impact, such as midnight.
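
As an illustration, the task could be registered with the ScheduledTasks module; the script path and account below are placeholders.

# Register the replication check to run daily at midnight.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Check-ADReplication.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At "00:00"
Register-ScheduledTask -TaskName "Check AD Replication" -Action $action -Trigger $trigger -User "CONTOSO\svc-adreport" -Password "PasswordHere" -RunLevel Highest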

With AI-based cloud management tools, context is king

Administrators who struggle to get deeper insight into cloud infrastructure and application performance have a new ally: artificial intelligence.

Some emerging and legacy IT vendors have infused AI technology into their cloud management tools. While their feature sets — such as the ability to analyze host performance, optimize costs and set up alerts — sound similar to those found in more traditional third-party management tools, these AI-based platforms reach a new level of sophistication, providing greater granularity and broader context, according to IT pros.

Travis Perkins PLC, a retail provider for the home improvement and construction markets based in the U.K., uses Dynatrace’s AI-based performance monitoring platform for its on-premises and Amazon Web Services (AWS) environments. Rather than focus on higher-level metrics related to host servers or instances, the tool reports more granularly on aspects like Java runtime code and errors, said Abdul Rahman Al-Tayib, e-commerce DevOps team leader at the company. This enables his team to perform faster and more precise root-cause analysis when something goes wrong, and better assess the overall impact any issues might have on the business.

“When it comes down to investigating or looking into specific elements of performance where we have had challenges, rather than having to do the investigation manually, [Dynatrace combines] it all into one report,” Al-Tayib said. “So, it tells you, ‘This service here failed to fire, and, therefore, it caused this series of events, which was then related back to [a disruption at your] customer.’ You can immediately see where the challenge is.”

To initiate this root-cause analysis, users install a Dynatrace agent on their host machine to identify the various dependencies between resources and help correlate certain events with any issues that arise, explained Alois Reitbauer, chief technology strategist at the company, based in Waltham, Mass.

“If you have a host that is running out of CPU, and the service running on that host has a response-time problem, [the tool can tell] these are related to each other,” Reitbauer said.

More sophisticated anomaly detection, or identifying when an IT service is performing in an abnormal way, is another feature that makes AI-based management tools stand out. To do this, the Dynatrace tool performs auto-baselining — an automatic process that assesses baseline, or standard, system performance by applying different algorithms for metrics such as response time, failure rate and throughput.

After the tool extrapolates what normal performance looks like, it alerts IT teams to any deviations from that behavior. To avoid being bombarded with alerts, users can further specify performance thresholds, and the tool also applies algorithms to assess criticality.
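
Dynatrace's algorithms are proprietary, but the underlying idea of a baseline plus a deviation threshold can be sketched generically; the sample values and the three-sigma limit below are arbitrary.

# Generic baselining sketch (not Dynatrace's algorithm): learn a baseline from
# historical response times, then flag new measurements that exceed it by
# more than three standard deviations.
$history = 320, 305, 290, 330, 315, 298, 310, 302, 318, 295   # ms
$avg = ($history | Measure-Object -Average).Average
$variance = ($history | ForEach-Object { ($_ - $avg) * ($_ - $avg) } | Measure-Object -Average).Average
$limit = $avg + 3 * [math]::Sqrt($variance)

$newSamples = 312, 1450, 305   # latest measurements, in ms
$newSamples | Where-Object { $_ -gt $limit } | ForEach-Object {
    Write-Warning ("Response time {0} ms exceeds the baseline limit of {1:N0} ms" -f $_, $limit)
}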

“If I have two hosts that have infrastructure problems … I obviously care more about the problem that might be with a checkout function for a cart in an e-commerce application than the other one that maybe does some background batch processing,” Reitbauer said. “[That] user context, from an infrastructure case, is of main importance.”

This ability for AI-powered cloud management tools to weed out noncritical alerts has been a boon to other users, as well. According to a network and infrastructure capacity planner at a cloud storage provider that uses AWS for its back-end infrastructure, that capability was one of the main reasons his company adopted an AI-based cloud management tool called YotaScale.

The capacity planner, who asked to remain anonymous, conducted evaluations on several third-party cloud management tools, but found that YotaScale allowed him to “suppress a lot of the noise” that can come with those tools’ alerts and recommendations.

For example, a company might spin up some AWS instances for a new research and development project, and those instances tend to have low utilization as the project ramps up, he said. Third-party cloud management tools might recommend to right-size those instances or reserve them via an AWS Reserved Instance, but in this case, those suggestions are irrelevant.

“That’s not how we would really do things in a bootstrapping scenario, where we are trying to bring up a new test or project, and so I’m going to ignore those,” he said.

The benefit of the AI layer in tools such as YotaScale is to analyze IT infrastructure through the lens of various business departments or units, according to the Menlo Park, Calif., company’s CEO, Asim Razzaq. In the example above, that’s through the lens of a research and development team.

“We map that enterprise, organizational way of looking at things to the infrastructure,” Razzaq said. “And then, within that context, deliver optimization [recommendations] and anomaly detection.”

The YotaScale tool achieves this business context via user input. Users adjust certain parameters and dismiss recommendations that don’t fit, teaching the tool to detect what’s most relevant over time.

AI replacing humans? Not so fast

One overarching benefit of these AI-based cloud management tools is they reduce the need for humans to perform a lot of this analysis on their own. But even the most sophisticated tools won’t provide the same level of insight — at least not yet — as an IT professional with 20 years of industry experience, said Chris Wilder, analyst at Moor Insights & Strategy.

“These algorithms will be smarter and smarter based on the anomalies they find, but they still don’t have the experience a person would,” Wilder said. “Data, in my opinion, is not a replacement for human expertise. It’s just something to augment it.”

These AI capabilities are still in their early phases, agreed Jay Lyman, analyst at 451 Research. But they will eventually become a must-have for infrastructure management tool vendors.

“We’ll get to a point before too long where every provider is going to have to have some sort of machine learning and AI in their automation,” Lyman said. “I think it will become pretty much a check-box item.”

Will PowerShell Core 6 fill in missing features?

Administrators who have embraced PowerShell to automate tasks and manage systems will need to prepare themselves as Microsoft plans to focus its energies on the open source version called PowerShell Core.

All signs from Microsoft indicate it is heading away from the Windows-only version of PowerShell, which the company said it will continue to support with critical fixes — but no further upgrades. The company plans to release PowerShell Core 6 shortly. Here’s what admins need to know about the transition.

What’s different with PowerShell Core?

PowerShell Core 6 is an open source configuration management and automation tool from Microsoft. As of this article’s publication, Microsoft made a release candidate available in November. PowerShell Core 6 represents a significant change for administrators because it shifts from a Windows-only platform to accommodate heterogeneous IT shops and hybrid cloud networks. Microsoft’s intention is to give administrative teams a single tool to manage Linux, macOS and Windows systems.

What features are not in PowerShell Core?

PowerShell Core runs on .NET Core and uses .NET Standard 2.0, a common library specification that helps some current Windows PowerShell modules work in PowerShell Core.

Because .NET Core is a subset of the .NET Framework, PowerShell Core misses out on some useful features in Windows PowerShell. For example, the workflow feature enables admins to execute tasks or retrieve data through a sequence of automated steps. Workflow is not in PowerShell Core 6, so related capabilities such as sequencing, checkpointing, resumability and persistence are also unavailable.

A few other features missing from PowerShell Core 6 are:

  • Windows Presentation Foundation: This is the group of .NET libraries that enable coders to build UIs for scripts. It offers a common platform for developers and designers to work together with standard tools to create Windows and web interfaces.
  • Windows Forms: In PowerShell 5.0 for Windows, the Windows Forms feature provides a robust platform to build rich client apps with the GUI class library on the .NET Framework. To create a form, the admin loads the System.Windows.Forms assembly, creates a new object of type System.Windows.Forms.Form and calls the ShowDialog method, as shown in the sketch after this list. With PowerShell Core 6, administrators lose this capability.
  • Cmdlets: As of publication, most cmdlets in Windows PowerShell have not been ported to PowerShell Core 6. However, the compatibility with .NET assemblies enables admins to use the existing modules. Users on Linux are limited to modules mostly related to security, management and utility. Admins on that platform can use the PowerShellGet in-box module to install, update and discover PowerShell modules. PowerShell Web Access is not available for non-Windows systems because it requires Internet Information Services, the Windows-based web server functionality.
  • PowerShell remoting: Microsoft has ported Secure Socket Shell (SSH) to Windows, and SSH is already popular in other environments, so SSH-based remoting is likely the best option for PowerShell remoting tasks. Modules such as Hyper-V, Storage, NetTCPIP and DnsClient have not been ported to PowerShell Core 6, but Microsoft plans to add them.
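
To illustrate the Windows Forms point above, here is a minimal sketch that works in Windows PowerShell 5.x but fails in PowerShell Core 6 because the Windows Forms assemblies are not part of .NET Core.

# Build and display a bare-bones form; Windows PowerShell 5.x only.
Add-Type -AssemblyName System.Windows.Forms

$form = New-Object System.Windows.Forms.Form
$form.Text = "Sample form"

$button = New-Object System.Windows.Forms.Button
$button.Text = "Close"
$button.Add_Click({ $form.Close() })

$form.Controls.Add($button)
$form.ShowDialog()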

Is there a new scripting environment?

For Windows administrators, the PowerShell Integrated Scripting Environment (ISE) is a handy editor that admins use to write, test and debug commands to manage networks. But PowerShell ISE is not included in PowerShell Core 6, so administrators must move to a different integrated development environment.

Microsoft recommends admins use Visual Studio Code (VS Code). VS Code is a cross-platform tool and uses web technologies to provide a rich editing experience across many languages. However, VS Code lacks some of PowerShell ISE’s features, such as PSEdit and remote tabs. PSEdit enables admins to edit files on remote systems without leaving the development environment. Despite VS Code’s limitations, Windows admins should plan to migrate from PowerShell ISE and familiarize themselves with VS Code.

What about Desired State Configuration?

Microsoft offers two versions of Desired State Configuration: Windows PowerShell DSC and DSC for Linux. DSC helps administrators maintain control over software deployments and servers to avoid configuration drift.
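
For context, a minimal Windows PowerShell DSC configuration looks like the following; the node name and feature are placeholders.

# Declare the desired state: the Web-Server feature must be present on Server01.
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "Server01" {
        WindowsFeature IIS {
            Ensure = "Present"
            Name   = "Web-Server"
        }
    }
}

# Compile the configuration to a MOF file and push it to the node.
WebServerBaseline -OutputPath C:\DSC
Start-DscConfiguration -Path C:\DSC -Wait -Verbose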

Microsoft plans to combine these two options into a single cross-platform version called DSC Core, which will require PowerShell Core and .NET Core. DSC Core is not dependent on Windows Management Framework (WMF) and Windows Management Instrumentation (WMI) and is compatible with Windows PowerShell DSC. It supports resources written in Python, C and C++.

Debugging in DSC has always been troublesome, and ISE eased that process. But with Microsoft phasing out ISE, what should admins do now? A Microsoft blog says the company uses VS Code internally for DSC resource development and plans to release instructional videos that explain how to use the PowerShell extension for DSC resource development.

PowerShell Core 6 is still in its infancy, but Microsoft’s moves show the company will forge ahead with its plan to replace Windows PowerShell. This change brings a significant overhaul to the PowerShell landscape, and IT admins who depend on this automation tool should pay close attention to news related to its development.
