Expert advice on Windows-based systems and hardware

How to tackle an email archive migration for Exchange Online


A move from on-premises Exchange to Office 365 also entails determining the best way to transfer legacy archives. This tutorial can help ease migration complications.


A move to Office 365 seems straightforward enough until project planners broach the topic of the email archive migration.

Not all organizations keep all their email inside their messaging platform. Many organizations that archive messages also keep a copy in a journal that is archived away from user reach for legal reasons.

The vast majority of legacy archive migrations to Office 365 require third-party tools and must follow a fairly standardized process to complete the job quickly and with minimal expense. Administrators should migrate mailboxes to Office 365 first and then the archive; this order delivers the benefits of Office 365 sooner, before the archive reingestion completes.

An archive product typically scans mailboxes for older items and moves those to longer term, cheaper storage that is indexed and deduplicated. The original item typically gets replaced with a small part of the message, known as a stub or shortcut. The user can find the email in their inbox and, when they open the message, an add-in retrieves the full content from the archive.

Options for archived email migration to Office 365

The native tools to migrate mailboxes to Office 365 cannot handle an email archive migration. When admins transfer legacy archive data for mailboxes, they usually consider the following three approaches:

  1. Export the data to PST archives and import it into user mailboxes in Office 365.
  2. Reingest the archive data into the on-premises Exchange mailbox and then migrate the mailbox to Office 365.
  3. Migrate the Exchange mailbox to Office 365 first, then perform the email archive migration to put the data into the Office 365 mailbox.

Option 1 is not usually practical because it takes a lot of manual effort to export data to PST files. The stubs remain in the user’s mailbox and add clutter.

Option 2 also requires a lot of labor-intensive work and uses a lot of space on the Exchange Server infrastructure to support reingestion.

That leaves the third option as the most practical approach, which we’ll explore in a little more detail.

Migrate the mailbox to Exchange Online

When you move a mailbox to Office 365, it migrates along with the stubs that relate to the data in the legacy archive. The legacy archive will no longer archive the mailbox, but users can access their archived items. Because the stubs usually contain a URL path to the legacy archive item, there is no dependency on Exchange to view the archived message.

Some products that add buttons to restore the individual message into the mailbox will not work; the legacy archive product won’t know where Office 365 is without further configuration. This step is not usually necessary because the next stage is to migrate that data into Office 365.

Transfer archived data

Legacy archive solutions usually have a variety of policies for what happens with the archived data. You might configure the system to keep the stubs for a year but make archive data accessible via a web portal for much longer.

There are instances when you might want to replace the stub with the real message. There might be data that is not in the user’s mailbox as a stub but that users want on occasion.


We need tools that not only automate the data migration, but also understand these differences and can act accordingly. The legacy archive migration software should examine the data within the archive and then run batch jobs to replace stubs with the full messages. In this case, you can use the Exchange Online archive as a destination for archived data that no longer has a stub.

Email archive migration software connects via the vendor API. The software assesses the items and then exports them into a common temporary format — such as an EML file — on a staging server before connecting to Office 365 over a protocol such as Exchange Web Services. The migration software then examines the mailbox and replaces the stub with the full message.
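To make the stub-replacement step concrete, here is a minimal sketch — not any vendor's actual code — that uses the EWS Managed API from PowerShell to count the stub items a migration tool would process. The item class IPM.Note.EnterpriseVault.Shortcut is one vendor's shortcut class, and the DLL path and credentials are placeholders; adjust all three for your environment.

# Load the EWS Managed API (path is a placeholder for a typical install location)
Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\2.2\Microsoft.Exchange.WebServices.dll"
$service = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService
$service.Credentials = New-Object Microsoft.Exchange.WebServices.Data.WebCredentials("user@contoso.com", "Passw0rd!")
$service.AutodiscoverUrl("user@contoso.com", { $true })

# Find items whose item class marks them as legacy archive shortcuts (stubs)
$filter = New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo([Microsoft.Exchange.WebServices.Data.ItemSchema]::ItemClass, "IPM.Note.EnterpriseVault.Shortcut")
$view = New-Object Microsoft.Exchange.WebServices.Data.ItemView(100)
$stubs = $service.FindItems([Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Inbox, $filter, $view)
"Stubs found in the first page of results: $($stubs.TotalCount)"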

An example of a third-party product’s dashboard detailing the migration progress of a legacy archive into Office 365.

Migrate journal data

With journal data, the most accepted approach is to migrate the data into the hidden recoverable items folder of each mailbox related to the journaled item. The end result is similar to using Office 365 from the day the journal began, and eDiscovery works as expected when following Microsoft guidance.

For this migration, the software scans the journal and creates a database of the journal messages. The application then maps each journal message to its mailbox. This process can be quite extensive; for example, an email sent to 1,000 people will map to 1,000 mailboxes.

After this stage, the software copies each message to the recoverable items folder of each mailbox. While this is a complicated procedure, it’s alleviated by software that automates the job.
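As a conceptual sketch of that fan-out, the loop below shows the shape of the work. Get-JournalMessage and Copy-ToRecoverableItems are hypothetical stand-ins for a migration product’s own functions; no native cmdlets perform these steps.

# Hypothetical sketch only: each journaled message is copied to every recipient's mailbox
foreach ($msg in Get-JournalMessage -Archive $journalArchive) {
    foreach ($recipient in $msg.Recipients) {
        # An email sent to 1,000 people produces 1,000 copy operations
        Copy-ToRecoverableItems -Mailbox $recipient -Message $msg
    }
}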

Legacy archive migration offerings

There are many products tailored for an email archive migration, each with its own benefits and drawbacks. I won’t recommend a specific offering, but I will mention two that can migrate more than 1 TB a day, which is a good benchmark for large-scale migrations. They also support chain of custody, which audits the transfer of all data.

TransVault has the most connectors to legacy archive products. Almost all the migration offerings support Enterprise Vault, but if you use a product that is less common, then it is likely that TransVault can move it. The TransVault product accesses source data either via an archive product’s APIs or directly to the stored data. TransVault’s service installs within Azure or on premises.

Quadrotech Archive Shuttle fits in alongside a number of other products suited to Office 365 migrations and management. Its workflow-based process automates the migration. Archive Shuttle handles fewer archive sources, but it does support Enterprise Vault. Archive Shuttle accesses source data via API and agent machines with control from either an on-premises Archive Shuttle instance or, as is more typical, the cloud version of the product.


How to build a Packer image for Azure


Packer is an open source tool that automates the Windows Server image building process to give administrators a consistent approach to create new VMs.


For admins who prefer to roll their own Windows Server image, despite the best of intentions, issues can arise from these handcrafted builds.

To maintain some consistency — and avoid unnecessary help desk tickets — image management tools such as Packer can help construct golden images tailored for different needs. The Packer image tool automates the building process and helps admins manage Windows Server images. Packer scripts the image construction process to produce builds for multiple platforms at the same time. Admins can store validated Packer configurations in code repositories and share them across locations to ensure consistent builds.

Build a Packer image for Azure

To demonstrate how Packer works, we’ll use it to build a Windows Server image. To start, download and install Packer for the operating system of choice. Packer offers an installation guide on its website.

Next, we need to figure out where to create the image. Packer uses components called builders to create images for various services, such as Azure, AWS, Docker, VMware and more. This tutorial will explain how to build a Windows Server image to run in Azure.

To construct an image for Azure, we have to meet a few prerequisites. You need:

  • a service principal for Packer to authenticate to Azure;
  • a storage account to hold the image;
  • the resource group name for the storage account;
  • the Azure subscription ID;
  • the tenant ID for your Azure Active Directory; and
  • a storage container to place the VHD image.
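The Azure CLI can create most of these prerequisites. Here is a minimal sketch, assuming the resource group, storage account and container names used in the template below; the service principal name PackerSP is a placeholder:

az ad sp create-for-rbac --name PackerSP --role Contributor
az group create --name labtesting --location eastus
az storage account create --name adblabtesting --resource-group labtesting --location eastus --sku Standard_LRS
az storage container create --name vhds --account-name adblabtesting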

Validate the Windows Server build instructions


Next, it’s time to set up the image template. Every Packer image requires a JSON file called a template that tells Packer how to build the image and where to put it. An example of a template that builds an Azure image is in the code below. Save it with the filename WindowsServer.Azure.json.

{
  "variables": {
      "client_id": "",
      "client_secret": "",
      "object_id": ""
  },
  "builders": [{
    "type": "azure-arm",

    "client_id": "{{user `client_id`}}",
    "object_id": "{{user `object_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "resource_group_name": "labtesting",
    "storage_account": "adblabtesting",
    "subscription_id": "d660a51f-031d-4b8f-827d-3f811feda5fc",
    "tenant_id": "bb504844-07db-4019-b1c4-7243dfc97121",

    "capture_container_name": "vhds",
    "capture_name_prefix": "packer",

    "os_type": "Windows",
    "image_publisher": "MicrosoftWindowsServer",
    "image_offer": "WindowsServer",
    "image_sku": "2016-Datacenter",
    "location": "East US",
    "vm_size": "Standard_D2S_v3"
  }]
}

Before you build, validate the template schema with the packer validate command. We don’t want sensitive information in the template, so we create the client_id and client_secret variables and pass their values at runtime.

packer validate -var 'client_id=value' -var 'client_secret=value' WindowsServer.Azure.json

How to correct Packer build issues

After the command confirms the template is good, we build the image with nearly the same syntax as the validation command. For the purposes of this article, we will use placeholders for the client_id, client_secret and object_id references.

> packer build -var 'client_id=XXXX' -var 'client_secret=XXXX' -var 'object_id=XXXX' WindowsServer.Azure.json

When you run the build the first time, you may run into a few errors if the setup is not complete. Here are the errors that came up when I ran my build:

  • Build 'azure-arm' errored: The storage account is located in eastus, but the build will take place in West US. The locations must be identical.
  • Build 'azure-arm' errored: storage.AccountsClient#ListKeys: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceGroupNotFound" Message="Resource group 'adblabtesting' could not be found."
  • ==> azure-arm: ERROR: -> VMSizeDoesntSupportPremiumStorage : Requested operation cannot be performed because storage account type 'Premium_LRS' is not supported for VM size 'Standard_A2'.

Video: Using Packer to build an image from another VM.

These error messages are straightforward to fix: match the build location to the storage account’s region, reference the correct resource group, and pick a VM size that supports the storage account type.

However, the following error message is more serious:

==> azure-arm: ERROR: -> Forbidden : Access denied
==> azure-arm:
==> azure-arm:  …failed to get certificate URL, retry(0)

This indicates the use of the wrong object_id. Find the correct object ID for the service principal in your Azure subscription’s role assignments.
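One way to look up the service principal’s object ID is with the Azure CLI; a minimal sketch, assuming the placeholder name PackerSP used earlier:

az ad sp list --display-name PackerSP --query "[].objectId"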

After adding the right object_id, you will find a VHD image in Azure.


How do I get access to Office 365 preview versions?


Organizations that want cutting-edge productivity features and don’t mind a little risk can receive more experimental Office 365 preview versions before Microsoft distributes standard releases.

Like the Office Insider program that gives participants early access to the latest feature updates to Microsoft Office applications, a global administrator can use the Office 365 admin center to determine who gets access to the targeted release for the newest features. Microsoft calls the more widely distributed version its standard release, which is the refined service that goes to all customers.

Setting up the Office 365 preview option

Companies that participate in the targeted release program get a first look at fledgling features, which helps the organization get its IT staff ready to support new functionality and determine whether any security or compliance adjustments need to be made.

By default, all users in an Office 365 tenant get the standard release. To configure the targeted release Office 365 previews for select users or the entire organization, administrators need to make adjustments in the Office 365 admin center.


Sign in to your Office 365 account, go to Settings and select Organization profile. Find Release preferences and click Edit. Administrators can then choose to enable or disable Office 365 targeted release.

  • To enable targeted release for all users, choose Targeted release for everyone, click Next and then select Yes to confirm.
  • To enable targeted release for selected users, choose Targeted release for selected users, click Next and then select Yes to confirm. You can then add individuals by searching for their names and clicking the plus to add them. Click Save and then Close when complete.
  • To disable targeted release, choose Standard release, click Next and then select Yes to confirm.

Administrators can also add users in bulk and add or remove users from targeted release at any time. Once the administrator makes the change, it can take up to 24 hours for the modification to take effect in the organization’s Office 365 tenant.

It’s important for administrators to follow any established business policies or procedures related to this Office 365 preview option to avoid any surprises for end users. For example, an administrator might send a notification to employees designated for the Office 365 targeted release. This note should remind users that they are using a service that is not yet in general release, and that they can report any issues for the administrators to forward to Microsoft to fix in a future release.

Organize Active Directory with these strategies


It’s a familiar refrain for many in the IT field: You start a new job and have to clean up the previous administrator’s handiwork, such as their Active Directory group configuration.

You might inherit an Active Directory group strategy from an admin who didn’t think the process through, leaving you with a setup that doesn’t reflect the usage patterns of your users. Administrators who take the time to organize Active Directory organizational units and groups in a more coherent fashion will simplify their workload by making it easier to audit Active Directory identities and minimize the Active Directory attack surface.

Here are some practical tips and tricks to streamline your Active Directory (AD) administrator work and support your security compliance officers.

The traditional Active Directory design pattern

To start, always organize individual user accounts into groups. Avoid giving access permissions to individual user accounts because that approach does not scale.

Figure 1 shows Microsoft’s recommendation to organize Active Directory user accounts for resource access.

Figure 1. Microsoft recommends the account, global, domain local, permission security model to organize Active Directory user accounts.

The account, global, domain local, permission (AGDLP) model uses the following workflow:

  • Organize users into global groups based on business criteria, such as department and location.
  • Place the appropriate global groups into domain local groups on resource servers based on similar resource access requirements.
  • Grant resource permissions to domain local groups only.

Note how this model uses two different scopes. Global groups organize AD users at the domain level, and domain local groups organize global groups at the access server level, such as a file server or a print server.
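A minimal PowerShell sketch of the AGDLP pattern, assuming the RSAT ActiveDirectory module is installed; the group, user, share and domain names are placeholders:

# A -> G: organize user accounts into a global group by business criteria
New-ADGroup -Name "GL_Mktg_Users" -GroupScope Global -GroupCategory Security
Add-ADGroupMember -Identity "GL_Mktg_Users" -Members jsmith, mjones

# G -> DL: place the global group into a domain local group that represents an access role
New-ADGroup -Name "DL_MktgShare_Modify" -GroupScope DomainLocal -GroupCategory Security
Add-ADGroupMember -Identity "DL_MktgShare_Modify" -Members "GL_Mktg_Users"

# DL -> P: grant the resource permission to the domain local group only
icacls "D:\Shares\Marketing" /grant "CONTOSO\DL_MktgShare_Modify:(OI)(CI)M"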

Employ role-based access control principles

Role-based access control (RBAC) grants access to groups based on job role. For example, consider network printer access:

  • Most users need only the ability to submit and manage their own print jobs.
  • Some users have delegated privileges to manage the entire print queue.
  • Select users have full administrative access to the printer’s hardware and software.

Microsoft helps with some of the planning work by prepopulating RBAC roles in Active Directory. For instance, installing the Domain Name System (DNS) server role creates several sub-administrative groups in Active Directory.

Video: How to set up users and groups in Active Directory.

Instead of relying on prebuilt groups, think about the user population and how to design global and domain local groups. Try to organize Active Directory global groups according to business rules and domain local groups based on access roles.

You might have global groups defined for each business unit at your organization, including IT, accounting, legal, manufacturing and human resources. You might also have domain local groups based on specific job tasks: print queue managers, print users, file access managers, file readers, database reporters and database developers.

When you organize Active Directory, the goals are to describe both the user population and their resource access requirements completely and accurately while you keep the number of global and domain local groups as small as possible to reduce the management workload.

Keep group nesting to a minimum if possible

You should keep group nesting to a minimum because it increases your administrative overhead and makes it more difficult to troubleshoot effective access. You should only populate global groups with individual Active Directory user accounts and only populate domain local groups with global groups.

Figure 2. The effective access tab displays the effective permissions for groups, users and device accounts.

The Windows Server and client operating systems have a feature called effective access, found in the advanced security settings dialog box in a file or folder’s properties sheet. You model effective access for a particular user, group or computer account from this location. But analyzing multiple folders with this feature doesn’t scale. You have to run it multiple times to analyze permissions.
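To audit permissions across many folders at once, PowerShell scales better than the effective access dialog. A minimal sketch, assuming a placeholder share root of D:\Shares:

# Enumerate the access entries on every folder under the share root
Get-ChildItem -Path "D:\Shares" -Directory -Recurse | ForEach-Object {
    $folder = $_.FullName
    (Get-Acl -Path $folder).Access |
        Select-Object @{ n = 'Folder'; e = { $folder } }, IdentityReference, FileSystemRights, AccessControlType
}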

In a multi-domain environment, nesting is unavoidable. Stick to single domain topologies when possible.

Figure 3. A cross-domain resource access configuration in Active Directory offers more flexibility to the administrator.

I recommend the topology in Figure 3 because, while global groups can contain Active Directory user accounts from their own domain only, you can add global groups to discretionary access control lists in any domain in the forest.

Here’s what’s happening in the topology in Figure 3:

  • A: Global groups represent marketing department employees in the contoso.com and corp.contoso.com domains.
  • B: We create a domain local group on our app server named Mktg App Access and populate it with both global groups.
  • C: We assign permissions on our line-of-business marketing app to the Mktg App Access domain local group.


You might wonder why there is no mention of universal groups. I avoid them because they slow down user logon times due to global catalog universal group membership lookups. Universal groups also make it easy to be sloppy during group creation and with resource access strategy.

How to design for the hybrid cloud

Microsoft offers Azure Active Directory for cloud identity services that you can synchronize with on-premises Active Directory user and group accounts, but Azure AD does not support organizational units. Azure AD uses a flat list of user and group accounts that works well for identity purposes.

With this structure in mind, proper user and group naming is paramount. You should also sufficiently populate Active Directory properties to make it easier to manage these accounts in the Azure cloud.

When you need to organize Active Directory groups, develop a naming convention that makes sense to everyone on your team and stick to it.

One common group naming pattern involves prefixes. For example, you might start all your global group names with GL_ and your domain local group names with DL_. If you use Exchange Server, then you will have distribution groups in addition to the AD security groups. In that instance, you could use the DI_ prefix.

Thorough Exchange Server testing regimen eliminates doubt


You’ve done your due diligence and bought the right hardware for your move to Exchange Server 2016. After you put all the pieces together, it’s time for a thorough Exchange Server testing regimen.

A few factors have many organizations that use on-premises Exchange planning a move to Exchange Server 2016: Exchange Server 2010 has less than two years before it leaves extended support, and Exchange Server 2013 left mainstream support this year.

After the decision to move to Exchange 2016, the next stage of the planning process involves using the Exchange Server role requirements calculator to size your deployment, creating a suitable design, and then purchasing and installing the hardware to support your implementation. But this is only part of the overall effort.

There are vital areas you must test in the time after implementing Exchange Server 2016 and before migrating mailboxes. As part of the overall Exchange Server testing strategy, you should check the functionality of the storage infrastructure next.

Using Jetstress to test storage performance

Whether you run Exchange Server on physical hardware and follow Microsoft’s preferred architecture or you use virtual infrastructure, make sure your storage meets the IOPS requirements outlined in the Exchange Server role requirements calculator.

Video: A walkthrough of an Exchange Server 2016 installation.

The calculator recommends the RAM and storage needed. The organization determines what type of disks to purchase and their size. However, it’s possible to buy the wrong type of disk controller or to receive a faulty drive. Eliminate any doubt about the hardware and run tests on the storage.

For this segment of Exchange Server testing, use Jetstress to generate a workload with the same Exchange Server 2016 binaries, database and log file configuration used in the Exchange Server deployment. Microsoft also supplies a Jetstress field guide to follow when planning your storage test. Microsoft developed the instructions for Exchange Server 2013, but they remain supported and applicable to Exchange Server 2016.

The Jetstress application checks the stability and performance of the storage infrastructure of an Exchange Server deployment.

Some might say this test is a waste of time if you purchased correctly sized hardware, but it’s better to get confirmation. Jetstress can fail the storage hardware because of a configuration error or some other problem, and tracking down the cause may involve extensive troubleshooting.

After you implement Exchange Server 2016


After the servers pass the Jetstress test, start your deployment of Exchange Server 2016. What is right for your organization will vary, but in most circumstances, admins implement a database availability group (DAG) with the Exchange mailbox servers along with the appropriate load balancing and, where appropriate, backup software.

What Exchange admins need is a checklist to verify all the settings before you go live. Although every Exchange deployment differs slightly, there are key areas worth checking to avoid any surprises.

Set up Exchange for basic tests

You should test the Exchange 2016 infrastructure at a high level to verify its status. At this point, it’s unlikely you have migrated the client access role across, so you might need to reconfigure the local host files on your test clients to run these trials.

  • User accounts: Create test mailboxes in each data center on Exchange 2016.
  • User accounts: Create test mailboxes in each DAG on Exchange 2010.
  • Client: Configure host file records to simulate connectivity to Exchange 2016 load balancers.
  • OWA 2016: Test Outlook on the web/Outlook Web App (OWA) login functionality for an Exchange 2016 user in each data center.
  • OWA 2016: Test reading, opening and replying to emails for an Exchange 2016 user in each data center.
  • OWA 2016: Test creating, updating and modifying a calendar item for an Exchange 2016 user in each data center.
  • OWA 2016: Test creating, updating and modifying a contact item for an Exchange 2016 user in each data center.
  • OWA 2016: Test disabling user access to OWA for security purposes.
  • Email: Test mail flow between Exchange 2016 users in each data center.
  • Email: Test mail flow to an Exchange 2016 user in each data center from Exchange 2010 in each data center.
  • Email: Test mail flow to an Exchange 2016 user in each data center from an external source.
  • Email: Test mail flow from an Exchange 2016 user to an Exchange 2010 user.
  • Email: Test external out-of-office settings of an Exchange 2016 user from an external source.
  • Federation: Test availability of an Exchange 2016 mailbox from an external partner.
  • Federation: Test availability of an external partner’s mailbox from Exchange 2016.
  • Exchange general: Test mailbox move functionality from Exchange 2010 to Exchange 2016 in each DAG.
  • Exchange general: Test mailbox move functionality from Exchange 2016 to Exchange 2010 in each DAG.

Testing each database availability group

After you complete these basic checks, you should run tests with the following PowerShell cmdlets against each DAG to check mailbox services.

  • Service health: Use Test-ServiceHealth to verify services are running.
  • Service health: Use Get-HealthReport to check if each server is healthy.
  • Mail flow: Use Test-Mailflow to test the mail flow against each server.
  • Mail flow: Use Test-SmtpConnectivity to test connectivity to each receive connector.
  • Mailbox: Use Test-ReplicationHealth to validate the DAG continuous replication status.
  • Mailbox: Use Get-MailboxDatabaseCopyStatus to view the health of the database copies within the DAG.
  • Mailbox: Use Test-MapiConnectivity to verify MAPI and LDAP work with a user’s login.
  • Mailbox: Use Test-AssistantHealth to check that the Mailbox Assistants service is running and healthy against each server.
  • Client access: Use the Microsoft Connectivity Analyzer to execute the Outlook connectivity tests.
  • Client access: Use Test-WebServicesConnectivity to test client connectivity to Exchange Web Services virtual directories against each server.
  • Client access: Use Test-ActiveSyncConnectivity to simulate a full synchronization with a mobile device.
  • Client access: Use a browser to log on to the Exchange Admin Center to verify functionality of all Exchange 2016 servers.
  • Client access: Use Test-MRSHealth to verify that the Mailbox Replication service is running and that it responds to a remote procedure call ping check.
  • High availability: Validate that the passive copy of databases in the same data center comes online automatically after a failure of a database.
  • High availability: Validate that the services running in the secondary data center continue to operate without interruption after failing all the servers within the DAG in the primary data center.
  • High availability: Manually remove a disk from a passive database to test if auto reseed works as expected. Reverse the process to return the disks to the original state.
  • High availability: Perform a cold start of the DAG to validate that the DAG will start correctly if a major outage occurs.
  • Load balancer: Disable all load balanced servers for each server in turn within the same data center. Validate client access and mail flow for mailboxes hosted on failed servers.
  • Load balancer: Disable all load balanced services within the first data center. Validate client access and mail flow for mailboxes hosted on the failed data center.
  • Load balancer: Disable all load balanced services within the secondary data center. Validate client access and mail flow for mailboxes hosted on the failed data center.
  • Backups: Use Get-MailboxDatabase to validate the right setting for circular logging: disabled if using backup software or enabled if there is no backup software installed.
  • Backups: Perform a full backup of each mailbox database.
  • Backups: Perform an incremental backup of each mailbox database.
  • Backups: Restore a full database to a temporary location and recover a mailbox.
  • Backups: Restore a full database to the original location and mount it.
  • Unified messaging: Test leaving a voicemail to an Exchange 2016 mailbox.
  • Unified messaging: Test receiving a voicemail in an Exchange 2016 mailbox via the Outlook client.
  • Unified messaging: Test receiving a voicemail in an Exchange 2016 mailbox via Play on Phone.
  • Unified messaging: Test access to Outlook Voice Access in Exchange 2016.
  • Unified messaging: Test instant messaging sign-in to Exchange 2016.
  • Unified messaging: Test Skype for Business meeting scheduling in OWA.
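As a quick sanity pass, several of these cmdlets can run in sequence from the Exchange Management Shell. A minimal sketch, with EX16-01 as a placeholder server name:

# Core health checks against one mailbox server
Test-ServiceHealth -Server EX16-01
Get-HealthReport -Identity EX16-01
Test-Mailflow EX16-01
Test-ReplicationHealth -Identity EX16-01
Get-MailboxDatabaseCopyStatus -Server EX16-01
Test-MAPIConnectivity -Server EX16-01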

Check client connectivity

In the final stage of Exchange Server testing, you should examine client connectivity. If the Exchange system passes all the previous tests, then basic connectivity is most likely fine. It’s important to run a full set of tests using the builds of the clients the end users will use.

Your checklist might vary from the one below to include the different Outlook versions and mobile devices to test.

  • Outlook 2016: Test before/after migration experience.
  • Outlook 2016: Test Autodiscover in Exchange 2016.
  • Outlook 2016: Test cached mode access via Exchange 2016 in each data center.
  • Outlook 2016: Test offline address book download functionality via Exchange 2016 in each data center.
  • Outlook 2016: Test Exchange Web Services — free, busy, out of office — functionality via Exchange 2016 in each data center.
  • Outlook 2016: Test Outlook Anywhere functionality via Exchange 2016 in each data center.
  • Outlook 2016: Test mail send/receive/synchronization via Exchange 2016 in each data center.
  • Outlook 2016: Test open additional mailbox functionality via Exchange 2016 in each data center.
  • Outlook 2016: Test open additional on-premises mailbox functionality via Exchange 2016 in each data center.
  • Mobile device: Test Autodiscover functionality in Exchange 2016 in each DAG.
  • Mobile device: Test ActiveSync synchronization in Exchange 2016 in each DAG.
  • Skype for Business client: Test Exchange access after migration to Exchange 2016.

As you test, record the results for reference purposes. For example, you may wish to:

  • collect screenshots or screen recordings as you test;
  • work with a colleague to help oversee the testing process and sign off on the checklist; and
  • add a column for notes on any remediation actions needed before retesting the environment.

Further investigation required for a full test

This list is just a starting point. Consider if you need to refine the checklist to fit your specific needs. Perhaps you need to add tests to cover aspects like Office 365 hybrid or public folder migrations.

This article should be a useful start for administrators about to embark on an Exchange Server 2016 deployment.

What is the future for Microsoft Exchange Unified Messaging?


With the influx of cloud services integrating with on-premises products, many Exchange administrators wonder what changes they’ll see in the next version of Exchange Server, particularly with unified messaging. To predict the evolution of Microsoft Exchange Unified Messaging, we can glean some clues from how the company implemented and changed this feature in previous Exchange versions.

The company’s strategy the last several years has been a gradual shift from on-premises products to subscription-based cloud services. In some instances, Microsoft has integrated a cloud service with an on-premises product to sway enterprises toward its cloud offerings. For example, despite its name, Office 365 Advanced Threat Protection filters email for on-premises Exchange Server.

Microsoft plans to release Exchange Server 2019 in the second half of 2018. As of this article’s publication, the preview is under development. But we should expect Microsoft to add — and remove — some cloud-based features in this next version.

History of Microsoft Exchange Unified Messaging

Microsoft debuted unified messaging in Exchange 2000 with its instant messaging (IM) feature on the company’s Real-Time Collaboration server platform.

When Microsoft released Exchange 2003, the company included its Live Communications Server (LCS) that split off some functionality from the Real-Time Collaboration stack. The LCS controlled IM, video and voice functions on the platform.

In July 2006, Microsoft and Nortel formed the Innovative Communications Alliance to share technology, improve their unified communications platforms and integrate Nortel telecommunications hardware with Microsoft software.

Microsoft then released Office Communications Server (OCS) 2007, which brought the company closer to a complete telephone system. OCS 2007 did not integrate into the public switched telephone network, but it did allow voice over internet protocol (VoIP) calls. With VoIP, Microsoft needed voicemail, and the company incorporated this feature in Exchange Server 2007.

With all these changes, many IT pros have difficulty understanding what telephone services OCS, Lync and Skype for Business provide and which ones Exchange Server handles. Exchange Server answers the calls, but it does not provide any telephone services up to that point. Microsoft Exchange unified messaging can pick up the line and provide services after a call connects.

Microsoft Exchange unified messaging in the cloud

Following the launch of Office 365 in 2011, Microsoft focused on the development of its cloud products.

Lync Server 2010 was Microsoft’s private branch exchange (PBX) product on the market when the company released Office 365, but its features were limited compared with a cloud PBX offering. Lync Online — now called Skype for Business — only controlled IM and presence services for users who had mailboxes migrated into the service.


Exchange Online, the hosted email service from Microsoft, provided a full unified messaging service from the cloud with all the same features of the on-premises version of Exchange. Companies could tie on-premises telephone systems to Exchange Online to use the cloud service for voicemail.

Microsoft now offers its Azure Voicemail cloud service, which replaces the unified messaging functionality of Exchange Online for customers who use the Skype for Business Cloud PBX.

Unified messaging in Exchange Server 2013 and 2016

With Exchange 2003, Microsoft introduced the concept of a front-end Exchange server, although this wasn’t yet a complete separation of Exchange functions into distinct server roles. Exchange 2007 and 2010 both featured differentiated server roles, such as the Mailbox, Hub Transport and Client Access roles.

With the release of Exchange 2013, Microsoft pared back those roles to more of a front-end/back-end configuration. Exchange 2016 features one Exchange server role other than the edge transport role, which is designed to be deployed in a demilitarized zone.

The Microsoft Exchange unified messaging role received very little development in Exchange Server 2013 and 2016. The only change for unified messaging in those releases is that Exchange 2016 no longer supports the deployment of separate roles. This trend will likely continue with the release of Exchange 2019, with a single deployment option for all the roles on the same physical server.

The future of Microsoft Exchange Unified Messaging

At the moment, details on Exchange 2019 are sparse. Microsoft plans to release a public preview of the on-premises product in mid-2018.

Based on recent trends from Microsoft, we can expect that Exchange Server 2019’s minimum requirements will include supported versions of the Windows Server operating system and Active Directory.

Microsoft will continue to steer organizations to its online services over on-premises software. Companies that want the latest features and functionality will need to consider whether a move to Exchange Online is a better fit.

Kubernetes in Azure eases container deployment duties


With the growing popularity of containers in the enterprise, administrators require assistance to deploy and manage these workloads, particularly in the cloud.

When you consider the growing prevalence of Linux and containers both in Windows Server and in the Azure platform, it makes sense for administrators to get more familiar with how to work with Kubernetes in Azure.

Containers help developers streamline the coding process, while orchestrators give the IT staff a tool to deploy these applications in a cluster. One of the more popular tools, Kubernetes, automates the process of configuring container applications within and on top of Linux across public, private and hybrid clouds.

For companies that prefer to use Azure for container deployments, Microsoft developed the Azure Kubernetes Service (AKS), a hosted control plane, to give administrators an orchestration and cluster management tool for its cloud platform.

Why containers and why Kubernetes?

There are many advantages to containers. Because they share an operating system, containers are lighter than virtual machines (VMs). Patching containers is less onerous than it is for VMs; the administrator just swaps out the base image.

On the development side, containers are more convenient. Containers are not reliant on underlying infrastructure and file systems, so they can move from operating system to operating system without issue.

Kubernetes makes working with containers easier. Most organizations choose containers because they want to virtualize applications and deploy them quickly, integrate them with continuous delivery and DevOps-style workflows, and isolate them from one another for security.

For many people, Kubernetes represents a container platform where they can run apps, but it can do more than that. Kubernetes is a management environment that handles compute, networking and storage for containers.

Kubernetes acts as much like a PaaS as an IaaS offering, and it also deftly handles moving containers across different platforms. Kubernetes organizes clusters of Linux hosts that run containers, turns them off and on, moves them around hosts, configures them via declarative statements and automates provisioning.

Using Kubernetes in Azure

Clusters are sets of VMs designed to run containerized applications. A cluster holds a master VM and agent nodes or VMs that host the containers.


AKS limits the administrative workload that would be required to run this type of cluster on premises. AKS shares the container workload across the nodes in the cluster and redistributes resources when adding or removing nodes. Azure automatically upgrades and patches AKS.

Microsoft calls AKS self-healing, which means the platform will recover from infrastructure problems automatically. Like other cloud services, Microsoft only charges for the agent pool nodes that run.

Starting up Kubernetes in Azure

The simplest way to provision a new instance of an AKS cluster is to use Azure Cloud Shell, a browser-based command-line environment for working with Azure services and resources.

Azure Cloud Shell works like the Azure CLI, except it’s updated automatically and is available from a web browser. There are many service provider plug-ins enabled by default in the shell.

Azure Cloud Shell session
Starting a PowerShell session in the Azure Cloud Shell

Open Azure Cloud Shell at shell.azure.com. Choose PowerShell and sign in to the account with your Azure subscription. When the session starts, complete the provider registration with these commands:

az provider register -n Microsoft.Network
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Compute
az provider register -n Microsoft.ContainerService

Video: How to create a Kubernetes cluster on Azure.

Next, create a resource group, which will contain the Azure resources in the AKS cluster.

az group create --name AKSCluster --location centralus

Use the following command to create a cluster named AKSCluster1 that will live in the AKSCluster resource group with two associated nodes:

az aks create --resource-group AKSCluster --name AKSCluster1 --node-count 2 --generate-ssh-keys

Next, to use the Kubernetes command-line tool kubectl to control the cluster, get the necessary credentials:

az aks get-credentials --resource-group AKSCluster --name AKSCluster1

Next, use kubectl to list your nodes:

kubectl get nodes

Put the cluster into production with a manifest file

After setting up the cluster, load the applications. You’ll need a manifest file that dictates the cluster’s runtime configuration, the containers to run on the cluster and the services to use.

Developers can create this manifest file along with the appropriate container images and provide them to your operations team, who will import them into Kubernetes or clone them from GitHub and point the kubectl utility to the relevant manifest.

To get more familiar with Kubernetes in Azure, Microsoft offers a tutorial to build a web app that lets people vote for either cats or dogs. The app runs on a couple of container images with a front-end service.
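Deploying a manifest then comes down to a few kubectl commands. A minimal sketch, assuming the manifest is saved as azure-vote.yaml with a front-end service named azure-vote-front, the names the Microsoft voting-app tutorial uses:

kubectl apply -f azure-vote.yaml
kubectl get pods
kubectl get service azure-vote-front --watch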

What is Windows Admin Center (formerly Project Honolulu)?

Windows Admin Center, formerly Microsoft Project Honolulu, is a browser-based utility that manages Windows Server and client operating systems, hyper-converged clusters, and failover clusters.

Windows Admin Center is a free administrative tool from Microsoft that consolidates a number of management applications on top of a browser-based HTML5 front end. Microsoft only supports Windows Admin Center on Google Chrome and Microsoft Edge browsers. While Windows Admin Center runs in a browser, it does not require internet access to operate.

From a single interface, the Windows Admin Center dashboard shows performance of a cluster and resource utilization on a server. Windows Admin Center also handles a number of administrative tasks, from registry edits to replicating virtual machines (VMs) to Azure. Windows Admin Center also manages Windows Server VMs in other cloud platforms, such as Amazon Web Services (AWS).

Microsoft developed Windows Admin Center after feedback from server administrators indicated other management tools, such as PowerShell, lacked features geared toward analysis, such as data visualization, or simple configuration adjustments.

Microsoft released a precursor to Windows Admin Center in early 2016 called Server Management Tools (SMT). SMT was a remote server management tool that required access to the Azure cloud platform to function. While customers liked SMT, they found the Azure requirement to be a detriment to local server management. Microsoft retooled SMT as an on-premises utility and unveiled it as Project Honolulu in September 2017 as a technical preview.

On April 12, 2018, Microsoft renamed Project Honolulu to Windows Admin Center and made it generally available, offering support for it in production settings.

Tasks Windows Admin Center performs

Microsoft promotes Windows Admin Center as a complementary tool to Remote Server Administration Tools (RSAT), Microsoft Management Console (MMC), System Center and the Operations Management Suite. Windows Admin Center cannot yet manage certain roles, such as Active Directory, Dynamic Host Configuration Protocol, domain name system and Internet Information Services.

In addition to its built-in applications, Windows Admin Center features a PowerShell console for administrators who need to run scripts.

Windows Admin Center is a remote server management tool from Microsoft that consolidates a number of administrative applications in a web browser interface.

Windows Admin Center can handle a series of administrative tasks, including certificate management, firewall administration, local user and group setups, network setting monitoring, registry edits, Windows services management, roles and features control, virtual switch and Hyper-V VM administration, process management, storage handling, and Windows Update management. Windows Admin Center also runs Remote Desktop, Event Viewer and File Explorer.

The Windows Admin Center Server Manager dashboard generates resource utilization graphs for systems, such as CPU usage and storage capacity.

What operating systems can Windows Admin Center manage?

Windows Admin Center manages Windows Server 2012 and newer in the Long-Term Servicing Channel and all Windows Server releases in the Semi-Annual Channel.

Microsoft’s documentation states it will optimize Windows Admin Center for Windows Server 2019, due for general availability sometime in the second half of 2018. According to the company, Windows Admin Center will assist administrators in hyper-converged and hybrid cloud deployments.

[embedded content]

How to deploy and set up Windows Admin Center

On the Windows client side, Windows Admin Center supports Windows 10 version 1709 and newer.

While extended support for Windows Server 2008 and 2008 R2 does not end until January 2020, Windows Admin Center cannot manage those systems because it requires PowerShell functionality developed after Microsoft released those versions.

Setup and requirements

Administrators can install Windows Admin Center on supported Windows Server versions and Windows 10 in gateway mode, and on a Windows 10 machine in desktop mode.

As a gateway server, Windows Admin Center runs as a network service that enables multiple users to manage Windows Server on non-Windows devices, such as a Linux-based workstation. IT workers who need to manage systems remotely can use the gateway server over the internet.
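Installation is a standard MSI. Here is a minimal sketch of an unattended gateway install, assuming the documented SME_PORT and SSL_CERTIFICATE_OPTION installer parameters; the filename is a placeholder for the downloaded package:

msiexec /i WindowsAdminCenter.msi /qn /L*v install.log SME_PORT=443 SSL_CERTIFICATE_OPTION=generate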

Windows Admin Center uses PowerShell remoting or Windows Management Instrumentation over the Windows Remote Management (WinRM) service to manage systems.

Windows Admin Center doesn’t use agents, but it does require Windows Management Framework version 5.1 or higher on target systems.
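Before adding a target server, it’s worth confirming these prerequisites from PowerShell on that system; a minimal sketch:

# Confirm Windows Management Framework 5.1 or higher
$PSVersionTable.PSVersion

# Ensure PowerShell remoting is enabled and the WinRM service responds
Enable-PSRemoting -Force
Test-WSMan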

Role-based access control provides restrictions

Windows Admin Center uses role-based access control to restrict features from certain users. The application currently supports three roles.

  • Administrators: This role gives the user access to most Windows Admin Center tools without the need for PowerShell or Remote Desktop access.
  • Hyper-V administrators: This role lets the user adjust only Hyper-V VMs and switches.
  • Readers: These users can see server settings and information, but they cannot make changes.

Important features

The Server Manager section contains many of the MMC applications administrators need to manage VMs or troubleshoot a problem in a physical server, such as Event Viewer and Device Manager.

The Failover Cluster Manager (MSFCM) enables administrators to run certain tasks on Hyper-V failover clusters, such as live migrations.

The Hyper-Converged Cluster Manager features tools to manage Hyper-V hosts that run a Storage Spaces Direct (S2D) cluster. Before Windows Admin Center, the only way to configure S2D was through System Center or PowerShell.

The Extension Manager section features plug-ins from Microsoft that extend the functionality of Windows Admin Center.

The company plans to release a software development kit (SDK) for third parties to build their own plug-ins. With an SDK, vendors can develop an extension for Windows Admin Center that offers monitoring insights, such as indicating when a drive is about to fail in a hyper-converged appliance.

What is the Windows event log?

The Windows event log is a detailed record of system, security and application notifications stored by the Windows operating system that is used by administrators to diagnose system problems and predict future issues.

Applications and the operating system (OS) use these event logs to record important hardware and software actions that the administrator can use to troubleshoot issues with the operating system. The Windows operating system tracks specific events in its log files, such as application installations, security management, system setup operations on initial startup, and problems or errors.

The elements of a Windows event log

Each event log entry contains the following information:

Date: The date the event occurred.

Time: The time the event occurred.

User: The username of the user logged onto the machine when the event occurred.

Computer: The name of the computer.

Event ID: A Windows identification number that specifies the event type.

Source: The program or component that caused the event.

Type: The type of event, including information, warning, error, security success audit or security failure audit.

For example, here is how information, warning, error and critical events might appear (the columns show the level, the date and time, the source, the event ID and the task category):

Information   5/16/2018 8:41:15 AM    Service Control Manager   7036   None
Warning       5/11/2018 10:29:47 AM   Kernel-Event Tracing      1      Logging
Error         5/16/2018 8:41:15 AM    Service Control Manager   7001   None
Critical      5/11/2018 8:55:02 AM    Kernel-Power              41     (63)

The type of information stored in Windows event logs

The Windows operating system records events in five areas: application, security, setup, system and forwarded events. Windows stores event logs in the C:\WINDOWS\system32\config folder.

Application events relate to incidents with the software installed on the local computer. If an application such as Microsoft Word crashes, then the Windows event log will create a log entry about the issue, the application name and why it crashed.

[Embedded video: Configure a centralized Windows Server 2016 event log subscription.]

Security events store information based on the Windows system’s audit policies, and the typical events stored include login attempts and resource access. For example, the security log stores a record when the computer attempts to verify account credentials when a user tries to log on to a machine.

Setup events include enterprise-focused events relating to the control of domains, such as the location of logs after a disk configuration.

System events relate to incidents on Windows-specific systems, such as the status of device drivers.

Forwarded events arrive from other machines on the same network when an administrator wants to use a computer that gathers multiple logs.

Using the Event Viewer

Microsoft includes the Event Viewer in its Windows Server and client operating system to view Windows event logs. Users access the Event Viewer by clicking the Start button and entering Event Viewer into the search field. Users can then select and inspect the desired log.

[Screenshot: The Event Viewer application in the Windows operating system]

Windows categorizes every event with a severity level. The levels in order of severity are information, warning, error and critical.

Most logs consist of information-based events. Logs with this entry usually mean the event occurred without incident or issue. An example of a system-based information event is Event 42, Kernel-Power, which indicates the system is entering sleep mode.

Warning-level events flag particular conditions, such as low storage space, and bring attention to potential issues that might not require immediate action. Event 51, Disk is an example of a system-based warning related to a paging error on the machine's drive.

An error level indicates a device may have failed to load or operate as expected. Event 5719, NETLOGON is an example of a system error when a computer cannot configure a secure session with a domain controller.

Critical level events indicate the most severe problems. Event ID 41, Kernel-Power is an example of a critical system event when a machine reboots without a clean shutdown.

Other tools to view Windows event logs

Microsoft also provides the wevtutil command-line utility in the System32 folder that retrieves event logs, runs queries, exports logs, archives logs and clears logs.
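As a rough illustration, these wevtutil subcommands cover the tasks above (the backup path is a placeholder):

wevtutil el                                  ## enumerate the names of all logs
wevtutil qe System /c:5 /rd:true /f:text     ## query the five newest System events as text
wevtutil epl System C:\Backup\system.evtx    ## export the System log to a file
wevtutil cl Application                      ## clear the Application log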

Third-party utilities that also work with Windows event logs include SolarWinds Log & Event Manager, which provides real-time event correlation and remediation; file integrity monitoring; USB device monitoring; and threat detection. Log & Event Manager automatically collects logs from servers, applications and network devices.

ManageEngine EventLog Analyzer builds custom reports from log data and sends real-time text message and email alerts based on specific events.

Using PowerShell to query events

Microsoft builds Windows event logs in Extensible Markup Language (XML) format with an EVTX extension. XML provides more granular information and a consistent format for structured data.

Administrators can build complicated XML queries with the Get-WinEvent PowerShell cmdlet to add or exclude events from a query.
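For instance, a minimal sketch of an XML filter that pulls error events (Level=2) written to the System log in the last 24 hours might look like this:

$query = @"
<QueryList>
  <Query Id="0" Path="System">
    <Select Path="System">
      *[System[(Level=2) and TimeCreated[timediff(@SystemTime) &lt;= 86400000]]]
    </Select>
  </Query>
</QueryList>
"@
Get-WinEvent -FilterXml ([xml]$query)

For simpler criteria, the -FilterHashtable parameter avoids XML altogether:

Get-WinEvent -FilterHashtable @{ LogName = 'System'; Level = 2; StartTime = (Get-Date).AddDays(-1) }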

Mind the feature gaps in the PowerShell open source project


Even though the current version of the PowerShell open source project lacks much of the functionality of Windows PowerShell, administrators can close those gaps with a few adjustments.


Microsoft released the first version of its cross-platform, open source version of PowerShell in January 2018. This version, officially called PowerShell Core 6.0, is not a straight swap for Windows PowerShell 5.1, even though it’s forked from the 5.1 code base and adapted for use on Windows, many Linux distributions and macOS.

The key difference is Windows PowerShell 5.1 and earlier versions run on the full .NET Framework for Windows, whereas the PowerShell open source project relies on .NET Core, which does not have access to the same .NET classes.

Differences between Windows PowerShell and PowerShell Core

Many PowerShell 5.1 features aren’t available in PowerShell Core 6, including the Integrated Scripting Environment (ISE), PowerShell workflows, Windows Management Instrumentation (WMI) cmdlets, event log cmdlets, performance counter cmdlets and the Pester module.

The large collection of PowerShell modules that work with Windows PowerShell is not available in PowerShell Core 6. Any binary PowerShell module compiled against the full .NET Framework won't work in the PowerShell open source project, including the Active Directory module and the Exchange module.


PowerShell Core 6 brings some useful features. The first is the ability to administer Linux and macOS systems using PowerShell. The depth of cmdlets for Windows systems is not available on the non-Windows platforms, but the PowerShell community might fill those holes through the PowerShell Gallery.

Secondly, PowerShell 6 introduced remoting over Secure Shell (SSH) as opposed to remoting with the Web Services-Management protocol. This enables remoting to and from Linux systems and provides an easy way to remote to and from non-domain Windows machines.

Installing and configuring SSH on Windows is getting easier, and the inclusion of SSH as an optional feature in the latest Windows 10 and Windows Server previews will hopefully lead to a simpler remoting installation and configuration process.
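As a quick sketch of SSH-based remoting (the hostname and username here are hypothetical, and the target needs OpenSSH with the PowerShell subsystem configured):

## Interactive session to a Linux host over SSH
Enter-PSSession -HostName linuxserver01 -UserName admin

## The same -HostName and -UserName parameters work with Invoke-Command
Invoke-Command -HostName linuxserver01 -UserName admin -ScriptBlock { Get-Process }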

How to surmount functionality obstacles

You can overcome some of the missing features in PowerShell Core 6, starting with the ISE. Use Visual Studio (VS) Code instead of the ISE to develop scripts. VS Code is an open source editor that runs on Windows, Linux and macOS, and it can use either Windows PowerShell or PowerShell Core in its integrated terminal when both are installed.

PowerShell workflows, which allow functions to run on several machines at the same time, are never going to be a part of PowerShell Core because this feature is difficult to implement. Instead, you can use PowerShell runspaces to provide parallel processing. While they aren’t easy to code, a proposal exists to create cmdlets for managing runspaces.

An example of a simple PowerShell workflow is:

workflow test1 {
  parallel {
    Get-CimInstance -ClassName Win32_OperatingSystem
    Get-CimInstance -ClassName Win32_ComputerSystem
  }
}

test1

Figure 1 shows the results of this script.

Figure 1. Running a workflow on Windows PowerShell 5.1 in the PowerShell Integrated Scripting Environment

Emulating a PowerShell workflow with runspaces results in the following code:

## Create a runspace pool with a minimum of 1 and a maximum of 5 runspaces
$rp = [runspacefactory]::CreateRunspacePool(1,5)
$rp.Open()

$cmds = New-Object -TypeName System.Collections.ArrayList

1..2 | ForEach-Object {
    ## Create a PowerShell instance and link it to the runspace pool
    $psa = [powershell]::Create()
    $psa.RunspacePool = $rp

    if ($_ -eq 1) {
        [void]$psa.AddScript({ Get-CimInstance -ClassName Win32_OperatingSystem })
    }
    else {
        [void]$psa.AddScript({ Get-CimInstance -ClassName Win32_ComputerSystem })
    }

    ## Start the pipeline asynchronously and keep the handle for later retrieval
    $handle = $psa.BeginInvoke()
    $temp = '' | Select-Object PowerShell, Handle
    $temp.PowerShell = $psa
    $temp.Handle = $handle
    [void]$cmds.Add($temp)
}

## View progress
$cmds | Select-Object -ExpandProperty Handle

## Retrieve the data
$cmds | ForEach-Object { $_.PowerShell.EndInvoke($_.Handle) }

## Clean up
$cmds | ForEach-Object { $_.PowerShell.Dispose() }
$rp.Close()
$rp.Dispose()

Figure 2 shows this code running on PowerShell Core 6 in the VS Code editor.

Figure 2. Executing code with runspaces in PowerShell Core 6 in the VS Code editor

Runspace code works in Windows PowerShell 5.1 and PowerShell Core 6. Administrators can also simplify runspaces with the PoshRSJob module from the PowerShell Gallery. The latest version works in PowerShell Core 6 on Linux and Windows.
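A minimal sketch with PoshRSJob, assuming the module has been installed from the PowerShell Gallery, replaces the raw runspace plumbing above:

## One-time setup: Install-Module -Name PoshRSJob
Start-RSJob -ScriptBlock { Get-CimInstance -ClassName Win32_OperatingSystem }
Start-RSJob -ScriptBlock { Get-CimInstance -ClassName Win32_ComputerSystem }

## Wait for the jobs, collect their output and clean up
Get-RSJob | Wait-RSJob | Receive-RSJob
Get-RSJob | Remove-RSJob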

The developers of the PowerShell open source project plan to add missing WMI, event log and performance cmdlets into the Windows Compatibility Pack for .NET Core. WMI cmdlets are effectively deprecated in favor of the Common Information Model (CIM) cmdlets, which are available in PowerShell Core 6 and Windows PowerShell 3 and newer.
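The switch from WMI to CIM cmdlets is usually mechanical; for example:

## Windows PowerShell only: the WMI cmdlets depend on the full .NET Framework
Get-WmiObject -Class Win32_BIOS

## Works in Windows PowerShell 3.0 and newer as well as PowerShell Core 6
Get-CimInstance -ClassName Win32_BIOS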

[Embedded video: Showing PowerShell Core's cross-platform capabilities]

Event log cmdlets only work with the traditional event logs. If you need to work with the newer application and services logs, use the Get-WinEvent cmdlet, which also works with the old-style logs.

The following command uses the older Get-EventLog cmdlet:

Get-EventLog -LogName System -Newest 5

It’s easy enough to switch to the Get-WinEvent cmdlet to get similar results:

Get-WinEvent -LogName System -MaxEvents 5

The Get-WinEvent cmdlet can't clear, limit, write to, or create and remove classic event logs, but you can configure event logs through the properties and methods of the objects returned by the following command:

Get-WinEvent -ListLog *
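For example, one way to raise the maximum size of the Application log through those objects might be (this requires an elevated session):

## Retrieve the configuration object for the Application log
$log = Get-WinEvent -ListLog Application

## Raise the maximum log size and persist the change
$log.MaximumSizeInBytes = 64MB
$log.SaveChanges()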

In Windows PowerShell 5.1, you can run the following command to access performance counters:

Get-Counter -Counter '\Processor(*)\% Processor Time'

This generates the following output:

Timestamp                 CounterSamples
---------                 --------------
31/03/2018 15:18:34       \\w510w10\processor(0)\% processor time :
                          1.56703775955929

                          \\w510w10\processor(1)\% processor time :
                          0.00460978748879626

                          \\w510w10\processor(2)\% processor time :
                          1.56703775955929

                          \\w510w10\processor(3)\% processor time :
                          0.00460978748879626

                          \\w510w10\processor(4)\% processor time :
                          0.00460978748879626

                          \\w510w10\processor(5)\% processor time :
                          4.69189370370026

                          \\w510w10\processor(6)\% processor time :
                          3.12946573162978

                          \\w510w10\processor(7)\% processor time :
                          0.00460978748879626

                          \\w510w10\processor(_total)\% processor time :
                          1.37173676293523

The alternative way to return performance counter data is with CIM classes, which work in Windows PowerShell 5.1 and PowerShell Core 6:

Get-CimInstance -ClassName Win32_PerfFormattedData_PerfOS_Processor | Select-Object Name, PercentProcessorTime

 

Name   PercentProcessorTime
----   --------------------
_Total                   13
0                        50
1                         0
2                         0
3                        43
4                         6
5                         6
6                         0
7                         0

PowerShell Core 6 can access many of the older PowerShell modules, such as the networking or storage modules available on Windows 8 or Windows Server 2012 and later. To do this, add the following entry to your PowerShell profile so the module path includes the Windows PowerShell 5.1 modules folder:

$env:PSModulePath = $env:PSModulePath + ';C:\Windows\System32\WindowsPowerShell\v1.0\Modules'

Another way to do this is to import the module directly:

Import-Module C:\Windows\System32\WindowsPowerShell\v1.0\Modules\NetAdapter\NetAdapter.psd1

Get-NetAdapter

 

Name                      ifIndex Status       MacAddress          LinkSpeed
----                      ------- ------       ----------          ---------
vEthernet (nat)                16 Up           00-15-5D-82-CF-92     10 Gbps
vEthernet (DockerNAT)          20 Up           00-15-5D-36-C9-37     10 Gbps
vEthernet (Default Swi…         9 Up           2E-15-00-2B-41-72     10 Gbps
vEthernet (Wireless)           22 Up           F0-7B-CB-A4-30-9C  144.5 Mbps
vEthernet (LAN)                12 Up           00-15-5D-36-C9-11     10 Gbps
Network Bridge                 24 Up           F0-7B-CB-A4-30-9C  144.5 Mbps
LAN                            14 Disconnected F0-DE-F1-00-3F-67       0 bps
Wireless                        8 Up           F0-7B-CB-A4-30-9C  144.5 Mbps

In the case of binary modules, such as Active Directory, you’ll have to revert to scripting when using the PowerShell open source version. If you want to administer Linux machines using PowerShell Core 6, you’ll have to do a lot of scripting or wait for the community to create the functionality.