Category Archives: Expert advice on Windows-based systems and hardware

Manage Hyper-V containers and VMs with these best practices

Containers and VMs should be treated as the separate instance types they are, but admins should incorporate the management strategies that work for both.


Containers and VMs are best suited to different workload types, so it makes sense that IT administrators would use both in their virtual environments, but that adds another layer of complexity to consider.

One of the most notable features introduced in Windows Server 2016 was support for containers. At the time, it seemed that the world was rapidly transitioning away from VMs in favor of containers, so Microsoft had little choice but to add container support to its flagship OS.

Today, organizations use both containers and VMs. But for admins who use a mixture, what’s the best way to manage Hyper-V containers and VMs?

To understand the management challenges of supporting both containers and VMs, admins need to understand a bit about how Windows Server 2016 works. From a VM standpoint, Windows Server 2016 Hyper-V isn’t that different from the version of Hyper-V included with Windows Server 2012 R2. Microsoft introduced a few new features, as with every new release, but the tools and techniques used to create and manage VMs were largely unchanged.

In addition to being able to host VMs, Windows Server 2016 includes native support for two different types of containers: Windows Server containers and Hyper-V containers. Windows Server containers and the container host share the same kernel. Hyper-V containers differ from Windows Server containers in that Hyper-V containers run inside a special-purpose VM. This enables kernel-level isolation between containers and the container host.
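The distinction is visible at the container command line, where the isolation mode is chosen at startup. As a rough sketch (the image name is illustrative, and both commands assume a Windows container host running Docker):

# Shared-kernel Windows Server container
docker run --isolation=process mcr.microsoft.com/windows/nanoserver cmd /c ver

# Same image as a kernel-isolated Hyper-V container
docker run --isolation=hyperv mcr.microsoft.com/windows/nanoserver cmd /c ver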

Hyper-V management

When Microsoft created Hyper-V containers, it faced something of a quandary with regard to the management interface.

The primary tool for managing Hyper-V VMs is Hyper-V Manager — although PowerShell and System Center Virtual Machine Manager (SCVMM) are also viable management tools. This has been the case ever since the days of Windows Server 2008. Conversely, admins in the open source world used containers long before they ever showed up in Windows, and the Docker command-line interface has become a standard for container management.

Ultimately, Microsoft chose to support Hyper-V Manager as a tool for managing Hyper-V hosts and Hyper-V VMs, but not containers. Likewise, Microsoft chose to support the use of Docker commands for container management.

Management best practices

Although Hyper-V containers and VMs both use the Hyper-V virtualization engine, admins should treat containers and VMs as two completely different types of resources. While it’s possible to manage Hyper-V containers and VMs through PowerShell, most Hyper-V admins seem to prefer using a GUI-based management tool for managing Hyper-V VMs. Native GUI tools, such as Hyper-V Manager and SCVMM, don’t support container management.
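In practice, that means running two toolchains side by side. A minimal sketch of the split, with placeholder names:

# VMs: the Hyper-V PowerShell module (alongside Hyper-V Manager or SCVMM)
Get-VM | Select-Object Name, State, CPUUsage, MemoryAssigned
Start-VM -Name 'WebServer01'

# Containers: the Docker command-line interface
docker ps --all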


Admins who wish to manage their containers through a GUI should consider using one of the many interfaces that are available for Docker. Kitematic is probably the best-known of these interfaces, but there are third-party GUI interfaces for containers that arguably provide a better overall experience.

For example, Datadog offers a dashboard for monitoring Docker containers. Another particularly nice GUI interface for Docker containers is DockStation.

Those who prefer an open source platform should check out the Docker Monitoring Project. This monitoring platform is based on the Kubernetes dashboard, but it has been adapted to work directly with Docker.

As admins work to figure out the best way to manage Hyper-V containers and VMs, it’s important for them to remember that both depend on an underlying host. Although Microsoft doesn’t provide any native GUI tools for managing VMs and containers side by side, admins can use SCVMM to manage all manner of Hyper-V hosts, regardless of whether those servers are hosting Hyper-V VMs or Hyper-V containers.

Admins who have never worked with containers before should spend some time experimenting with containers in a lab environment before attempting to deploy them in production. Although containers are based on Hyper-V, creating and managing containers is nothing like setting up and running Hyper-V VMs. A great way to get started is to install containers on Windows 10.
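On Windows 10, the prerequisite features can be enabled from an elevated PowerShell prompt. This is a minimal sketch; a container runtime such as Docker for Windows still has to be installed separately:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
Enable-WindowsOptionalFeature -Online -FeatureName Containers -All
Restart-Computer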


What are the Microsoft IRM requirements for Exchange 2016?




Before a company can take advantage of information rights management on the Exchange 2016 platform, administrators must ensure the server setup meets the technical requirements.


While Exchange Server 2016 has information rights management turned on by default, the messaging platform must meet several prerequisites before it can implement Microsoft IRM.

The information rights management (IRM) functionality in Exchange 2016 gives users a way to restrict the actions a recipient can perform with sensitive email and documents. Microsoft IRM offers a way to limit the potential for misuse of confidential material.

Microsoft IRM requirements for Exchange 2016

Microsoft IRM requires an Active Directory Rights Management Services (AD RMS) cluster. The cluster can consist of a single AD RMS server unless the organization needs additional servers for high availability and load balancing.

Exchange relies on the service connection point registered with AD to detect and interact with the cluster. The AD RMS cluster needs read and execute permissions assigned to the Exchange servers. Finally, the administrator must add a federation mailbox to the Super Users group on the AD RMS server to provide features such as transport decryption, journal decryption, IRM in Outlook on the web and IRM decryption for Exchange search.
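In the Exchange Management Shell, the broad strokes look like the following sketch; the sender address is a placeholder for a test mailbox in your organization:

# Enable internal licensing against the AD RMS cluster registered in AD
Set-IRMConfiguration -InternalLicensingEnabled $true

# Verify the IRM configuration end to end for a test sender
Test-IRMConfiguration -Sender administrator@contoso.com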


Next, a Microsoft IRM system needs an Exchange 2016 deployment, although versions as old as Exchange 2010 may also work. Exchange and AD RMS require separate servers.

Finally, the IRM deployment needs an email client. Outlook is the most common client, and Outlook versions as old as Outlook 2007 can support the AD RMS templates that employees use to apply IRM permissions to messages and attachments. Users with mobile devices on ActiveSync version 14.1 or later can view, reply to, forward and create messages with Microsoft IRM protection.

Extend IRM to protect less common formats

Microsoft IRM supports email and typical Microsoft Office file formats such as Word and Excel, but it can be extended to other file formats through custom protectors that convert other file types into IRM formats. Administrators must register each new Microsoft IRM protector. Administrators can stipulate which file types the protector can convert.


How do you use Outlook IRM to protect email content?




The information rights management feature in Exchange prevents unauthorized parties from viewing sensitive content. Admins can tailor the settings to fit the company’s needs.


While Microsoft enables information rights management by default in Exchange 2016, using this feature requires administrative intervention before users can use Outlook IRM to protect email.

Exchange information rights management (IRM) enables users to control access to documents and email. Users can manually apply IRM to messages with Active Directory Rights Management Services (AD RMS) templates in the Outlook client or with Outlook on the web. After an administrator enables IRM in Outlook for the web, users can select an IRM template in the email creation dialog to protect outgoing messages and attachments or receive incoming content that is already protected by IRM.

IRM works with ActiveSync to enable users to create, view, forward and reply to IRM-protected messages across ActiveSync devices. Any ActiveSync device with IRM enabled — even non-Windows devices — can use IRM without the need to configure AD RMS permissions or connect to an IRM-enabled computer.

Outlook IRM features depend on the user

Although users can manually protect individual messages with Outlook IRM functionality, administrators can set up IRM automatically using protection rules. Administrators deploy these rules to Outlook clients, which apply them automatically each time a user creates a new message to meet business governance and compliance needs. Even if a user forgets to apply Outlook IRM, important messages and attachments remain protected by these automatic rules.


To protect every message and attachment in an Exchange mailbox server automatically, administrators can create transport rules, also called mail flow rules, that search messages for specified conditions and apply IRM accordingly. For example, if a user applies the Do Not Forward IRM template to a message, only the intended recipient can read it. IRM prevents the recipient from forwarding, copying or printing the content.
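As a hedged sketch of such a rule in the Exchange Management Shell (the rule name and keyword are placeholders, and the template name must match one published by your AD RMS cluster):

New-TransportRule -Name 'Protect confidential mail' -SubjectOrBodyContainsWords 'Confidential' -ApplyRightsProtectionTemplate 'Do Not Forward'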

Exchange detects pre-existing IRM protection in messages and will not apply further protection rules if an Outlook user chooses to add protection to a message.

Be aware of IRM shortcomings

Using IRM does not guarantee absolute protection. Legitimate recipients can use screen capture tools to save content.

[Video: How to set permissions for email]

Certain designated internal staff can decrypt and access message content. Organizations rely on internal auditors or investigators who need to search IRM-protected content to adhere to regulatory compliance needs, litigation requirements, regulatory audits or internal investigations.

Administrators can use transport agents for investigative decryption on Exchange servers, copy content to journaling reports and use In-Place eDiscovery searches to check for legal discovery evidence. However, these capabilities require the addition of a federation mailbox to the Super Users group on the AD RMS server.


Authenticating email in Exchange for brand protection


With help from the combined use of the SPF, DKIM and DMARC technologies, Exchange administrators can curb email spoofing to protect users and the company brand.


It only takes one user clicking on a phishing email to disrupt a company — and to damage its reputation. But administrators can utilize technologies for authenticating emails in Exchange to stop these malicious attacks.

An enterprise that wants to prevent a security breach can implement Exchange email authentication protocols in tandem with the platform’s encryption features to protect the company’s reputation.

Email brand protection keeps malicious actors from using your company name for some disreputable scheme. Brand abuse occurs on a regular basis. Let’s look at some examples.

  • Have you ever received an email from your credit card company, but the language or wording wasn’t quite right? You might look closely at the sender address and notice it didn’t come from your credit card company.
  • Has your CEO ever received an email requesting money from what looks like your accounting department? Again, the language or format of the message probably made it very clear this wasn’t an internal message, but the fact that some external party sent it and your CEO received it is a problem.
  • Has a user clicked a link in an email that took them to a website where they filled in personal information only to find out the site was fake?

These are only a few examples of how a person outside of your organization can send an email that abuses your company brand. To thwart these attempts, your technical teams can employ technologies for authenticating email — specifically SPF, DKIM and DMARC.

Get started with SPF

We’re not talking about the sun protection found in sunscreen; SPF stands for Sender Policy Framework. SPF is a domain name system (DNS) TXT record entry that can be added to your external DNS. SPF is a great step toward brand protection because it can detect address spoofing.


Your SPF TXT record should include an entry for your organization and the IP address and DNS name of any third party allowed to send email with your domain name. If your SPF TXT record is accurate, then this is one step toward allowing legitimate email to flow and blocking the messages that could harm your brand.

However, there are some limitations with SPF TXT records. A record can trigger no more than 10 DNS lookups, so it’s helpful to see what other brand protection options are available, because SPF records reach that limit quickly.
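To see what a domain currently publishes, query the TXT record from PowerShell. Here, contoso.com is a placeholder for your domain, and the record in the comment is only an example of the general shape:

Resolve-DnsName -Name contoso.com -Type TXT | Where-Object Strings -like 'v=spf1*'
# Example record: "v=spf1 mx ip4:203.0.113.25 include:spf.thirdparty.example -all"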

How DKIM signatures stop spoofing

DomainKeys Identified Mail (DKIM) signatures place a domain-based signature in the message header that identifies the message as genuinely originating from your domain, which helps prevent email spoofing attempts. A DKIM signature offers additional brand protection with the proper setup.

[Video: How to set up DMARC in Office 365]

To set up DKIM for authenticating email, your technical team needs to enable DKIM signatures in the external email gateway. From there, the system generates a DKIM signature that you should set up in your external DNS. This setup in both areas proves that the DKIM signature in your message header belongs to your organization.

Third-party companies that you allow to send email as your organization can also use DKIM signatures. The company just needs to generate a DKIM signature for the messages that they will send under your domain, then your administrators need to add it to the external DNS. Not all third-party cloud providers offer this, so be sure to ask about it.
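The published public key can be verified the same way as an SPF record. The selector name below is an assumption; each gateway or provider defines its own:

Resolve-DnsName -Name selector1._domainkey.contoso.com -Type TXT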

Last, but not least, we have DMARC

Domain-based Message Authentication, Reporting and Conformance (DMARC) is the key ingredient that enables SPF and DKIM to work at their highest level.

When a DMARC external DNS record is in place, it gives the organization a way to report on and understand the level of brand abuse. This reporting shows both valid messages and the brand abuse messages that would not be visible otherwise.

With DMARC enabled, you get the flexibility to use either DKIM or SPF: a message passes if it passes either check. Given the limitations of SPF records, the ability to rely on DKIM instead is a great option.
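A DMARC policy is just another TXT record, published at _dmarc.<your domain>. As an illustrative example (the domain and report address are placeholders), a report-only policy that is useful during testing looks like this:

Resolve-DnsName -Name _dmarc.contoso.com -Type TXT
# Example record: "v=DMARC1; p=none; rua=mailto:dmarc-reports@contoso.com"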

Be aware of potential issues with authenticating email measures

The combined use of SPF, DKIM and DMARC offers the highest level of brand protection.

Authenticating email can lead to some harmful side effects, so be sure to test your setup. You can configure DMARC and SPF for detection only to determine any issues that might occur when they are fully implemented. I strongly encourage you to use a third-party reporting tool to clarify why certain messages are stopped if you use SPF and DMARC.

A good tool can help you add valid messages to your SPF record and DKIM signatures prior to enforcing DMARC. Take a measured testing approach to prevent business user impact.


What are the Windows Defender management tools?




If you’re using Windows Defender AV to protect your company, it’s imperative to configure the malware protection properly. This tip lays out the management options for admins.


Malware never sleeps, posing a significant problem for enterprises that can’t abide any downtime. As the last line of defense, administrators need to ensure they have a strong grasp on Windows Defender management.

Malware presents a serious risk for data loss, data theft, and possible breaches in regulatory compliance and business governance. Windows Defender Antivirus (AV) protects endpoints and servers in Windows-based organizations from these attacks. Proper Windows Defender management requires administrators to have the right tools and procedures to secure the company’s systems.

Tools for Windows Defender management

IT administrators can use System Center Configuration Manager (SCCM) to deploy Windows Defender AV through the endpoint protection point site system role, then enable endpoint protection with custom client settings. This provides access to both default and customized antimalware policies and endpoint system management. The default Configuration Manager monitoring features and alerts handle reporting.

Administrators can deploy and manage Windows Defender AV using Microsoft Intune. Custom Intune policies from the Intune console handle task management and endpoint monitoring and reporting.


Administrators can use Windows Management Instrumentation (WMI) for Windows Defender AV management via Group Policy, SCCM or individual endpoint installation. Administrators familiar with WMI can use Set in the MSFT_MpPreference class and Update in the MSFT_MpSignature class for management and use the MSFT_MpComputerStatus class for reporting.
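For example, a quick status check through CIM, which is the PowerShell-friendly way to reach these WMI classes:

Get-CimInstance -Namespace root/Microsoft/Windows/Defender -ClassName MSFT_MpComputerStatus | Select-Object AMServiceEnabled, RealTimeProtectionEnabled, AntivirusSignatureLastUpdated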

Administrators who prefer to use PowerShell can use this tool for Windows Defender management in concert with Group Policy, SCCM or individual endpoint installation for configuration with the Set-MpPreference and Update-MpSignature cmdlets in the Windows Defender module. This module provides a series of Get cmdlets for reporting.
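A minimal sketch of that workflow from an elevated PowerShell prompt:

Set-MpPreference -DisableRealtimeMonitoring $false    # keep real-time protection on
Update-MpSignature                                    # pull the latest definitions
Get-MpComputerStatus | Select-Object AntivirusEnabled, AntivirusSignatureAge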

Finally, Group Policy and a domain-joined Active Directory environment can support Windows Defender management. A Group Policy Object (GPO) can be used to deploy and enable Windows Defender AV. GPOs can also manage configuration changes in AV, but reporting is not supported through Group Policy.


Taking stock of Windows Server management tools

The right Windows server management tools keep the business running with minimal interruptions. But administrators should be open to change as the company’s needs evolve.

Many organizations run on a mix of new and old technologies that complicates the maintenance workload of the IT staff. Administrators need to take stock of their systems and get a complete rundown of all the variables associated with the server operating systems under their purview. While it might not be possible to use one utility to run the entire data center, administrators must assess which tool offers the most value by weighing the capabilities of each.

For these everyday tasks, administrators have a choice of several Windows server management tools that come at no extra cost. Some have been around for years, while others recently emerged from development. The following guide helps IT workers understand why certain tools work well in particular scenarios.

Choose a GUI or CLI tool?

Windows server management tools come in two flavors: graphical user interface (GUI) and command-line interface (CLI).

Many administrators will admit it’s easier to work with a GUI tool because the interface offers point-and-click management without a need to memorize commands. A disadvantage to a GUI tool is the amount of time it takes to execute a command, especially if there are a large number of servers to manage.


Learning how to use and implement a CLI tool can be a slow process because it takes significant effort to learn the language. One other downside is many of these CLI tools were not designed to work together; the administrator must learn how to pipe output from one CLI tool to the next to develop a workflow.

A GUI tool is ideal when there are not many servers to manage, or for one-time or infrequent tasks. A CLI tool is more effective for performing a series of actions on multiple servers.
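The trade-off becomes obvious as soon as a task spans many machines. A hedged sketch, with a placeholder host list and service name:

$servers = Get-Content -Path C:\scripts\servers.txt
Invoke-Command -ComputerName $servers -ScriptBlock { Restart-Service -Name Spooler -PassThru }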


Windows Admin Center: A new management contender

Windows Admin Center, formerly Project Honolulu, is a GUI tool that combines local and remote server management tools in a single console for a consolidated administrative experience.

Windows Admin Center is one of Microsoft’s newer Windows server management tools that makes it easier to work with nondomain machines, particularly those running Server Core.

Windows Admin Center can only manage Windows systems and lacks the functionality IT workers have come to expect with the Remote Server Administration Tools application.

Administrators interested in using Windows Admin Center as one of their primary Windows server management tools should be aware of potential security issues before implementing it in their data center.


A venerable offering expands to new platforms

Now more than 10 years old, PowerShell is one of the key Windows server management tools due to its potent ability to manage multiple machines through scripting. The automation and configuration management tool is no longer just a Windows product: Microsoft converted it into an open source project. Microsoft initially called the new offering PowerShell Core, but now refers to it as just PowerShell. The open source version of PowerShell runs on Linux and macOS platforms. Microsoft supports Windows PowerShell but does not plan to add more features to it.

Administrators can use both PowerShell versions side by side, which might be necessary for some shops. At the moment, Windows PowerShell provides more functionality because certain features have yet to be ported to PowerShell Core.


Azure file shares service lifts admin storage concerns


Admins who have tired of managing traditional file shares can see if the Azure Files service works as a substitute for traditional data center storage.


Azure file shares have come a long way from an Azure-only storage resource to a full-featured file server in the cloud for multiple operating systems.

Azure Files creates server message block (SMB) shares on the Azure platform. This frees administrators from managing the underlying infrastructure providing those Azure file shares — updating Windows Server versions, patching the operating system, purchasing and maintaining the hardware, and handling most of the disk storage needs.

Admins who have heard of Azure Files might not consider it because its original purpose was to provide platform-as-a-service file sharing for other Azure resources or line of business applications within Azure virtual machines. In the latest release, however, Microsoft expanded Azure Files to support connections from outside Azure data centers.

To use Azure Files, administrators need only configure the SMB shares in the Azure portal and then access a Universal Naming Convention (UNC) path over the internet. Microsoft handles the rest of the file server administration.

Organizations pay a monthly storage fee for the service, which is currently 6 to 10 cents per gigabyte, and a small charge for various operations, such as file reads and directory listings.


Azure Files works on Windows 8.1, Windows 10, Windows Server 2012 R2, Windows Server 2016, macOS and Linux systems. Older versions of the Windows desktop client and Windows Server cannot connect to the SMB 3.0 file shares.

Creating the Azure file shares

Making a share in Azure is as easy as making an SMB share in Windows Server.

  1. Sign into the Azure portal.
  2. Use an existing storage account or create a new one, then access it.
  3. Click on the + File Share button.
  4. Give the file share a name and a quota — the maximum space available is 5 TB or 5,120 GB.
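The same share can also be created from PowerShell. The following is a sketch using the Az.Storage module; the resource group, account and share names are placeholders:

$ctx = (Get-AzStorageAccount -ResourceGroupName 'rg-files' -Name 'mystorageacct').Context
New-AzStorageShare -Name 'myshare' -Context $ctx
Set-AzStorageShareQuota -ShareName 'myshare' -Quota 5120 -Context $ctx    # quota in GB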

Mount the Azure file shares

Administrators can mount the Azure file shares, which are stored in an Azure storage account, on Windows, Linux or Mac systems and on virtual machines in the cloud or on premises.

[Screenshot: Use the Azure portal to create a new file share in the Azure Files service.]

The first step is to give Windows your Azure account credentials. This is most easily done using the cmdkey command-line utility. You’ll need your Azure storage account name, your domain name and the storage account key that the Azure portal generates when you first create a file share. The key will end in two equal signs (==). Run the following command with that information:

cmdkey /add:<storage-account-name>.file.core.windows.net /user:AZURE\<storage-account-name> /pass:<storage-account-key>

[Video: Construct a cloud file share with the Azure Files service.]

This will store your Azure credentials so that connections to the Azure file share can persist between machine reboots and user sessions. Next, mount the share in one of two ways:

  • Using File Explorer or Windows Explorer, select Map Network Drive and copy the UNC path shown in the Azure portal. Select a local drive letter to associate with the UNC path similar to how you map a network drive on your local file server.
  • Using the command prompt and the net use command, execute the following:

net use <drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name> <storage-account-key> /user:Azure\<storage-account-name>

How to set up a Linux machine

For Linux machines, make sure the cifs-utils package is installed. Then, create a directory to serve as the mount point.

mkdir /mnt/MyAzureFileShare

Then, mount the share.

sudo mount -t cifs //<storage-account-name>.file.core.windows.net/<share-name> /mnt/MyAzureFileShare -o vers=3.0,username=<storage-account-name>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino

Or create a permanent mount.

sudo bash -c 'echo "//<storage-account-name>.file.core.windows.net/<share-name> /mnt/MyAzureFileShare cifs nofail,vers=3.0,username=<storage-account-name>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino" >> /etc/fstab'

Adding Azure file shares to a Mac system

For macOS machines, disable SMB packet signing because Azure encrypts the connection end to end. Packet signing also hurts performance.

sudo -s
echo "[default]" >> /etc/nsmb.conf
echo "signing_required=no" >> /etc/nsmb.conf
exit

While you’re in the terminal, mount the share with this command:

mount_smbfs //<storage-account-name>@<storage-account-name>.file.core.windows.net/<share-name> <mount-point>

Then, enter your storage account key as your password and you’re finished.

Azure File Sync keeps shared data in order

To ensure Azure Files runs smoothly, Microsoft developed Azure File Sync to turn local file servers into caches of the master data repository in the Azure file share service. The sync service propagates changes made to local files up to Azure and vice versa.

Any applications that might have compatibility or performance issues can use the local file server, while the main repository is centralized in the Azure service. Based on the needs of your organization, you might have a cache/local file server in each branch office or one per country or continent.


Azure PaaS strategy homes in on hybrid cloud, containers


Microsoft’s PaaS offerings might have a leg up in terms of support for hybrid deployments, but the vendor still faces tough competition in a quickly evolving app-dev market.


The Azure PaaS portfolio continues to offer a compelling story for companies that need a development environment where legacy applications can move freely between on premises and the cloud. But even as the vendor increasingly embraces hybrid cloud, open source and emerging technologies, such as containers and IoT, it still faces tough competition from the likes of Google and AWS.

Strong foundation

Azure App Service is Microsoft’s flagship PaaS offering, enabling developers to build and deploy web and mobile applications in a variety of programming languages — without having to manage the underlying infrastructure.

But App Service represents just one of many services that Microsoft has rolled out over the years to help developers create, test, debug and extend application code. The company’s Visual Studio line, for example, now includes four product families: Visual Studio Integrated Development Environment, Visual Studio Team Services, Visual Studio Code and Visual Studio App Center, which includes connections to GitHub, Bitbucket and VSTS repositories to support continuous integration.

Microsoft has also created a vast independent software vendor and developer community, and has tightly integrated many of its development tools, according to Jeffrey Kaplan, managing director at THINKstrategies, Inc. Visual Studio and SQL Server, for example, support common design points and feature high levels of integration with App Service.

Microsoft’s Azure PaaS strategy is also unique in its focus on hybrid cloud deployments. Through its Hybrid Connections feature, for example, developers can build and manage cloud applications that access on-premises resources. What’s more, Azure App Service is also available for Azure Stack — Microsoft’s integrated hardware and software platform designed to bring Azure public cloud services to enterprises’ local data centers and simplify the deployment and management of hybrid cloud applications.

Missing pieces

But despite its broad portfolio and hybrid focus, Azure PaaS is not a panacea. While many traditional IT departments have embraced the offering, it hasn’t been as popular in business units, which now drive development initiatives in many organizations, according to Larry Carvalho, research director for IDC’s PaaS practice.

What’s more, organizations that don’t have a large footprint of legacy systems often prefer open source development tools, rather than tools like Visual Studio. Traditionally, Microsoft hasn’t offered support for open source technology as quickly as other cloud market leaders, such as AWS, according to Carvalho. This is likely because competitors like AWS are not weighed down by support for legacy products.

But while, historically, Microsoft’s business model has been antithetical to the open source approach, that’s started to change. The company has made an effort to embrace more open source technologies and recently purchased GitHub, a version control platform founded on the open source code management system Git.

The evolving face of PaaS

The PaaS landscape is evolving rapidly. Rather than traditional VMs, developers increasingly focus on containers, and interest in DevOps continues to rise. In an attempt to align with these trends, Microsoft now offers a managed Kubernetes service on its public cloud and recently added Azure Container Instances to enable developers to spin up new container workloads without having to manage the underlying server infrastructure.

Additionally, enterprises have a growing interest in application development for AI, machine learning and IoT platforms. And while Azure PaaS tools offer support for these technologies, Microsoft still needs to compete against fellow market leaders, AWS and Google — the latter of which has garnered a lot of attention for its development of TensorFlow, an open source machine learning framework.


How is the future of PowerShell shaping up?

Now that PowerShell is no longer just for Windows — and is an open source project — what are the long-term implications of these changes?

Microsoft technical fellow Jeffrey Snover developed Windows PowerShell based on the parameters in his “Monad Manifesto.” If you compare his document to the various releases of Windows PowerShell, you’ll see Microsoft produced a majority of Snover’s vision. But, now that this systems administration tool is an open source project, what does this mean for the future of PowerShell?

I’ve used PowerShell for more than 12 years and arguably have as good an understanding of PowerShell as anyone. I don’t know, or understand, where PowerShell is going, so I suspect that many of its users are also confused.

When Microsoft announced that PowerShell would expand to the Linux and macOS platforms, the company said it would continue to support Windows PowerShell, but would not develop new features for the product. Let’s look at some of the recent changes to PowerShell and where the challenges lie.

Using different PowerShell versions

While it’s not directly related to PowerShell Core being open source, one benefit is the ability to install different PowerShell versions side by side. I currently have Windows PowerShell v5.1, PowerShell Core v6.0.1 and PowerShell Core v6.1 preview 2 installed on the same machine. I can test code across all three versions using the appropriate console or Visual Studio Code.
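Because each install has its own executable, it’s easy to check which engine a script runs under. From an existing PowerShell prompt:

powershell.exe -NoProfile -Command '$PSVersionTable.PSVersion'    # Windows PowerShell 5.1
pwsh.exe -NoProfile -Command '$PSVersionTable.PSVersion'          # PowerShell Core 6.x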

[Screenshot: Windows PowerShell v5.1, PowerShell Core v6.0.1 and PowerShell Core v6.1 preview 2 running side by side on the same machine.]

How admins benefit from open source PowerShell

The good points of the recent changes to PowerShell include access to the open source project, a faster release cadence and community input.

Another point in favor of PowerShell’s move is that you can see the code. If you can read C#, you can use that skill to track down and report on bugs you encounter. If you have a fix for the problem, then you can submit it.

Studying the code can give you insight into how PowerShell works. What it won’t explain is why PowerShell works the way it does. Previously, Microsoft MVPs and very few other people had access to Windows PowerShell code; now that the PowerShell Core code is available to everyone, the broader scrutiny can only make PowerShell a better product in the long run.

The PowerShell team expects to deliver a new release approximately every six months. The team released PowerShell v6.0 in January 2018. At the time of this article’s publication, version 6.1 is in its third preview release, with the final version expected soon. If the team maintains this release cadence, you can expect v6.2 in late 2018 or early 2019.

A faster release cadence implies quicker resolution of bugs and new features on a more regular basis. The downside to a faster release cadence is that you’ll have to keep upgrading your PowerShell instances to get the bug fixes and new features.

Of the Microsoft product teams, the Windows PowerShell team is one of the most accessible. They’ve been very active in the PowerShell community since the PowerShell v1 beta releases, engaging with users and incorporating their feedback. The scope of that dialog has expanded; anyone can report bugs or request new features.

The downside is that the originator of a request is often expected to implement the changes. If you follow the project, you’ll see that only a handful of community members are heavily active.

Shortcomings of the PowerShell Core project

This leads us to the disadvantages now that PowerShell is an open-source project. In my view, the problems are:

  • it’s an open source project;
  • there’s no overarching vision for the project;
  • the user community lacks an understanding of what’s happening with PowerShell; and
  • there are gaps in the functionality.

[Video: PowerShell’s inventor gives a status update on the automation tool]

These points aren’t necessarily problems, but they are issues that could impact the PowerShell project in the long term.

Changing this vital automation and change management tool to an open source project has profound implications for the future of PowerShell. The PowerShell Core committee is the primary caretaker of PowerShell. This board has the final say on which proposals for new features will proceed.

At this point in time, the committee members are PowerShell team members. A number of them, including the original language design team, have worked on PowerShell from the start. If that level of knowledge is diluted, it could have an adverse effect on PowerShell.

The PowerShell project page supplies a number of short- to medium-term goals, but I haven’t seen a long-term plan that lays out the future of PowerShell. So far, the effort appears concentrated on porting the PowerShell engine to other platforms. If the only goal is to move PowerShell to a cross-platform administration tool, then more effort should go into bringing the current Windows PowerShell functionality to the other platforms.

Giving the PowerShell community a way to participate in the development of PowerShell is both a strength and a weakness. Some of the proposals show many users don’t understand how PowerShell works. There are requests to make PowerShell more like Bash or other shells.

Other proposals seek to change how PowerShell works, which could break existing functionality. The PowerShell committee has done a good job of managing the more controversial proposals, but clearing up long-term goals for the project would reduce requests that don’t fit into the future of PowerShell.

The project is also addressing gaps in functionality. Many of the current Windows PowerShell v5.1 modules will work in PowerShell Core. At the PowerShell + DevOps Global Summit 2018, one demonstration showed how to use implicit remoting to access Windows PowerShell v5.1 modules on the local machine through PowerShell Core v6. While not ideal, this method works until the module authors convert them to run in PowerShell Core.
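The pattern from that demonstration looks roughly like the following sketch; it assumes PowerShell remoting is enabled on the machine, and the module name is illustrative:

# From PowerShell Core, open a session to the local Windows PowerShell 5.1 endpoint
$winPS = New-PSSession -ComputerName localhost -ConfigurationName microsoft.powershell

# Implicit remoting generates local proxy commands for the module's cmdlets
Import-Module -PSSession $winPS -Name ServerManager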

One gap that needs work is the functionality on Linux and macOS systems. PowerShell Core is missing the cmdlets needed to perform standard administrative tasks, such as working with network adapters, storage, printer management, local accounts and groups.

Availability of the ConvertFrom-String cmdlet would be a huge boost by giving admins the ability to use native Linux commands then turn the output into objects for further processing in PowerShell. Unfortunately, ConvertFrom-String uses code that cannot be open sourced, so it’s not an option currently. Until this functionality gap gets closed, Linux and macOS will be second-class citizens in the PowerShell world.

How to tackle an email archive migration for Exchange Online


A move from on-premises Exchange to Office 365 also entails determining the best way to transfer legacy archives. This tutorial can help ease migration complications.


A move to Office 365 seems straightforward enough until project planners broach the topic of the email archive migration.

Not all organizations keep all their email inside their messaging platform. Many organizations that archive messages also keep a copy in a journal that is archived away from user reach for legal reasons.

The vast majority of legacy archive migrations to Office 365 require third-party tools and must follow a fairly standardized process to complete the job quickly and with minimal expense. Administrators should migrate mailboxes to Office 365 first and the archive second; that way, the organization gains the benefits of Office 365 before the archive reingestion completes.

An archive product typically scans mailboxes for older items and moves those to longer term, cheaper storage that is indexed and deduplicated. The original item typically gets replaced with a small part of the message, known as a stub or shortcut. The user can find the email in their inbox and, when they open the message, an add-in retrieves the full content from the archive.

Options for archived email migration to Office 365

The native tools to migrate mailboxes to Office 365 cannot handle an email archive migration. When admins transfer legacy archive data for mailboxes, they usually consider the following three approaches:

  1. Export the data to PST archives and import it into user mailboxes in Office 365.
  2. Reingest the archive data into the on-premises Exchange mailbox and then migrate the mailbox to Office 365.
  3. Migrate the Exchange mailbox to Office 365 first, then perform the email archive migration to put the data into the Office 365 mailbox.

Option 1 is not usually practical because it takes a lot of manual effort to export data to PST files. The stubs remain in the user’s mailbox and add clutter.

Option 2 also requires a lot of labor-intensive work and uses a lot of space on the Exchange Server infrastructure to support reingestion.

That leaves the third option as the most practical approach, which we’ll explore in a little more detail.

Migrate the mailbox to Exchange Online

When you move a mailbox to Office 365, it migrates along with the stubs that relate to the data in the legacy archive. The legacy archive will no longer archive the mailbox, but users can access their archived items. Because the stubs usually contain a URL path to the legacy archive item, there is no dependency on Exchange to view the archived message.

Some products that add buttons to restore the individual message into the mailbox will not work; the legacy archive product won’t know where Office 365 is without further configuration. This step is not usually necessary because the next stage is to migrate that data into Office 365.

Transfer archived data

Legacy archive solutions usually have a variety of policies for what happens with the archived data. You might configure the system to keep the stubs for a year but make archive data accessible via a web portal for much longer.

There are instances when you might want to replace the stub with the real message. There might be data that is not in the user’s mailbox as a stub but that users want on occasion.


We need tools that not only automate the data migration, but also understand these differences and can act accordingly. The legacy archive migration software should examine the data within the archive and then run batch jobs to replace stubs with the full messages. In this case, you can use the Exchange Online archive as a destination for archived data that no longer has a stub.

Email archive migration software connects via the vendor API. The software assesses the items and then exports them into a common temporary format — such as an EML file — on a staging server before connecting to Office 365 over a protocol such as Exchange Web Services. The migration software then examines the mailbox and replaces the stub with the full message.

[Screenshot: A third-party product’s dashboard detailing the migration progress of a legacy archive into Office 365.]

Migrate journal data

With journal data, the most accepted approach is to migrate the data into the hidden recoverable items folder of each mailbox related to the journaled item. The end result is similar to using Office 365 from the day the journal began, and eDiscovery works as expected when following Microsoft guidance.

For this migration, the software scans the journal and creates a database of the journal messages. The application then maps each journal message to its mailbox. This process can be quite extensive; for example, an email sent to 1,000 people will map to 1,000 mailboxes.

After this stage, the software copies each message to the recoverable items folder of each mailbox. While this is a complicated procedure, it’s alleviated by software that automates the job.

Legacy archive migration offerings

There are many products tailored for an email archive migration. Each has its own benefits and drawbacks. I won’t recommend a specific offering, but I will mention two that can migrate more than 1 TB a day, which is a good benchmark for large-scale migrations. They also support chain of custody, which audits the transfer of all data.

TransVault has the most connectors to legacy archive products. Almost all the migration offerings support Enterprise Vault, but if you use a product that is less common, then it is likely that TransVault can move it. The TransVault product accesses source data either via an archive product’s APIs or directly to the stored data. TransVault’s service installs within Azure or on premises.

Quadrotech Archive Shuttle fits in alongside a number of other products suited to Office 365 migrations and management. Its workflow-based process automates the migration. Archive Shuttle handles fewer archive sources, but it does support Enterprise Vault. Archive Shuttle accesses source data via API and agent machines with control from either an on-premises Archive Shuttle instance or, as is more typical, the cloud version of the product.
