Tag Archives: administrators

Meltdown and Spectre bugs dominate January Patch Tuesday

Administrators have their work cut out for them on multiple fronts after a serious security flaw surfaced that affects most operating systems and devices.

The Meltdown and Spectre vulnerabilities affect most modern CPUs — from Intel-based server systems to ARM processors in mobile phones — and could allow an attacker to pull sensitive data from memory. Microsoft mitigated the flaws with several out-of-band patches last week, which have been folded into the January Patch Tuesday cumulative updates. Full protection from the exploits will require a more concerted effort from administrators, however.

Researchers only recently discovered the flaws, which have existed for approximately 20 years. The Meltdown (CVE-2017-5754) and Spectre (CVE-2017-5753 and CVE-2017-5715) exploits target the CPU's speculative execution functionality, which anticipates the code a user might run next and loads the relevant data and instructions into memory. A CPU exploit written in JavaScript and served from a malicious website could pull sensitive information from the memory of an unpatched system.

“You could leak cookies, session keys, credentials — information like that,” said Jimmy Graham, director of product management for Qualys Inc., based in Redwood City, Calif.

In other January Patch Tuesday releases, Microsoft updated the Edge and Internet Explorer browsers to reduce the threat from Meltdown and Spectre attacks. Aside from these CPU-related fixes, Microsoft issued patches for 56 other vulnerabilities with 16 rated as critical, including a zero-day exploit in Microsoft Office (CVE-2018-0802).

Microsoft’s attempt to address the CPU exploits had an adverse effect on some AMD systems, which could not boot after IT applied the patches. This issue prompted the company to pull those fixes until it produces a more reliable update.

Most major cloud providers claim they have closed this security gap, but administrators of on-premises systems will have to complete several deployment stages to fully protect their systems.

“This is a nasty one,” said Harjit Dhaliwal, a senior systems administrator in the higher education sector who handles patching for his environment. “This is not one of your normal vulnerabilities where you just have a patch and you’re done. Fixing this involves a Microsoft patch, registry entries and firmware updates.”

Administrators must ensure their antivirus product has been updated to set the proper registry entry; otherwise, Windows will not apply the Meltdown and Spectre patches. Windows Server systems require a separate registry change to enable the protections in those patches. The IT staff must also identify the devices under their purview and use that inventory to gather firmware updates from each vendor. Firmware updates will correct the two exploits related to Spectre; Microsoft plugged the Meltdown vulnerability with code changes to the kernel.
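
The antivirus compatibility flag and the Windows Server switch are both registry values. Here is a minimal PowerShell sketch based on the registry paths Microsoft documented for these updates; verify them against current Microsoft guidance before deploying.

$qc = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat"
# Compatibility flag normally set by the antivirus vendor; without it,
# Windows Update withholds the Meltdown and Spectre patches.
New-Item -Path $qc -Force | Out-Null
New-ItemProperty -Path $qc -Name "cadca5fe-87d3-4b96-b7fb-a231484277cc" -PropertyType DWord -Value 0 -Force

# On Windows Server, the mitigations must also be enabled explicitly.
$mm = "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
New-ItemProperty -Path $mm -Name "FeatureSettingsOverride" -PropertyType DWord -Value 0 -Force
New-ItemProperty -Path $mm -Name "FeatureSettingsOverrideMask" -PropertyType DWord -Value 3 -Force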

Dhaliwal manages approximately 5,000 Windows systems, ranging from laptops to Windows Server systems, with some models several years old. He is exploring a way to automate the firmware collection and deployment process, but certain security restrictions make this task even more challenging. His organization requires BitLocker on all systems, and BitLocker must be suspended before a firmware update is applied to avoid encryption key problems.

“This is not going to be an overnight process,” Dhaliwal said.

How expansive are Meltdown and Spectre?

Attacks based on Meltdown and Spectre exploit flaws in how many CPUs handle speculative execution. The difference between the two vulnerabilities is the kind of memory that is exposed to the attacker. Exploits that use the flaws can expose data that resides in the system's memory, such as login information from a password manager.

Microsoft noted Meltdown and Spectre exist in many processors — Intel, AMD and ARM — and affect other operating systems, including Google's Android and Chrome OS and Apple's iOS and macOS. Apple reportedly has closed the vulnerabilities in its mobile phones, while the status of Android patching varies by OEM. According to researchers, Meltdown only affects Intel processors, while the Spectre exploits work against processors from Intel, AMD and ARM.
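
Administrators can verify a machine's mitigation state with the SpeculationControl PowerShell module Microsoft published alongside the patches; a quick sketch, assuming access to the PowerShell Gallery:

# Report whether the OS and firmware mitigations are present and enabled.
Install-Module SpeculationControl -Scope CurrentUser
Get-SpeculationControlSettings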

Virtualized workloads may require fine-tuning

Some administrators have confirmed early reports that the Meltdown and Spectre patches from Microsoft affect system performance.

 Dave Kawula, principal consultant at TriCon Elite Consulting, applied the updates to his Windows Server 2016 setup and ran the VM Fleet utility, which runs a stress test with virtualized workloads on Hyper-V and the Storage Spaces Direct pooled storage feature. The results were troubling, with preliminary tests showing a performance loss of about 35%, Kawula said.

 “As it stands, this is going to be a huge issue,” he said. “Administrators better rethink all their virtualization farms, because Meltdown and Spectre are throwing a wrench into all of our designs.”

Intel has been updating its BIOS code since the exploits were made public, and the company will likely refine its firmware to reduce the impact from the fix, Graham said.

For more information about the remaining security bulletins for January Patch Tuesday, visit Microsoft’s Security Update Guide.

Tom Walat is the site editor for SearchWindowsServer. Write to him at twalat@techtarget.com or follow him @TomWalatTT on Twitter.

How does Data Protection Manager 2016 save and restore data?

By default, System Center Data Protection Manager 2016 stores its backups on the DPM server. But administrators have flexibility to put those backups on storage that is located — and partitioned — elsewhere.

To get started, IT administrators install a DPM agent on every computer to be protected, then add each machine to a protection group in DPM. A protection group is a collection of computers that share the same protection settings or configurations, such as the group name, protection policy, disk target and replica method.

After the agent installation and configuration process, DPM produces a replica for every protection group member, which can include volumes, shares, folders, Exchange storage groups and SQL Server databases. System Center Data Protection Manager 2016 builds replicas in a provisioned storage pool.

After DPM generates the initial replicas, its agents track changes to the protected data and send that information to the DPM server. DPM will then use the change journal to update the file data replicas at the intervals specified by the configuration. During synchronization, any changes are sent to the DPM server, which applies them to the replica.

DPM also periodically checks the replica for consistency with block-level verification and corrects any problems it finds. Administrators can set recovery points for a protection group member to create multiple recoverable versions for each backup.
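
For example, admins can enumerate those recoverable versions from the DPM Management Shell. A minimal sketch, where "DPM01" is a placeholder DPM server name:

$pg = Get-DPMProtectionGroup -DPMServerName "DPM01"   # all protection groups on the server
$ds = Get-DPMDatasource -ProtectionGroup $pg[0]       # members of the first group
Get-DPMRecoveryPoint -Datasource $ds[0]               # recoverable versions for one member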

Application data backups require additional planning

Application data protection can vary based on the application and the selected backup type. Administrators need to be aware that certain applications do not support every DPM backup type. For example, Microsoft Virtual Server and some SQL Server databases do not support incremental backups.


For a synchronization job, System Center Data Protection Manager 2016 tracks application data changes and moves them to the DPM server, similar to an incremental backup. Updates are combined with the base replica to form the complete backup.

For an express full backup job, System Center Data Protection Manager 2016 uses a complete Volume Shadow Copy Service snapshot, but transfers only changed blocks to the DPM server. Each full backup creates a recovery point for the application’s data.

Generally, incremental synchronizations are faster to back up but can take longer to restore. To balance restore times, DPM periodically creates full backups that integrate the collected changes, which speeds up a recovery. DPM supports up to 64 recovery points for each member of a protection group, and it can retain up to 448 full backups and 96 incremental backups for each full backup.

The DPM recovery process is straightforward regardless of the backup type or target. Administrators select the desired recovery point with the Recovery Wizard in the DPM Administrator Console. DPM will restore the data from that point to the desired target or destination. The Recovery Wizard will denote the location and availability of the backup media. If the backup media — such as tape — is not available, the restoration process will fail.

Prevent Exchange Server virtualization deployment woes

Proper sizing, conservative hypervisor settings and adherence to Microsoft's support guidelines are other measures administrators should take to keep the email flowing.

In my work as a consultant, I find many customers get a lot of incorrect information about virtualizing Exchange. These organizations often deploy Exchange on virtual hardware in ways that Microsoft does not support or recommend, which results in major performance issues. This tip will explain the proper way to deploy Exchange Server on virtual hardware and why it’s better to avoid cutting-edge hypervisor features.

When is Exchange Server virtualization the right choice?

The decision to virtualize a new Exchange deployment would be easy if the only concerns were technical. This choice gets difficult when politics enter the equation.

Email is one of the more visible services provided by an IT department. Apart from accounting systems, companies rely on email services more than other information technology. Problems with email availability can affect budgets, jobs — even careers.  

Some organizations spend a sizable portion of the IT department budget on the storage systems that underpin the virtual platform. It may be a political necessity to use those expensive resources for high-visibility services such as messaging, even when deploying Exchange on dedicated hardware would be cheaper and technically better. While I believe that the best Exchange deployment is almost always done on physical hardware — in accordance with the Preferred Architecture guidelines published by the Exchange engineering team — a customer's requirements might steer the deployment to virtualized infrastructure.

How do I size my virtual Exchange servers?

Microsoft recommends sizing virtual Exchange servers the same way as physical Exchange servers. My recommendations for this procedure are:

  • Use the Exchange Server Role Requirements Calculator as if the intent was to build physical servers.
  • Take the results, and create virtual servers that are as close as possible to the results from the calculator.
  • Turn off any advanced virtualization features in the hypervisor.

Why should I adjust the hypervisor settings?

Some hypervisor vendors say that the X or Y feature in their product will help the performance or stability of virtualized Exchange. But keep in mind these companies want to sell a product. Some of those add-on offerings are beneficial, some are not. I have seen some of these vaunted features cause terrible problems in Exchange. In my experience, most stable Exchange Server deployments do not require any fancy virtualization features.

What virtualization features does Microsoft support?

Microsoft’s support statement for virtualization of Exchange 2016 is lengthy, but the essence is to make the Exchange VMs as close to physical servers as possible.

Microsoft does not support features that move a VM from one host to another unless the failover event results in a cold boot of the Exchange server. Nor does the company support features that allow resource sharing among multiple virtualized Exchange VMs.

Where are the difficulties with Exchange Server virtualization?

The biggest problem with deploying Exchange on virtual servers is that it's often impossible to follow the proper deployment procedures, specifically the validation of the storage IOPS available to a new Exchange server with Jetstress. This tool checks that the storage hardware delivers enough IOPS for Exchange to run smoothly.

Generally, a virtual host will use shared storage for the VMs it hosts. Running Jetstress on a new Exchange VM on that storage setup will cause an outage for other servers and applications. Due to this shared arrangement, it is difficult to gauge whether the storage equipment for a virtualized Exchange Server will provide sufficient performance.  

While it’s an acceptable practice to run Exchange Server on virtual hardware, I find it often costs more money and performs worse than a physical deployment. That said, there are often circumstances outside of the control of an Exchange administrator that require the use of virtualization.

To avoid trouble, try not to veer too far from Microsoft’s guidelines. The farther you stray from the company’s recommendations, the more likely you are to have problems.

December Patch Tuesday closes year on a relatively calm note

Administrators were greeted with a subdued December Patch Tuesday, a quiet end to a year that started on a far more tumultuous note.

Of the 32 unique Common Vulnerabilities and Exposures (CVEs) that Microsoft addressed, just three patches were directly related to Windows operating systems. While not rated critical, the patch for CVE-2017-11885, which affects Windows client and server operating systems, is where administrators should focus their attention.

The patch is for a Remote Procedure Call (RPC) vulnerability for machines with the Routing and Remote Access service (RRAS) enabled. RRAS is a Windows service that allows remote workers to use a virtual private network to access internal network resources, such as files and printers.

“Anyone who has RRAS enabled is going to want to deploy the patch and check other assets to make sure RRAS is not enabled on any devices that don’t use it actively to prevent the exploitation,” said Gill Langston, director of product management at Qualys Inc., based in Redwood City, Calif.
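
One way to audit that is to query the service state across machines. A sketch, with placeholder server names:

# RRAS runs as the RemoteAccess service; flag any machine where it is not stopped or disabled.
$servers = "SRV01", "SRV02", "SRV03"
Get-Service -Name RemoteAccess -ComputerName $servers -ErrorAction SilentlyContinue |
    Select-Object MachineName, Status, StartType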

The attacker triggers the exploit by running a specially crafted application against a Windows machine with RRAS enabled.

“Once the bad actor is on the endpoint, they can then install applications and run code,” Langston said. “They establish a foothold in the network, then see where they can spread. The more machines you have under your control, the more ability you have to move laterally within the organization.”

In addition, desktop administrators should roll out updates promptly to apply 19 critical fixes that affect the Internet Explorer and Edge browsers, Langston said.

“The big focus should be on browsers because of the scripting engine updates Microsoft seems to release every month,” he said. “These are all remote-code execution type vulnerabilities, so they’re all critical. That’s obviously a concern because that’s what people are using for browsing.”

Fix released for Windows Malware Protection Engine flaw

On Dec. 6, Microsoft sent out an update to affected Windows systems for a Windows Malware Protection Engine vulnerability (CVE-2017-11937). This emergency repair closed a security hole in Microsoft’s antimalware application, affecting systems on Windows 7, 8.1 and 10, and Windows Server 2016. Microsoft added this correction to the December Patch Tuesday updates.

“The fix happened behind the scenes … but it was recommended [for] administrators using any version of the Malware Protection Engine that it’s set to automatically update definitions and verify that they’re on version 1.1.14405.2, which is not vulnerable to the issue,” Langston said.
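
That check can be scripted on machines running Windows Defender. A sketch using the Defender PowerShell module:

# AMEngineVersion reports the installed Malware Protection Engine version;
# 1.1.14405.2 or later contains the fix.
(Get-MpComputerStatus).AMEngineVersion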

OSes that lack the update are susceptible to a remote-code execution exploit if the Windows Malware Protection Engine scans a specially crafted file, which would give the attacker a range of access to the system. That includes the ability to view and delete data and to create a new account with full user rights.

Other affected Microsoft products include Exchange Server 2013 and 2016, Microsoft Forefront Endpoint Protection, Microsoft Security Essentials, Windows Defender and Windows Intune Endpoint Protection.

“Microsoft uses the Forefront engine to scan incoming email on Exchange 2013 and Exchange 2016, so they were part of this issue,” Langston said.

Lessons learned from WannaCry

Microsoft in May surprised many in IT when the company released patches for unsupported Windows XP and Windows Server 2003 systems to stem the tide of WannaCry ransomware attacks. Microsoft had closed this exploit for supported Windows systems in March, but it took the unusual step of releasing updates for OSes that had reached end of life.

Many of the Windows malware threats from early 2017 spawned from exploits found in the Server Message Block (SMB) protocol, which is used to share files on the network. The fact that approximately 400,000 machines were hit by the ransomware showed how difficult it is for IT to keep up with patching demands.

“WannaCry woke people back up to how critical it is to focus on your patch cycles,” Langston said.

More than three months elapsed between Microsoft's March patch for the SMB vulnerability that WannaCry exploited and the Petya ransomware outbreak, which used the same SMB exploit to compromise still-unpatched machines. Some administrators might be lulled into a false sense of security by the cumulative update servicing model and delay the patching process, Langston said.

“They may delay because the next rollup will cover the updates they missed, but then that’s more time those machines are unprotected,” he said.

For more information about the remaining security bulletins for December Patch Tuesday, visit Microsoft’s Security Update Guide.

Tom Walat is the site editor for SearchWindowsServer. Write to him at twalat@techtarget.com or follow him @TomWalatTT on Twitter.

Monitor Active Directory replication via PowerShell

When Active Directory replication breaks down, administrators need to know quickly to prevent issues with the services and applications that Active Directory oversees.

It is important to monitor Active Directory replication to ensure the process remains healthy. Larger organizations that use Active Directory typically have several domain controllers that rely on replication to synchronize networked objects — users, security groups, contacts and other information — in the Active Directory database. Changes in the database can be made at any domain controller, which must then be duplicated to the other domain controllers in an Active Directory forest. If the changes are not synchronized to a particular domain controller — or all domain controllers — in an Active Directory site, users in that location might encounter problems.

For example, if an administrator applies a security policy setting via a Group Policy Object to all workstations, all domain controllers in a domain should pick up the GPO changes. If one domain controller in a particular location fails to receive this update, users in that area will not receive the security configuration.

Why does Active Directory replication break?

Active Directory replication can fail for several reasons. If network ports between the domain controllers are not open or if the connection object is missing from a domain controller, then the synchronization process generally stops working.

Domain controllers also rely on the domain name system; if their service records are missing, the domain controllers cannot communicate with each other, which causes a replication failure.
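
A quick sanity check is to resolve the domain's LDAP SRV records. A sketch, with "example.com" standing in for the Active Directory DNS domain name:

# Each domain controller should register an SRV record under _msdcs.
Resolve-DnsName -Name "_ldap._tcp.dc._msdcs.example.com" -Type SRV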

Check Active Directory replication status manually

There are many ways to check the Active Directory replication status manually.

Administrators can run the following string using the command-line repadmin utility to show the replication errors in the Active Directory forest:
repadmin /replsum /bysrc /bydest /errorsonly

Administrators can also use the Get-ADReplicationPartnerMetadata PowerShell cmdlet to check the replication status; the script later in this article relies on it.
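
For a quick one-off check of a single domain controller, run the cmdlet directly; "DC01" is a placeholder name:

# A nonzero LastReplicationResult indicates the last replication attempt failed.
Get-ADReplicationPartnerMetadata -Target "DC01" |
    Select-Object Server, Partner, Partition, LastReplicationResult, LastReplicationSuccess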

Use a script to check replication health

While larger organizations might have an enterprise tool, such as System Center Operations Manager, to monitor Active Directory, a PowerShell script can be a helpful supplement to alert administrators on the replication status. Because so much of a business relies on a properly functioning Active Directory system, it can’t hurt to implement this script and have it run every day via a scheduled task. If the script finds an error, it will send an alert via email.

The system must meet a few requirements before executing the script:

  • It runs on a computer that can reach all domain controllers.
  • The computer should run Windows Server 2012 R2 or Windows 10 and be joined to a domain in the Active Directory forest.
  • The computer has the Active Directory PowerShell modules installed.

How does the script work?

The PowerShell script uses the Get-ADReplicationPartnerMetadata cmdlet, which connects to the primary domain controller emulator in the Active Directory forest and then collects the replication metadata for each domain controller.

The script checks the value of the LastReplicationResult attribute for each domain controller entry. If the value of LastReplicationResult is nonzero for any domain controller, the script considers this a replication failure. If an error is found, the script executes the Send-MailMessage cmdlet to send an email with the report attached as a CSV file. The script stores the replication report in C:\Temp\ReplStatus.CSV.

Before running the script, modify its email settings to supply the sender and recipient addresses, the subject line and the message body.

PowerShell script to check replication status

The following PowerShell script helps admins monitor Active Directory for these replication errors and delivers the findings via email. Be sure to modify the email settings in the script.

$ResultFile = "C:\Temp\ReplStatus.CSV"
$ADForestName = "TechTarget.com"

# Locate the PDC emulator for the forest root domain.
$GetPDCNow = Get-ADForest $ADForestName | Select-Object -ExpandProperty RootDomain | Get-ADDomain | Select-Object -Property PDCEmulator
$GetPDCNowServer = $GetPDCNow.PDCEmulator

$FinalStatus = "Ok"

# Export the replication metadata for every partner whose last replication attempt failed.
Get-ADReplicationPartnerMetadata -Target * -Partition * -EnumerationServer $GetPDCNowServer -Filter {LastReplicationResult -ne "0"} | Select-Object LastReplicationAttempt, LastReplicationResult, LastReplicationSuccess, Partition, Partner, Server | Export-CSV $ResultFile -NoTypeInformation -Append -ErrorAction SilentlyContinue

# More than one line in the file means the CSV header plus at least one error row.
$TotNow = Get-Content $ResultFile
$TotCountNow = $TotNow.Count

IF ($TotCountNow -ge 2)
{
    $RCSV = Import-CSV $ResultFile
    ForEach ($AllItems in $RCSV)
    {
        IF ($AllItems.LastReplicationResult -eq "0")
        {
            $FinalStatus = "Ok"
            $TestStatus = "Passed"
            $TestText = "Active Directory replication is working."
        }
        else
        {
            $TestStatus = "Critical"
            $TestText = "Replication errors occurred. Active Directory domain controllers are causing replication errors."
            $FinalStatus = "NOTOK"
            break
        }
    }
}

$TestText

IF ($FinalStatus -eq "NOTOK")
{
    ## Replication errors were reported, so start the email procedure.

    ### START - Modify email parameters here
    $message = @"
Active Directory Replication Status

Active Directory Forest: $ADForestName

Thank you,
PowerShell Script
"@

    $SMTPPasswordNow = "PasswordHere"
    $ThisUserName = "UserName"
    $SecurePassword = ConvertTo-SecureString -String $SMTPPasswordNow -AsPlainText -Force
    $ToEmailNow = "EmailAddressHere"
    $EmailSubject = "SubjectHere"
    $SMTPServerNow = "SMTPServerName"
    $SMTPSenderNow = "SMTPSenderName"
    $SMTPPortNow = "SMTPPortHere"
    ### END - Modify email parameters here

    $AttachmentFile = $ResultFile

    $creds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $ThisUserName, $SecurePassword
    Send-MailMessage -Credential $creds -SmtpServer $SMTPServerNow -From $SMTPSenderNow -Port $SMTPPortNow -To $ToEmailNow -Subject $EmailSubject -Attachments $AttachmentFile -UseSsl -Body $message
}

When the script completes, it generates a file that details the replication errors.

Replication error report: the PowerShell script compiles the Active Directory replication errors in a CSV file and delivers those results via email.

Administrators can run this script automatically through the Task Scheduler. Since the script takes about 10 minutes to run, it might be best to set it to run at a time when it will have the least impact, such as midnight.
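
Registering the task can itself be scripted. A sketch, assuming the script is saved at the placeholder path C:\Scripts\ReplCheck.ps1:

# Create a daily task that launches the replication check at midnight.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -File C:\Scripts\ReplCheck.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At "00:00"
Register-ScheduledTask -TaskName "AD Replication Check" -Action $action -Trigger $trigger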

With AI-based cloud management tools, context is king

Administrators who struggle to get deeper insight into cloud infrastructure and application performance have a new ally: artificial intelligence.

Some emerging and legacy IT vendors have infused AI technology into their cloud management tools. While their feature sets — such as the ability to analyze host performance, optimize costs and set up alerts — sound similar to those found in more traditional third-party management tools, these AI-based platforms reach a new level of sophistication, providing greater granularity and broader context, according to IT pros.

Travis Perkins PLC, a retail provider for the home improvement and construction markets based in the U.K., uses Dynatrace’s AI-based performance monitoring platform for its on-premises and Amazon Web Services (AWS) environments. Rather than focus on higher-level metrics related to host servers or instances, the tool reports more granularly on aspects like Java runtime code and errors, said Abdul Rahman Al-Tayib, e-commerce DevOps team leader at the company. This enables his team to perform faster and more precise root-cause analysis when something goes wrong, and better assess the overall impact any issues might have on the business.

“When it comes down to investigating or looking into specific elements of performance where we have had challenges, rather than having to do the investigation manually, [Dynatrace combines] it all into one report,” Al-Tayib said. “So, it tells you, ‘This service here failed to fire, and, therefore, it caused this series of events, which was then related back to [a disruption at your] customer.’ You can immediately see where the challenge is.”

To initiate this root-cause analysis, users install a Dynatrace agent on their host machine to identify the various dependencies between resources and help correlate certain events with any issues that arise, explained Alois Reitbauer, chief technology strategist at the company, based in Waltham, Mass.

“If you have a host that is running out of CPU, and the service running on that host has a response-time problem, [the tool can tell] these are related to each other,” Reitbauer said.

More sophisticated anomaly detection, or identifying when an IT service is performing in an abnormal way, is another feature that makes AI-based management tools stand out. To do this, the Dynatrace tool performs auto-baselining — an automatic process that assesses baseline, or standard, system performance by applying different algorithms for metrics such as response time, failure rate and throughput.

After the tool extrapolates what normal performance looks like, it alerts IT teams to any deviations from that behavior. To avoid being bombarded with alerts, users can further specify performance thresholds, and the tool also applies algorithms to assess criticality.

“If I have two hosts that have infrastructure problems … I obviously care more about the problem that might be with a checkout function for a cart in an e-commerce application than the other one that maybe does some background batch processing,” Reitbauer said. “[That] user context, from an infrastructure case, is of main importance.”

This ability for AI-powered cloud management tools to weed out noncritical alerts has been a boon to other users, as well. According to a network and infrastructure capacity planner at a cloud storage provider that uses AWS for its back-end infrastructure, that capability was one of the main reasons his company adopted an AI-based cloud management tool called YotaScale.


The capacity planner, who asked to remain anonymous, conducted evaluations on several third-party cloud management tools, but found that YotaScale allowed him to “suppress a lot of the noise” that can come with those tools’ alerts and recommendations.

For example, a company might spin up some AWS instances for a new research and development project, and those instances tend to have low utilization as the project ramps up, he said. Third-party cloud management tools might recommend to right-size those instances or reserve them via an AWS Reserved Instance, but in this case, those suggestions are irrelevant.

“That’s not how we would really do things in a bootstrapping scenario, where we are trying to bring up a new test or project, and so I’m going to ignore those,” he said.

The benefit of the AI layer in tools such as YotaScale is to analyze IT infrastructure through the lens of various business departments or units, according to the Menlo Park, Calif., company’s CEO, Asim Razzaq. In the example above, that’s through the lens of a research and development team.

“We map that enterprise, organizational way of looking at things to the infrastructure,” Razzaq said. “And then, within that context, deliver optimization [recommendations] and anomaly detection.”

The YotaScale tool achieves this business context via user input. Users adjust certain parameters and dismiss recommendations that don’t fit, teaching the tool to detect what’s most relevant over time.

AI replacing humans? Not so fast

One overarching benefit of these AI-based cloud management tools is they reduce the need for humans to perform a lot of this analysis on their own. But even the most sophisticated tools won’t provide the same level of insight — at least not yet — as an IT professional with 20 years of industry experience, said Chris Wilder, analyst at Moor Insights & Strategy.

“These algorithms will be smarter and smarter based on the anomalies they find, but they still don’t have the experience a person would,” Wilder said. “Data, in my opinion, is not a replacement for human expertise. It’s just something to augment it.”

These AI capabilities are still in their early phases, agreed Jay Lyman, analyst at 451 Research. But they will eventually become a must-have for infrastructure management tool vendors.

“We’ll get to a point before too long where every provider is going to have to have some sort of machine learning and AI in their automation,” Lyman said. “I think it will become pretty much a check-box item.”

Will PowerShell Core 6 fill in missing features?

Administrators who have embraced PowerShell to automate tasks and manage systems will need to prepare themselves as Microsoft plans to focus its energies on the open source version called PowerShell Core.

All signs from Microsoft indicate it is heading away from the Windows-only version of PowerShell, which the company said it will continue to support with critical fixes — but no further upgrades. The company plans to release PowerShell Core 6 shortly. Here’s what admins need to know about the transition.

What’s different with PowerShell Core?

PowerShell Core 6 is an open source configuration management and automation tool from Microsoft; as of this article's publication, the most recent release candidate arrived in November. PowerShell Core 6 represents a significant change for administrators because it shifts from a Windows-only platform to accommodate heterogeneous IT shops and hybrid cloud networks. Microsoft's intention is to give administrative teams a single tool to manage Linux, macOS and Windows systems.

What features are not in PowerShell Core?

PowerShell Core runs on .NET Core and uses .NET Standard 2.0, a common library specification that helps some current Windows PowerShell modules work in PowerShell Core.

Because .NET Core implements only a subset of the .NET Framework, PowerShell Core misses out on some useful features found in Windows PowerShell. For example, PowerShell workflow, which enables admins to execute tasks or retrieve data through a sequence of automated steps, is not in PowerShell Core 6; related capabilities such as sequencing, checkpointing, resumability and persistence go with it.
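
For context, here is the kind of construct that no longer runs; a minimal workflow sketch that works in Windows PowerShell 5.1 but not in PowerShell Core 6:

# The workflow keyword and the Checkpoint-Workflow activity are workflow-only features.
workflow Invoke-SampleWorkflow {
    "Step one"
    Checkpoint-Workflow   # persist state so the workflow can resume from this point
    "Step two"
}
Invoke-SampleWorkflow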

A few other features missing from PowerShell Core 6 are:

  • Windows Presentation Foundation: This is the group of .NET libraries that enable coders to build UIs for scripts. It offers a common platform for developers and designers to work together with standard tools to create Windows and web interfaces.
  • Windows Forms: In PowerShell 5.0 for Windows, the Windows Forms feature provides a robust platform to build rich client apps with the GUI class library on the .NET Framework. To create a form, the admin loads the System.Windows.Forms assembly, creates a new object of type System.Windows.Forms.Form and calls the ShowDialog method (see the sketch after this list). With PowerShell Core 6, administrators lose this capability.
  • Cmdlets: As of publication, most cmdlets in Windows PowerShell have not been ported to PowerShell Core 6. However, the compatibility with .NET assemblies enables admins to use the existing modules. Users on Linux are limited to modules mostly related to security, management and utility. Admins on that platform can use the PowerShellGet in-box module to install, update and discover PowerShell modules. PowerShell Web Access is not available for non-Windows systems because it requires Internet Information Services, the Windows-based web server functionality.
  • PowerShell remoting: Microsoft has ported Secure Shell (SSH) to Windows, and SSH is already popular in other environments, which makes SSH-based remoting likely the best option for PowerShell remoting tasks. Modules such as Hyper-V, Storage, NetTCPIP and DnsClient have not been ported to PowerShell Core 6, but Microsoft plans to add them.
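
Here is that Windows Forms pattern as a minimal sketch; it runs under Windows PowerShell 5.1 but fails on PowerShell Core 6 because the assembly is not part of .NET Core:

Add-Type -AssemblyName System.Windows.Forms   # load the Windows Forms class library

$form = New-Object System.Windows.Forms.Form  # create the form object
$form.Text = "Hello from Windows PowerShell"  # set the window title
$form.ShowDialog()                            # display the form as a modal dialog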

Is there a new scripting environment?

For Windows administrators, the PowerShell Integrated Scripting Environment (ISE) is a handy editor that admins use to write, test and debug commands to manage networks. But PowerShell ISE is not included in PowerShell Core 6, so administrators must move to a different integrated development environment.

Microsoft recommends admins use Visual Studio Code (VS Code). VS Code is a cross-platform tool and uses web technologies to provide a rich editing experience across many languages. However, VS Code lacks some of PowerShell ISE’s features, such as PSEdit and remote tabs. PSEdit enables admins to edit files on remote systems without leaving the development environment. Despite VS Code’s limitations, Windows admins should plan to migrate from PowerShell ISE and familiarize themselves with VS Code.

What about Desired State Configuration?

Microsoft offers two versions of Desired State Configuration: Windows PowerShell DSC and DSC for Linux. DSC helps administrators maintain control over software deployments and servers to avoid configuration drift.

Microsoft plans to combine these two options into a single cross-platform version called DSC Core, which will require PowerShell Core and .NET Core. DSC Core is not dependent on Windows Management Framework (WMF) and Windows Management Instrumentation (WMI) and is compatible with Windows PowerShell DSC. It supports resources written in Python, C and C++.
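
For context, a minimal Windows PowerShell DSC configuration looks like the following sketch; DSC Core aims to keep configurations of this shape working cross-platform. The paths and names are illustrative:

# Ensure a folder exists on the local node.
Configuration SampleConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node "localhost" {
        File TempDir {
            DestinationPath = "C:\Temp"
            Type            = "Directory"
            Ensure          = "Present"
        }
    }
}
SampleConfig -OutputPath "C:\DSC"                      # compile the configuration to a MOF file
Start-DscConfiguration -Path "C:\DSC" -Wait -Verbose   # apply the configuration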

Debugging in DSC has always been troublesome, and ISE eased that process. But with Microsoft phasing out ISE, what should admins do now? A Microsoft blog says the company uses VS Code internally for DSC resource development and plans to release instructional videos that explain how to use the PowerShell extension for DSC resource development.

PowerShell Core 6 is still in its infancy, but Microsoft’s moves show the company will forge ahead with its plan to replace Windows PowerShell. This change brings a significant overhaul to the PowerShell landscape, and IT admins who depend on this automation tool should pay close attention to news related to its development.


Windows administrators contend with call to the cloud

As Microsoft pushes its cloud through a variety of avenues, Windows administrators find themselves grappling with more down-to-earth problems.

Many companies and their worn-down IT staffs, tethered by increasing costs and workloads associated with legacy equipment, might not be able to resist the call of the cloud much longer. As Microsoft CEO Satya Nadella beckons organizations with talk of digital transformation powered by the company’s Azure platform, it must be tempting for some admins, who can imagine the day they can pitch their servers — and their corresponding maintenance headaches — into the dumpster.

PowerShell, the script-based automation tool into which Microsoft has poured significant resources the last several years, can alleviate some of the pain associated with these maintenance tasks. As organizations’ infrastructures continue to expand and pull in different operating systems, namely Linux, Microsoft touts the open source PowerShell Core as the conduit through which IT will find administrative nirvana. But for many admins who get burned out from the constant demands of their jobs, it can be difficult to find the time to learn a new way to manage and configure systems.

SearchWindowsServer reached out to its contributors for their thoughts on Microsoft’s recent moves and whether they will ease the various challenges facing many IT departments.

Cloud-friendly a misnomer for some admins


Stuart Burns: A lot of Windows shops do not fully utilize automation technology. Linux administrators went through the same thing, but are further along the curve.

The trend with Microsoft server products is a shift from GUI-style management to a command-line-first approach. Old-style Windows administrators must learn to write scripts with a certain level of proficiency. But when everything is reduced to a PowerShell script that runs against an Azure environment, what is left for the administrator to do?

As the world shifts to infrastructure as code and software-defined data centers, the Windows administrator that wants to stay relevant must know how to code and handle the cloud as well as they know their current on-premises infrastructure.

Look for Microsoft to make inroads with cloud offerings


Adam Fowler: Despite all the hype around Azure and Office 365, many companies still need the basics, such as a file share.

Microsoft's spin is to put file shares in the cloud with Azure File Sync. The service offers abilities similar to the Distributed File System service, but takes them several steps further. Azure File Sync keeps recently accessed files local, while older files remain in Azure. Windows administrators can set it up for multiple sites and not worry about what data goes where.

I also like what I’ve seen about the new server tool, Project Honolulu, that compiles numerous management features in a nice web interface. It includes utilities such as Server Manager, Hyper-Converged Cluster Manager and Failover Cluster Manager for on-premises systems. While it’s very early in the project’s development, it shows promise that Microsoft has not forgotten its Windows Server customers who are not PowerShell aficionados.

The future will bring more hybrid interoperability. The Operations Management Suite (OMS) offering, for example, has a lot of server support. OMS can serve as the hub for log shipping and data analysis, along with health checks of the servers and the applications they run.

Is the chasm between Microsoft and its customers growing?  


Jonathan Hassell: Looking ahead, I expect Microsoft to further push its Azure services and cloud management services in general, including a de-emphasis on System Center in favor of Intune. I do not expect System Center Configuration Manager to last another five years.

There’s a disconnect between where big corporations are and where Microsoft is in terms of tech progress — and that gap is widening. Yes, Azure’s range of services is impressive. If you want to drop a quarter-million to call yourself a hybrid cloud user with Azure Stack, then fine. But there are still mainframes around. There are still critical line-of-business apps that run on Windows Server 2008 — even some on Windows Server 2003.

I also expect more navel-gazing about why Microsoft feels compelled to update Windows every six months in its Semi-Annual Channel. None of the IT people I talk to want that.

Server channels attempt to cater to two crowds

Brien Posey: One of the more intriguing recent developments for Windows Server admins is Microsoft’s new dual-channel release model, which addresses differing needs in the customer base.

Windows administrators generally fall into two different camps. On one side, there are administrators who prefer nonfrequent, well-tested, monolithic Windows Server releases. Microsoft has taken this approach over the last 20 years, with a new Windows Server version every two to three years.


On the other side of the equation are Windows Server admins who want to be on the bleeding edge of technology. For them, new Windows Server features cannot come quickly enough. They see frequent updates as a key to achieve business agility and maintain a competitive advantage.

In an effort to satisfy both sides, Microsoft now has two release channels for Windows Server.

This approach is the only way that Microsoft can make both camps happy. The open question is how easily a company could switch channels; that might not end up being a cheap or simple thing to do.

Cloud-hosted apps catching on to meet user demand

With a variety of available services, now’s a good time for IT administrators to consider whether cloud-hosted apps are a good option.

Offerings such as Citrix XenApp Essentials and Amazon Web Services (AWS) AppStream allow IT to stream desktop applications from the cloud to users’ endpoints. Workflow and automation provider IndependenceIT also has a new offering, AppServices, based on Google Cloud Platform. Organizations adopt these types of services to get benefits around centralized management, scalability and more.

More organizations are considering cloud-hosted apps, because IT needs to become a service provider to meet the growing application demands of both external customers and internal users, said Agatha Poon, research director at 451 Research.

“You get requirements from different teams, and all want to have quicker ways to get applications, quicker ways to deploy services,” Poon said. “So, then, you need some sort of mechanism … to support that.”

What application hosting services offer

Application streaming services are an alternative to on-premises application virtualization, in which organizations host applications in their own data centers.

XenApp Essentials and AppStream place an application in the cloud and let IT admins assign a group of users to it. But just delivering applications through the cloud is not enough; the app hosting service should also provide a way to manage the app publishing lifecycle. Otherwise, IT is left to connect data assets to the app and to set controls for where users are allowed to move the data.

Some app streaming services require organizations to use another set of tools for those management tasks. For example, in the case of AWS, IT must manually configure storage using the Amazon Simple Storage Service and connect it back to AppStream if it wants additional storage.

Swizznet, a hosting provider for accounting applications, adopted IndependenceIT's AppServices in September 2016 to deliver apps internally and to customers. The company moved away from XenApp Essentials because IndependenceIT provided more native management capabilities, said Mike Callan, CEO of Swizznet, based in Seattle.


“We wanted the ability to automatically scale and spin additional servers, where we could essentially have that capability automated instead of paying engineers to do that,” Callan said.

Citrix’s business problems over the past few years were also a factor in making the switch, Callan said.

“Citrix is, unfortunately, just a company more or less in disarray, so they haven’t been able to keep up with the value proposition that they once had,” he added.

Application streaming services can also help organizations deliver apps that they don’t have the resources to host on-premises. Cornell University has used Amazon AppStream 2.0 since early 2017 and took advantage of the new GPU-powered features that aim to reduce the cost of delivering graphics-intensive apps.

These features have opened up more kinds of software that Cornell can deliver to students, said Marty Sullivan, a DevOps cloud engineer at the university in Ithaca, N.Y. Software such as ANSYS, Dassault Systemes Solidworks, and Autodesk AutoCAD and Inventor help students and faculty run simulations and design mechanical parts, but they only perform well when a GPU is available.

“[Departments] will be able to deliver these specialized pieces of software without having to redevelop them for another platform,” Sullivan said.


Cloud market pushes app hosting forward

When it comes to cloud infrastructure services, Google ranked third behind AWS and Microsoft in Gartner’s 2017 Magic Quadrant. But Google Cloud Platform made the deployment of AppServices easy, Callan said. He was able to go through the auto-deployment quick-start guide and set it up himself in just a couple days.

The increasing reliance on cloud services and the rise of workers using multiple devices to get their jobs done are driving the app streaming trend. Providing company-approved cloud-hosted apps for such employees makes deployment and management easier. IT admins don’t have to physically load any apps on the machines, nor do the employees with the machines need to be present for IT to keep tabs on the usage and security of those apps.

Don’t get hung up on Office 365 Cloud PBX pitfalls

For IT administrators, the value of Microsoft's Office 365 Cloud PBX service is that it consolidates telephony services with email messages and cloud storage in one consumable portal. But make no mistake, this is not plug-and-play.

Admins must be sure their in-house business technology is compatible with the service to get all of its features. Office 365 Cloud PBX with public switched telephone network (PSTN) dialing capabilities enables workers to use Skype for Business Online to:

  • place, receive, transfer and mute calls;
  • click a name in the address book and call the contact; and
  • use mobile devices, a headset with a laptop or PC or an IP phone that works with Skype for Business.

However, the real benefit is that Cloud PBX integrates those features into the Office 365 portal. Admins manage all the Office 365 services, which include mailboxes and licenses, in one place and need only contact one vendor should a problem arise. But like any move to a cloud service, it requires planning and preparation.
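
Telephony settings can be scripted alongside the rest of Office 365. A sketch based on the Skype for Business Online Connector module as documented at the time of writing; the admin account is a placeholder:

# Connect a remote PowerShell session to Skype for Business Online.
Import-Module SkypeOnlineConnector
$session = New-CsOnlineSession -UserName "admin@contoso.onmicrosoft.com"
Import-PSSession $session

# Example query: list users enabled for Enterprise Voice (Cloud PBX).
Get-CsOnlineUser -Filter {EnterpriseVoiceEnabled -eq $true} |
    Select-Object DisplayName, LineURI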

Here are some benefits of Office 365 Cloud PBX and tips on how to easily transition to the cloud service.

Office 365 now fully replicates on premises

Many IT admins use the administrative console to handle some of the major applications within Office 365, such as Exchange, SharePoint, licensing and Skype for Business.

But Office 365 didn't fully replace on-premises servers until Microsoft included a PBX service in the E5 subscription plan. Office 365 Cloud PBX includes critical features, such as call queues and an automated attendant, that make the service comparable to — and, therefore, a full-blown replacement for — an on-premises PBX system.

Microsoft catches up on needed features

Businesses expect modern unified communications (UC) platforms to offer advanced features, such as collaboration tools, mobility, call routing, hunt groups, instant messaging, presence technology, voicemail on the go and the portability to take an extension or direct inward dialing anywhere users want. Businesses want to consume these platforms as a service and don't expect to purchase hardware beyond the users' handsets.

However, many admins found that Office 365 E5’s early release fell short. The main complaint was that it lacked two essential features: automated attendant functionality and call queues.


Microsoft finally released those capabilities to general Office 365 tenants in April 2017. The company now offers Skype for Business Online as a complete, hosted option with enterprise features and functions comparable to its on-premises counterpart. This means IT administrators can avoid the complexities and challenges of an on-premises voice over IP (VoIP) deployment while keeping the crucial features the enterprise needs.

Microsoft will replace Skype for Business Online with Microsoft Teams likely by 2020, a problematic development for companies that rely on the former for telephony services.

IT considerations before a move

The introduction of a cloud-based UC system requires planning and preparation. Consider the following checklist before you bring Office 365 Cloud PBX into the business.

Avoid points of failure: Like an email server, a phone is a critical communication component. Before you install Office 365 Cloud PBX, make sure your system has multiple reliable network connections. For example, a manufacturing firm located in a rural area can’t switch its phone system to the cloud without this redundancy.

Look into new handsets: Before an organization replaces its existing VoIP with Skype for Business, IT needs to determine if the legacy handsets work with Office 365 Cloud PBX. Microsoft supports several hardware vendors, but Skype for Business with PSTN might not be compatible with some handsets. Check your firmware requirements.

Consider compliance requirements: Security is always a concern when an enterprise moves data into the cloud. Office 365 provides functionality, such as specific rules and policies, to help enterprises meet compliance obligations in email messages, archives and e-discovery. Skype for Business includes similar capabilities to archive and search for messages and interactions. In addition, admins can access detailed audit trails on communications for security reviews.

Monitor usage to manage costs: IT admins that oversee corporate mobile devices should know how to monitor data usage; it helps them stay on budget, and it identifies which resources each user consumes. Similarly, Skype for Business offers domestic and international plans with a set number of minutes. IT admins should examine several reports to monitor those plans and manage costs.

Next Steps

Survey the entire landscape before an Office 365 move

Vendors struggle with mobile unified communications

Steps to use Skype for Business in your business