Part one of this two-part series on Microsoft 365 (formerly Office 365) security weaknesses examined some of the main misconfigurations that cause problems when trying to securely operate or migrate to the cloud-based Microsoft 365 suite of services. While knowing the challenges is half the battle, what about addressing those challenges? Based on our work with clients, our research data and a review of available information, Nemertes recommends the following 12 best practices to secure Microsoft 365.
Implement a Microsoft 365 cybersecurity task force. To address known concerns with Microsoft 365, we recommend enterprises form a cybersecurity team focused specifically on Microsoft 365 cybersecurity. This team should be responsible for the following:
educating itself on the known issues;
recommending remediations and best practices;
developing a security-based project plan for the Microsoft 365 migration;
working directly with any third-party providers to ensure migration and implementation align with best practices; and
working directly with Microsoft’s technical experts if issues arise.
Review Microsoft documentation. Microsoft maintains an extensive, daily growing library documenting security vulnerabilities — particularly those related to configuration issues. As a regular practice, the task force should review the library. Earlier this year, for example, Microsoft added a recommendation that businesses use Domain-based Message Authentication, Reporting and Conformance (DMARC) to validate and authenticate mail servers, ensuring destination email systems trust messages sent from company domains.
Using DMARC with Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) provides additional protection against spoofing and phishing emails. The library has hundreds of recommendations like this, so the task force should familiarize itself with the documentation and keep reviewing it as new guidance appears.
Enable and use DMARC, SPF and DKIM. When used together, these three protocols dramatically reduce the risk of spoofing and phishing. This guidance assumes Microsoft Exchange is your email service provider.
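As an illustration, the published DNS records for a hypothetical domain (example.com, with a Microsoft 365 tenant named contoso.onmicrosoft.com) might look like the following; the selectors, tenant name, policy and reporting address are placeholders that vary by organization:

```
; SPF: authorize Microsoft 365 to send mail for the domain
example.com.                      IN TXT   "v=spf1 include:spf.protection.outlook.com -all"

; DKIM: CNAMEs pointing at the tenant's Microsoft 365 signing keys
selector1._domainkey.example.com. IN CNAME selector1-example-com._domainkey.contoso.onmicrosoft.com.
selector2._domainkey.example.com. IN CNAME selector2-example-com._domainkey.contoso.onmicrosoft.com.

; DMARC: quarantine mail that fails SPF/DKIM alignment and send aggregate reports
_dmarc.example.com.               IN TXT   "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Many organizations start with a DMARC policy of p=none to gather reports before tightening to quarantine or reject.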
Enable multifactor authentication (MFA) by default, at the very least for administrator accounts and, ideally, for all accounts. The May 2019 U.S. Cybersecurity and Infrastructure Security Agency (CISA) report noted that MFA for administrator accounts isn’t enabled by default, yet Azure Active Directory (AD) global administrators in a Microsoft 365 environment have the highest level of administrator privileges at the tenant level. Modifying this configuration to require administrator MFA is a huge step toward ensuring security.
Enable mailbox auditing by default. The CISA report also revealed Microsoft didn’t enable auditing by default in Microsoft 365 prior to January 2019. The Microsoft 365 task force should ensure this step is enabled by default.
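Assuming the ExchangeOnlineManagement PowerShell module is available, the task force can verify and enable organization-wide mailbox auditing with commands along these lines (the admin account is a placeholder):

```powershell
# Connect to Exchange Online (requires the ExchangeOnlineManagement module)
Connect-ExchangeOnline -UserPrincipalName admin@example.com

# Check whether mailbox auditing is currently disabled org-wide
Get-OrganizationConfig | Select-Object AuditDisabled

# Enable auditing by default for all mailboxes in the organization
Set-OrganizationConfig -AuditDisabled $false
```

Note that Microsoft now enables mailbox auditing by default for new tenants, so this check mainly matters for tenants created before the change.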
Determine if password sync is required. By default, Azure AD Connect integrates on-premises environments with Azure AD when customers migrate to Microsoft 365. In this scenario, the on-premises password overwrites the password in Azure AD. Therefore, if the on-premises AD identity is compromised, then an attacker could move laterally to the cloud when the sync occurs. If password sync is required, the team should carefully think through the implications of a premises-based attack on cloud systems, or vice versa.
Move away from legacy protocols. Several protocols, including Post Office Protocol 3 (POP3) and Internet Message Access Protocol 4 (IMAP4), don’t effectively support authentication methods such as MFA. CISA recommended moving away from all legacy protocols.
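One way to cut off these legacy protocols in Exchange Online is the Set-CASMailbox cmdlet; the mailbox address below is illustrative:

```powershell
# Disable POP3 and IMAP4 for a single mailbox
Set-CASMailbox -Identity "jsmith@example.com" -PopEnabled $false -ImapEnabled $false

# Or disable them for every existing mailbox in the tenant
Get-CASMailbox -ResultSize Unlimited |
    Set-CASMailbox -PopEnabled $false -ImapEnabled $false
```

This only covers existing mailboxes; new mailboxes inherit the tenant defaults, which should be adjusted as well.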
Upgrade all software and OSes prior to migration. Earlier versions of Microsoft software, such as Office 2007, have known security vulnerabilities and weaker protection thresholds. Upgrade all software to current versions prior to migrating to Microsoft 365.
Test all third-party applications before integrating them into Microsoft 365. If you are using Microsoft 365 in conjunction with third-party applications — developed in-house or by outside companies — be sure you conduct solid cybersecurity testing before integrating them with Microsoft 365.
Develop and implement a backup and business continuity plan. Many organizations wrongly assume that, because Microsoft 365 is cloud-based, it is automatically backed up. That’s not the case; Microsoft uses replication rather than traditional data backup methods. As a result, it can’t guarantee an organization’s files will remain available if files are compromised through ransomware or accidental deletion.
Implement cloud-based single sign-on (SSO). Known vulnerabilities in Microsoft 365’s security protocols involve using cross-domain authentication to bypass federated domains. The best approach to mitigating these issues is to deploy SSO as a service from a provider such as identity and access management company Okta or identity security company Ping Identity.
Assess your Microsoft Secure Score and Compliance Score. Microsoft has developed two registries for Microsoft 365: Secure Score and Compliance Score. These registries list hundreds of steps customers should take to improve their overall scores and include a way to indicate whether they’ve done it, not done it yet or accept the risk. Secure Score is aimed at traditional security, such as “Did you enable MFA?” Compliance Score offers a general assessment, as well as regulation-specific assessments, such as GDPR and the California Consumer Privacy Act.
Microsoft 365 security effort requires focus
In summary, Microsoft 365 is peppered with cybersecurity vulnerabilities, in its architecture and design and in the default configuration. The known vulnerabilities and best practices discussed here are just a start. What’s more important is that enterprise technology pros maintain a focused and ongoing cybersecurity effort to protect their environments.
Organizations are facing a lot of pressure to migrate to Microsoft 365. Nemertes believes the platform’s cybersecurity challenges can be overcome with effort and attention. In particular, it is vital to have a Microsoft 365 cybersecurity task force. This is not an optional component of any migration to Microsoft 365. That means companies need to consider the cost and effort involved in creating and maintaining an ongoing Microsoft 365 task force when computing the ROI of migrating to the platform. If the perceived benefit of agility and a cloud-based environment exceeds the cost of maintaining a focused internal group, a move to Microsoft 365 is warranted.
To examine the history of PowerShell requires going back to a time before automation, when point-and-click administration ruled.
In the early days of IT, GUI-based systems management was de rigueur in a Windows environment. You added a new user by opening Active Directory, clicking through multiple screens to fill in the name, group membership, logon script and several other properties. If you had dozens or hundreds of new users to set up, it could take quite some time to complete this task.
To increase IT efficiency, Microsoft produced a few command-line tools. These initial automation efforts in Windows — batch files and VBScript — helped, but they did not go far enough for administrators who needed undiluted access to their systems to streamline how they worked with the Windows OS and Microsoft’s ever-growing application portfolio.
It wasn’t until PowerShell came out in 2006 that Microsoft gave administrators something that approximated shell scripting in Unix. PowerShell is both a shell — used for simple tasks such as gathering the system properties on a machine — and a scripting language to execute more advanced infrastructure jobs. Each successive PowerShell release came with more cmdlets, updated functionality and refinements to further expand the administrator’s dominion over Windows systems and their users. In some products, PowerShell is the only way to make certain adjustments.
Today, administrators widely use PowerShell to manage resources both in the data center and in the cloud. It’s difficult to comprehend now, but shepherding a command-line tool through the development gauntlet at a company that had built its brand on the Windows name was a difficult proposition.
Don Jones currently works as vice president of content partnerships and strategic initiatives for Pluralsight, a technology skills platform vendor, but he’s been fully steeped in PowerShell from the start. He co-founded PowerShell.org and has presented PowerShell-related sessions at numerous tech conferences.
Jones is also an established author, and his latest book, Shell of an Idea, gives a behind-the-scenes history of PowerShell that recounts the challenges faced by Jeffrey Snover and his team.
In this Q&A, Jones talks about his experiences with PowerShell and why he felt compelled to cover its origin story.
Editor’s note: This interview has been edited for length and clarity.
What was it like for administrators before PowerShell came along?
Don Jones: You clicked a lot of buttons and wizards. And it could get really painful. Patching machines was a pain. Reconfiguring them was a pain. And it wasn’t even so much the server maintenance — it was those day-to-day tasks.
I worked at Bell Atlantic Network Integration for a while. We had, maybe, a dozen Windows machines and we had one person who basically did nothing but do new user onboarding: creating the domain account and setting up the mailbox. There was just no better way to do it, and it was horrific.
I started digging into VBScript around the mid-1990s and tried to automate some of those things. We had a NetWare server, and you periodically had to log on, look for idle connections and disconnect them to free up a connection for another user if we reached our license limit. I wrote a script to do something a human being was sitting and doing manually all day long.
This idea of automating — that was just so powerful, so tremendous and so life-affirming that it became a huge part of what I wound up doing for my job there and jobs afterward.
Do you remember your introduction to PowerShell?
Jones: It was at a time when Microsoft was being a little bit more free talking about products that they were working on. There was a decent amount of buzz about this Monad shell, which was its code name. I felt this was clearly going to be the next thing and was probably going to replace VBScript from what they were saying.
I was working with a company called Sapien Technologies at the time. They produce what is probably still the most popular VBScript code editor. I said, ‘We’re clearly going to have to do something for PowerShell,’ and they said, ‘Absolutely.’ And PrimalScript was, I think, the first non-Microsoft tool that really embraced PowerShell and became part of that ecosystem.
That attracted the attention of Jeffrey Snover at Microsoft. He said, ‘We’re going to launch PowerShell at TechEd Europe 2006 in Barcelona, [Spain], and I’d love for you to come up and do a little demo of PrimalScript. We want to show people that this is ready for prime time. There’s a partner ecosystem. It’s the real deal, and it’s safe to jump on board.’
That’s where I met him. That was the first time I got to present at a TechEd and that set up the next large chapter of my career.
What motivated you to write this book?
Jones: I think I wanted to write it six or seven years ago. I remember being either at a TechEd or [Microsoft] Ignite at a bar with [Snover], Bruce Payette and, I think, Ken Hansen. You’re at a bar with the bar-top nondisclosure agreement. And they’re telling these great stories. I’m like, ‘We need to capture that.’ And they say, ‘Yeah, not right now.’
Jones: I’m not sure what really spurred me. Partly it was because my career has moved to a different place. I’m not in PowerShell anymore. I felt being able to write this history would be, if not a swan song, then a nice bookend to the PowerShell part of my career. I reached out to a couple of the guys again, and they said, ‘You know what? This is the right time.’ We started talking and doing interviews.
As I was going through that, I realized the reason it’s the right time is because so many of them are no longer at Microsoft. And, more importantly, I don’t think any of the executives who had anything to do with PowerShell are still at Microsoft. They left around 2010 or 2011, so there’s no repercussions anymore.
Regarding Jeffrey Snover, do you think if anybody else had been in charge of the PowerShell project that it would have become what it is today?
Jones: I don’t think so. By no means do I want to discount all the effort everyone else put in, but I really do think it was due to [Snover’s] absolute dogged determination, just pure stubbornness.
He said, ‘Bill Gates got it fairly early.’ And even Bill Gates getting it and understanding it and supporting it didn’t help. That’s not how it worked. [Snover] really had to lead them through some — not just people who didn’t get it or didn’t care — but people who were actively working against them. There was firm opposition from the highest levels of the company to make this stop.
Because you got in close to the ground floor with PowerShell, were you able to influence any of its functionality from the outside?
Jones: Oh, absolutely. But it wasn’t really just me. It was all the PowerShell MVPs. The team had this deep recognition that we were their biggest fans — and their biggest critics.
They went out of their way to do some really sneaky stuff to make sure they could get our feedback. As Windows Vista moved into Windows 7, there was a lot of secrecy. Microsoft knew it had botched — perceptually if nothing else — the Vista release. They needed Windows 7 to be a win, and they were playing it really close to the vest. For them to show us anything that had anything to do with Windows 7 was verboten at the highest levels of the company. Instead, they came up with this idea of the “Windows Vista update,” which was nothing more than an excuse to show us PowerShell version 3 without Windows 7 being in the context.
They wanted to show us workflows. They put us in a room and they not only let us play with it and gave us some labs to run through, but they had cameras running the whole time. They said, ‘Tell us what you think.’
I think nearly every single release of PowerShell from version 2 onward had a readme buried somewhere. They listed the bug numbers and the person who opened it. A ton of those were us: the MVPs and people in the community. We would tell the team, ‘Look, this is what we feel. This is what’s wrong and here’s how you can fix it.’ And they would give you fine-print credit. Even before it went open source, there was probably more community interaction with PowerShell than most Microsoft products.
I came from the perspective of teaching. By the time I was really in with PowerShell, I wasn’t using it in a production environment. I was teaching it to people. My feedback tended to be along the lines of, ‘Look, this is hard for people to grasp. It’s hard to understand. Here’s what you could do to improve that.’ And a lot of that stuff got adopted.
Was there any desire — or an offer — to join Microsoft to work on PowerShell directly?
Jones: If I had ever asked, it probably could have happened. I had had previous dealings with Microsoft, as a contractor, that I really enjoyed.
I applied for a job there — and I did not enjoy how that went down.
My feeling was that I was making a lot more money and having a lot more impact as an independent.
What is your take on PowerShell since it made the switch to an open source project?
Jones: It’s been interesting. PowerShell 6, which was the first cross-platform, open source release, was a big step backward in a lot of ways. Just getting it cross-platform was a huge step. You couldn’t take the core of PowerShell and, at that point, 11 years of add-on development and bring it all with you at once. I think a lot of people looked at it as an interesting artifact.
[In PowerShell 7], they’ve done so much work to make it more functional. There’s so much parity now across macOS, Linux and Windows. I feel the team tripled down and really delivered and did exactly what they said they were going to do.
I think a lot more people take it seriously. PowerShell is now built into the Kali Linux distribution because it’s such a good tool. I think a lot of really hardcore, yet open-minded, Linux and Unix admins look at PowerShell and — once they take the time to understand it — they realize this is what shells structurally should have been.
I think PowerShell has earned its place in a lot of people’s toolboxes and putting it out there as open source was such a huge step.
Do you see PowerShell ever making any inroads with Linux admins?
Jones: I don’t think they’re the target audience. If you’ve got a tool that does the job, and you know how to use it, and you know how to get it done, that’s fine.
We have a lot of home construction here in [Las] Vegas. I see guys putting walls up with a hammer and nails. Am I going to force you to use a nail gun? No. Are you going to be a lot faster? Yes, if you took a little time to learn how to use it. You never see younger guys with the hammer; it’s always the older guys who’ve been doing this for a long, long time.
I feel that PowerShell has already been through this cycle once. We tried to convince everyone that you needed to use PowerShell instead of the GUI, and a lot of admins stuck with the GUI. That’s a fairly career-limiting move right now, and they’re all finding that out. They’re never going to go any further. The people who picked it up, they’re the ones who move ahead.
The very best IT people reach out for whatever tool you put in front of them. They rip it apart and they try to figure out how this is going to make my job better, easier, faster, different, whatever. They use all of them.
You don’t lose points for using PowerShell and Bash. It would be stupid for Linux administrators to fully commit to PowerShell and only PowerShell, because you’re going to run across systems that have this other thing. You need to know them both.
Microsoft has released a lot of administrative tools — you’ve got PowerShell, Office 365 CLI and Azure CLI to name a few. Someone new to IT might wonder where to concentrate their efforts when there are all these options.
Jones: You get a pretty solid command-line tool in the Azure CLI. You get something that’s very purpose-specific. It’s scoped in fairly tightly. It doesn’t have an infinite number of options. It’s a straightforward thing to write tutorials around. You’ve got an entire REST API that you can fire things off at. And if you’re a programmer, that makes a lot more sense to you and you can write your own tools around that.
PowerShell sits kind of in the middle and can be a little bit of both. PowerShell is really good at bringing a bunch of things together. If you’re using the Azure CLI, you’re limited to Azure. You’re not going to use the Azure CLI to do on-prem stuff. PowerShell can do both. Some people don’t have on-prem, they don’t need that. They just have some very simple basic Azure needs. And the CLI is simpler and easier to start with.
Where do you see PowerShell going in the next few years?
Jones: I think you’re going to continue to see a lot of investment both by Microsoft and the open source community. I think the open source people have — except for the super paranoid ones — largely accepted that Microsoft’s purchase of GitHub was not inimical. I think they have accepted that Microsoft is really serious about open source software. I think people are really focusing on making PowerShell a better tool for them, which is really what open source is all about.
I think you’re going to continue to see it become more prevalent on more platforms. I think it will wind up being a high common denominator for hiring managers who understand the value it brings to a business and some of the outcomes it helps achieve. Even AWS has invested heavily in their management layer in PowerShell, because they get it — also because a lot of the former PowerShell team members now work for AWS, including Ken Hansen and Bruce Payette, who invented the language.
I suspect that, in the very long run, it will probably shift away from Microsoft control and become something a little more akin to Mozilla, where there will be some community foundation that, quote unquote, owns PowerShell, where a lot of people contribute to it on an equal basis, as opposed to Microsoft, which is still holding the keys but is very engaged and accepting of community contributions.
I think PowerShell will probably outlive most of its predecessors over the very long haul.
In today’s IT world, you can have workloads on premises and in the cloud. One common denominator for each location is a need to plan for disaster recovery. Azure Site Recovery is one option for administrators who need a way to cover every scenario.
Azure Site Recovery is a service used to protect physical and virtual Windows or Linux workloads outside of your primary data center and its traditional on-premises backup system. During the Azure Site Recovery setup process, you can choose either Azure or another data center for the replication target. In the event of a disaster, such as a power outage or hardware failure, your apps can continue to operate in the Azure cloud to minimize downtime. Azure Site Recovery also supports cloud failover of both VMware and Hyper-V virtual infrastructures.
One of the real advantages of this Azure service for a Windows shop is integration. All the functionality is built right into the admin portal and requires little effort to configure beyond the agent installation, which can be done automatically. Offerings from other vendors, such as Zerto and Veeam, work the same way but require additional configuration using a management suite based outside the Azure portal.
Azure Site Recovery pricing
One of the big issues for any platform is cost. Each protected instance costs $25 per month with additional fees for the Azure Site Recovery license, storage in Azure, storage transactions and outbound data transfer. Organizations interested in testing the service can use it for free for the first 31 days.
As with most systems, there are caveats, including how replication and recovery are tied to specific Azure regions depending on the location of the cluster. There is a list of supported configurations in Microsoft’s documentation.
Azure includes the option to fail over to an on-premises location, which reduces the cost to $16 per instance. However, this option requires meeting bandwidth requirements that are not a factor in an Azure-to-Azure failover scenario.
Azure Site Recovery uses vaults to store workload dependencies
Most disaster recovery (DR) environments rely on the concept of crash-consistent applications, meaning the application fails over as a whole with all its dependencies. In Azure, you store the VM backups, their respective recovery points and the backup policies in a vault.
These vaults should contain all the servers that make up the services required for a successful failover. (You should test before an emergency occurs to make sure it functions as expected.) It is possible to fail over individual VMs within a replication group if needed; until recently, this was an all-or-nothing scenario.
How to create a Recovery Services vault
For this Azure Site Recovery setup tutorial, we’ll cover how to configure VMs for site-to-site replication between regions via the Azure portal (portal.azure.com).
As with most Azure tools, the Disaster Recovery menu is on the left-hand side with the other Azure services. Under this menu is the Recovery Services vault option. Create one by filling in the fields as shown in Figure 1.
When you have entered all your specifications, click Create to build the vault. The next step is to choose the purpose for the vault. The choice is either for backup or DR.
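For teams that prefer scripting over the portal, a Recovery Services vault can also be created with the Az PowerShell module; the resource group, vault name and region below are placeholders:

```powershell
# Create a resource group to hold the vault (names and region are examples)
New-AzResourceGroup -Name "DR-RG" -Location "EastUS"

# Create the Recovery Services vault in that resource group
New-AzRecoveryServicesVault -Name "MyVault" -ResourceGroupName "DR-RG" -Location "EastUS"

# Set the vault context so subsequent Site Recovery cmdlets target this vault
$vault = Get-AzRecoveryServicesVault -Name "MyVault" -ResourceGroupName "DR-RG"
Set-AzRecoveryServicesVaultContext -Vault $vault
```

This requires the Az.RecoveryServices module and an authenticated session (Connect-AzAccount).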
Next, add the VMs. To start, from the vault choices select Site Recovery.
From the on-premises option, click Replicate Application to open a wizard to add VMs. Next, click Review + Start replication to start the creation and replication process, which can take several minutes. For ease of access and experimentation purposes, I suggest pinning it to your dashboard. Opening the vault provides a health overview of the site and clicking on each item shows details about the replication status as shown in Figure 2.
This completes the creation of a group with two protected VMs. Every VM added to that resource group automatically becomes a protected member of the vault. By default, the DR failover is set to a maximum duration of 24 hours. After the initial configuration, you can adjust the failover duration and snapshot frequency from the Site Recovery Policy – Retention policies page.
The last step is to create a recovery plan. From the vault, select Create recovery plan and then + Recovery plan and give it a name as shown in Figure 3; in our example, it is called MyApplicationRecoveryPlan. Choose the source, either the on-premises location or Azure, and then the Azure target.
When complete, open the plan and verify it works properly by clicking Test for a nondisruptive assessment that checks the replication in an isolated environment. This process can detect any problems related to services and connectivity the application needs to function in a failover setting.
This tutorial covers some of the basic functionality of Azure Site Recovery. For more granular control, there are many more options available to provide advanced functionality.
Virtual machines and containers are both types of virtualized workloads that have more similarities than you may think. Each serves a specific purpose and can significantly increase the performance of your infrastructure — as long as they are employed effectively.
Microsoft unveiled container support in Windows Server 2016, which might have seemed like a novelty feature to many Windows administrators. But now that containers and the surrounding technology — orchestration, networking and storage — have matured in the Windows Server 2019 release, is it time to give containers more thought?
How do you make the decision on when to use VMs vs. containers? Is there a tipping point when you should make a switch? To help steer your decision, let’s compare containers and virtual machines in three key areas: reliability, scalability and manageability.
When it comes to weighing the options, the difference in reliability is one of the first questions any engineer will ask. Although uptime ultimately depends on the engineers and engineering behind the technology, you can infer a lot about their dependability by analyzing the security and maintenance costs.
VMs. VMs are big, heavyweight monoliths. This isn’t a comment about speed, because VMs can be blazingly fast. They’re considered monoliths because each contains a full stack of technology: virtualized hardware, an operating system and even more software, all layered on top of each other in one package.
The advantage of utilizing VMs becomes apparent when you drill down to the hypervisor. VMs have full isolation between themselves and any other VM running on the same hardware or in the same cluster. This is highly secure; you can’t directly attack one VM in a cluster from another.
The other reliability advantage is longevity. People have been using VMs in Windows production environments for about 20 years. There are a large number of engineers with vast amounts of experience managing, deploying and troubleshooting VMs. If an issue with a VM arises, there’s a good chance it’s not a unique occurrence.
Containers. Containers are lightweight and less hardware-intensive because they aren’t running a full software stack. A container can be thought of as a wrapper around a process or application that can run in a stand-alone fashion.
You can run many containers on the same VM; due to this, you don’t have full isolation in containers. You do have process isolation, but it’s not as absolute as it is with a VM. This can cause some difficulties in spinning up and maintaining containers when determining how to parcel out resources.
Additionally, because containers are relatively new compared to VMs, you might have trouble finding engineers with a similar amount of career dedication to their management. There are additional technologies to bring in to help with administration and orchestration, and the learning curve to get started is generally seen as higher compared with more traditional technologies, such as VMs.
Scalability is the capability of the technology to maximize utilization across your environment. When you’re ready for your application to be accessed by tens of thousands of people, scalability is your friend.
VMs. VMs take a long time to spin up and deploy. Cloud technology such as AWS Auto Scaling and Azure Virtual Machine Scale Sets build out clones of the same VM and load-balance across them. While this is one way to reach scale, it’s a little clunky because of the VM spin-up time.
For a one-off application, VMs can host it and work well, but when it comes to reaching the masses, they can fall short. This is particularly true when attempting to use non-cloud-native automation to scale VMs. The sheer time difference between a VM deployment and a container deployment can cause your automation to go haywire.
Containers. Containers were built for scale. You can spin up one or a hundred new containers in milliseconds, which makes automation and orchestration with native cloud tooling a breeze.
Scale is so innate to containers that the real question is, “How far do you want to go?” You can use IaaS on AWS or Azure with your own Kubernetes orchestration, or take it one step further with PaaS technologies such as AWS Fargate or Azure Container Instances.
Once you have your VMs or containers running in production, you need a way to manage them. Deploying changes, updating software and even rotating technologies all fall under this purview.
VMs. There are scores of third-party tools to manage VMs, such as Puppet, Chef, System Center Configuration Manager and IBM BigFix. Each does software deployment, runs queries on your environment, and even performs more complex desired state configuration tasks. There are also a host of vendor tools to manage your VMs inside VMware, Citrix and Hyper-V.
VMs require care and feeding. Usually, when you create a VM, it follows a lifecycle from spin-up to its sunset date. In between, it requires maintenance and monitoring. This runs contrary to newer methodologies such as DevOps, infrastructure as code and immutable infrastructure. In these paradigms, servers and services are treated like cattle, not pets.
Containers. Orchestration and immutability are the hallmarks of containers. If a container breaks, you kill it and deploy another one without a second thought. There is no backup and restore procedure. Instead of spending time modifying or maintaining your environment, you fix a container by destroying it and creating a new one. VMs, because of the associated time and maintenance costs, simply can’t keep up with containers in this respect.
Containers are tailored for DevOps; they are a component of the infrastructure that treats developers and infrastructure operators as first-class citizens. Layering the new methodology on new technology allows for a faster way to get things done by reducing the complexities tied to workload management.
Which is the way to go?
In the contest of VMs vs. containers, which one wins? The answer depends on your IT team and your use case. There are instances where VMs will continue to have an advantage and others where containers are a better choice. This comparison has just scratched the surface of the technical differences, but there are financial advantages to consider as well.
In a real-world environment, you will likely need both technologies. Monolithic VMs make sense for more solid and stable services such as Active Directory or the Exchange Server platform. For your development team and your homegrown apps utilizing the latest in release pipeline technology, containers will help them get up to speed and scale to the needs of your organization.
Nearly every system administrator has to deal with scheduled tasks. They are incredibly helpful for running jobs based on various triggers, but they require a lot of manual effort to configure properly.
The benefit of scheduled tasks is you can build one with a deep level of sophistication with trigger options and various security contexts. But where complexity reigns, configuration errors can arise. When you’re developing these automation scripts, you can create a scheduled task with PowerShell to ease the process. Using PowerShell helps standardize the management and setup work involved with intricate scheduled tasks, which has the added benefit of avoiding the usual errors that stem from manual entry.
Build a scheduled task action
At a minimum, a scheduled task has an action, a trigger and a group of associated settings. Once you create the task, you also need to register it on the system. You create each of these components separately, then combine them into a single scheduled task.
To create the action, use the New-ScheduledTaskAction cmdlet, which specifies the command to run. Let's create an action that gets a little meta and invokes a PowerShell script.
The command below gives an example of invoking the PowerShell engine and passing a script to it using all of the appropriate command line switches to make the script run noninteractively. The script file resides on the machine the scheduled task will be running on.
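A command along these lines fits the bill; the variable name and the script path are placeholders to substitute with your own:

```powershell
# Invoke powershell.exe noninteractively against a script on the local machine.
# The script path is a placeholder for your own file.
$Action = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -NonInteractive -ExecutionPolicy Bypass -File "C:\Scripts\MyScript.ps1"'
```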
Next, you need a trigger. You have several values available, but this task will use a specific time, 3 a.m., to execute this script once. For a full list of options, check out the New-ScheduledTaskTrigger cmdlet help page.
$Trigger = New-ScheduledTaskTrigger -Once -At 3am
Next, create the scheduled task using the New-ScheduledTask command. This command requires a value for the Settings parameter, even if you're not using any special settings. This is why you run New-ScheduledTaskSettingsSet to create a default settings object to pass in here.
$Settings = New-ScheduledTaskSettingsSet
Create the scheduled task
After assigning all the objects as variables, pass each of these variables to the New-ScheduledTask command to create a scheduled task object.
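Assuming the $Action, $Trigger and $Settings variables created in the earlier steps, the call looks like this:

```powershell
# Combine the action, trigger and settings into one in-memory task object
$Task = New-ScheduledTask -Action $Action -Trigger $Trigger -Settings $Settings
```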
At this point, you have created a scheduled task object in memory. To add the scheduled task on the computer, you must register the scheduled task using the Register-ScheduledTask cmdlet.
The example below registers a scheduled task to run under a particular username. To run the task under a certain user’s context, you have to provide the password. It’s helpful to look at the documentation for the Register-ScheduledTask command to see all the options to use with this cmdlet.
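A sketch of the registration follows; the task name, user account and password are placeholders for your own values:

```powershell
# Register the in-memory task object on the local machine under a specific account.
# The credentials shown are hypothetical placeholders.
Register-ScheduledTask -TaskName 'NightlyScript' -InputObject $Task `
    -User 'CONTOSO\taskuser' -Password 'P@ssw0rd'
```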
Microsoft will make several changes to the Office 365 platform this year that will affect enterprise users. Email client changes and new features in the Office suite and subscriptions can increase support calls, but administrators can help themselves through training and engagement.
Microsoft, which was once tolerant of customers on older products, is pushing customers to adopt the latest Windows 10 build and Office suite to take advantage of new Office 365 functionality and capabilities. At time of publication, the Office 365 roadmap shows nearly 250 features in development with nearly 150 rolling out. Some of the changes include:
After October 2020, only Office 2019 and Office ProPlus will be allowed to connect to Office 365 services, such as email on Exchange Online and SharePoint Online;
Microsoft Outlook will receive several changes to its user interface throughout 2020;
Office Groups and Microsoft Teams will be the focus for collaboration tool development;
Office ProPlus is no longer supported on Windows 8.1, Windows 7 or older on the client side, or on Windows Server 2012, 2012 R2 and 2016 on the server side.
Given the number of updates in the works, many administrators realize that the wave of change will affect many of their users, especially if it requires upgrading any legacy Office suite products such as Office 2013, 2016 and even 2010. To ensure a smooth transition with many of the new Office 365 tools and expected changes, IT workers must take several steps to prepare.
Develop an Office 365 or Office 2019 adoption plan
One of the first steps for IT is to plot out a strategy that outlines the upcoming changes and what needs to be done to complete the adoption process. During this step, the IT team must detail the various software changes to implement — upgrades to the Office suite, introduction of Microsoft Teams and other similar items. The adoption plan can define the details around training material, schedules, resources and timelines needed.
Identify platform champions to help encourage adoption
To gain the trust of end users and keep them invested in the upcoming Office 365 roadmap features, administrators must identify a few platform champions within the business to help build support within the end-user groups and outside of IT.
Build excitement around the upcoming changes
Changes are generally met with some resistance from end users, and this is especially the case when it comes to changing tools that are heavily used such as Outlook, Word, Excel and certain online services. To motivate end users to embrace some of the new applications coming out in 2020, administrators must highlight the benefits such as global smart search, a new look and feel for the email client and several enhancements coming in Microsoft Teams.
Be flexible with training materials and methods
Everyone learns differently, so any training content that administrators provide to the end users must come in several formats. Some of the popular delivery mechanisms include short videos, one-page PDF guides with tips and tricks, blog postings and even podcasts. One other option is to outsource the training process by using a third-party vendor that can deliver training material, tests and other content through an online learning system. Some of the groups that offer this service include BrainStorm, Microsoft Learning and Global Knowledge Training.
Monitor progress and highlight success stories
Once IT begins to roll out the adoption plan and the training to the end users, it is important to monitor the progress by performing frequent checks to identify the users actively participating in the training and using the different tools available to them. One way for the administrators to monitor Office activation is through the Office 365 admin portal under the reports section. Some of the Office usage and activation reports will identify who is making full use of the platform and the ones lagging behind who might require extra assistance to build their skills.
Stay on top of the upcoming changes from Microsoft
End users are not the only ones who need training. Given the fast rate that the Office 365 platform changes, IT administrators have a full-time job in continuing to review the new additions and changes to the applications and services. Online resources like Microsoft 365 Roadmap and blog posts by Microsoft and general technology sites provide valuable insights into what is being rolled out and what upcoming changes to expect.
Share stories and keep the door open for continuous conversations
Microsoft Teams and Yammer are good channels for administrators to interact with their end users as they adopt new Office 365 tools. These services give end users a way to share feedback and let others join the conversation, which helps IT gauge the overall sentiment around the changes in Office 365. They also give IT an avenue to announce major future changes and evaluate how end users respond.
Adding a Windows Server 2019 domain controller is not complicated, but deciding whether to move this integral infrastructure component to a new version of Windows Server or put it in the cloud is another matter.
There are many ways to perform identity and access management in the enterprise, but the pervading choice for most organizations over the last 20 years has been Active Directory in Windows Server. Active Directory, introduced with Windows 2000 Server, is the umbrella name for the directory service platform that stores sensitive information, organizes users, devices, applications and data across your organization and determines the access level of each. Active Directory helps facilitate single sign-on, which takes your domain credentials and handles the authorization — this determines which resources you have a right to use — for things such as a particular printer on the network or a certain cloud service.
Domain controllers handle user authentication in Active Directory and store key data, such as security certificates, that the Active Directory Domain Services role needs to function. The domain controller is the gateway for administrators to manage Active Directory, which makes it an attractive target for anyone trying to get inside your network.
Microsoft’s push to the cloud
One major initiative from Microsoft is its cloud-based directory service called Azure Active Directory (AD). The name might imply this offering is simply Active Directory with an Azure stamp, but that's not entirely accurate.
Azure AD forms one part of Microsoft’s identity management puzzle in the cloud. It controls authentication for cloud-based resources such as Office 365 and other SaaS apps. Organizations with a Windows Server 2019 domain controller — or one based on earlier Windows Server versions — have the option to sync on-premises data with Azure AD to streamline the authentication process.
To fully emulate on-premises Active Directory in the cloud, customers must have a separate service called Azure Active Directory Domain Services (Azure AD DS). This enables them to set up a managed domain in the cloud. Azure AD DS offers many of the same features as on-premises Active Directory, including domain joins, organizational unit structure and Group Policy.
If Azure AD DS does not have feature parity with on-premises Active Directory, why would an administrator want to switch to a domain controller as a service? For one, Azure has more active platform development with quicker rollout of fixes and new features. Unlike a Windows Server 2019 domain controller, Azure AD DS does not require hands-on management from IT. Microsoft controls the security update deployment process and resource administration. Azure AD DS integrates with Microsoft’s cloud security products, such as the Azure Security Center, for presumably better protection from hack attempts.
However, relying on Microsoft for critical identity and authentication needs can introduce different problems once you relinquish your control. Outages still occur no matter where you locate your infrastructure. On-premises Active Directory systems can avoid or mitigate outages by failing over to a geo-distributed deployment in the event of a disaster, but Azure AD DS doesn’t have this capability. Organizations seeking to emulate this failover ability must put the domain controllers in an Azure IaaS VM. Many organizations are still bound by data governance and other regulatory concerns and cannot risk exposure of sensitive data, even with Azure’s extensive compliance certifications.
Then, consider the potential cost difference: When you set up a Windows Server 2019 domain controller, the licensing fee covers your usage. Microsoft bills the use of Azure AD DS by the hour and, unlike an Azure VM, you can’t push the pause button on a domain Azure AD DS manages. Once you create the domain in Azure, the charges don’t stop until you delete the managed domain.
Windows Server 2019 benefits, caveats
Although Windows Server 2019 turns 2 years old in October, many IT admins still have reservations about moving their Active Directory setup to the new server OS. There is a prevailing attitude that older OSes have been battle-tested and, therefore, should be more reliable. In addition, with older systems, another customer has likely experienced a particular issue you might run up against, so a quick Google search could find a remedy.
While you won’t encounter any Active Directory forest and domain functional level changes from Windows Server 2016 to Windows Server 2019, a migration to the new operating system comes with overall security improvements and added resiliency in the Hyper-V platform. For example, Microsoft introduced a new feature in virtualized environments that enables administrators to move failover clusters from one domain to another during consolidation efforts. Before Windows Server 2019, this option didn't exist, and administrators had to remove the cluster and rebuild it on the new domain from scratch.
Organizations that use on-premises Exchange should avoid migrating to Windows Server 2019 Active Directory unless they have Exchange 2016 or newer. While this configuration might work with earlier versions of Exchange, it isn’t supported by Microsoft.
This video tutorial by contributor Brien Posey explains how to set up the Windows Server 2019 domain controller. The transcript of these instructions follows.
Transcript – Use a Windows Server 2019 domain controller or go to Azure?
In this video, I will show you how to set up a domain controller in Windows Server 2019.
I’m logged into the Windows Server 2019 desktop. I’m going to go ahead and open Server Manager.
The process of setting up a domain controller is really similar to what you had in the previous Windows Server version.
Go up to Manage and select Add roles and features. This launches the wizard.
Click Next to bypass the Before you begin screen. I’m taken to the Installation type menu. I’m prompted to choose Role-based or feature-based installation or Remote Desktop Services installation. Choose the role-based or feature-based installation option and click Next.
I’m prompted to select my server from the pool. There’s only one server in here. This is the server that will become my domain controller. One thing I want to point out is to look at the operating system. This is Windows Server 2019 Datacenter edition; in a few minutes, you’ll see why I’m pointing this out. Click Next.
At the Server Roles menu, there are two roles that I want to install: Active Directory Domain Services and the DNS roles. Select the checkbox for Active Directory Domain Services. When I select that checkbox, I’m prompted to add some additional features. I’ll go ahead and select the Add Features button.
I’m also going to select the DNS Server checkbox and, once again, click on Add Features. Click Next.
Click Next on the Features menu. Click Next again on the AD DS menu. Click Next on the DNS menu.
I’m taken to the confirmation screen. It’s a good idea to take a moment and just review everything to make sure that it appears correct. Click Install. After a few minutes, the installation completes.
I should point out that the server was provisioned ahead of time with a static IP address. If you don’t do that, then you’re going to get a warning message during the installation wizard. Click Close.
The next thing that we need to do is to configure this to act as a domain controller. Click on the notifications icon. You can see there is a post-deployment configuration task that’s required. In this case, we need to promote the server to domain controller. Do that by clicking on the link, which opens the Active Directory Domain Services Configuration Wizard.
I’m going to create a new forest, so I’ll click the Add a new forest button. I’m going to call this forest poseylab.com and click Next.
On the domain controller options screen, you’ll notice that the forest functional level is set to Windows Server 2016. There is no Windows Server 2019 option — at least, not yet. That’s the reason that I pointed out earlier that we are indeed running on Windows Server 2019. Leave this set to Windows Server 2016. Leave the default selections on the domain controller capabilities. I need to enter and confirm a password, so I’ll do that and click Next.
Click Next again on the DNS options screen.
The NetBIOS domain name is populated automatically. Click Next.
Go with the default paths for AD DS database, logs and SYSVOL. Click Next.
Everything on the Review options screen appears to be correct, so click Next.
Windows will do a prerequisites check. We have a couple of warnings, but all the prerequisite checks completed successfully, so we can go ahead and promote the server to a domain controller. Click Install to begin the installation process.
After a few minutes, the Active Directory Domain Services and the DNS roles are configured. Both are listed in Server Manager.
Let’s go ahead and switch over to a Windows 10 machine and make sure that we can connect that machine to the domain. Click on the Start button and go to Settings, then go to Accounts. I’ll click on Access work or school then Connect. I’ll choose the option Join this device to a local Active Directory domain. I’m prompted for the domain name, which is poseylab.com. Click Next.
I’m prompted for the administrative name and password. I’m prompted to choose my account type and account name. Click Next and Restart now.
Once the machine restarts, I’m prompted to log into the domain. That’s how you set up an Active Directory domain controller in Windows Server 2019.
With Office 365 becoming Microsoft 365, administrators are wondering what this evolution changes regarding their data protection needs.
As it stands right now, not much has changed from a backup and recovery standpoint. The tools and best practices used for backing up Office 365 are still valid for Microsoft 365 backup.
So, what are some of those best practices? No. 1 is simply to make sure that you are backing up Microsoft 365. Microsoft only provides infrastructure-level protection for the platform. It is up to you to make sure that your data is protected. It's a similar story with other popular software-as-a-service applications: you must back up your data and not rely on the SaaS providers.
While Microsoft presumably takes steps to prevent data loss related to a catastrophic failure within its data center, the company doesn’t protect you from data loss related to the accidental deletion or overwriting of your data. Therefore, it’s up to you to make sure that you have Microsoft 365 backup.
Periodically check that your backup tools can back up all the required Microsoft 365 data. Early on, a lot of the Office 365 backup products focused solely on Exchange Server, with some also supporting SharePoint. However, there are other data sources that need protection, such as OneDrive and Azure Active Directory.
Choose a Microsoft 365 backup product that will enable you to recover data at a granular level. At a minimum, you need to be able to restore individual files, email messages and SharePoint sites. You shouldn’t have to restore an entire Exchange mailbox just to recover a single message.
Your Microsoft 365 backup product should enable you to restore your data to a location of your choosing. In most cases, you will probably be restoring data back to the Microsoft 365 cloud. Certain circumstances may require you to restore to a different Microsoft 365 subscription, or perhaps even to a server that is running on premises.
Finally, backup and restore operations are often tightly intertwined with an organization’s compliance initiatives. Make sure that your backup software meets the required service-level agreements and that it provides the level of reporting needed to satisfy compliance auditors.
It’s not news that these are unprecedented times. No one has seen anything like the novel coronavirus — dubbed COVID-19 — or the global response to the virus before. Many people worry how this situation will evolve and how it will affect economies, careers and personal bottom lines.
I don’t have a crystal ball, but I have been writing resumes since the last major economic crisis — the banking crisis of 2008 — and the years following. While this situation is substantially different, some lessons learned 12 years ago may be relevant to network engineer careers today.
The known vs. the unknown of the economic fallout
The long-term economic fallout after the crisis passes is unknown. It’s possible it will be bad and last a couple of years. It may be shorter. There’s no way to tell. Either way, here’s some good news: Even in 2008 and 2009, people still got interviews, and they still got jobs. While it was extremely competitive and salaries were lower than in a better economy, jobs for good candidates existed.
Whatever happens, it’s best to be ready, whether the situation becomes extremely difficult or resolves relatively quickly. To quote Louis Pasteur, “Fortune favors the prepared mind,” and that’s true in every aspect of one’s network engineer career.
So, then as now, the best advice for a networking professional concerned about a future downturn in the economy is to be prepared and be the best candidate possible.
4 pieces of network engineer career advice
Preparing is not as hard as it sounds. Here are a few tips to increase network engineers’ chances if the coronavirus downturn lasts more than a few months and they want to hold onto their jobs or search for new ones if they were laid off.
Excel in one’s work. That sounds obvious, but it must be stated. Go above and beyond. Get the job done, even if it’s tough or boring. Build a reputation as the go-to networking professional — the one people go to for the tough jobs. Build a reputation as the professional who always gets the job done. This can increase network engineers’ chances of keeping their jobs even if layoffs happen.
Communicate effectively and cooperatively with end users, technical peers and management. The importance of this can’t be overstated. Being a network engineer who connects into the broader organization and who is a networking professional people know, like and respect can make an enormous difference if there is a reduction in the workforce.
Think business value. This also can’t be overstated. Network engineers should always think about what their work delivers to the business, users and customers. If engineers can get numbers for how their work has improved operations, they should keep those handy because numbers pop when a hiring authority reads a resume — they stand out to the eye and make accomplishments immediately clear.
Keep resumes updated regularly. Throughout one’s network engineer career, engineers should ensure their resumes are current, communicate key accomplishments and are written to express the challenge, action and results — or CAR — idea rather than simple lists of duties and responsibilities, which are relatively similar for many network engineer careers.
Globally, people hope this crisis will pass fairly soon, without the sort of long-term economic damage seen in 2008. Yet, however things develop, if network engineers keep the above points in mind, their careers will be more secure, and chances of finding a new opportunity will be significantly greater.
This was last published in May 2020
One of the biggest changes in Exchange Server 2019 from previous versions of the messaging platform is Microsoft supports — and recommends — deployments on Server Core.
For those who are comfortable with this deployment model, the option to install Exchange 2019 on a server without a GUI is a great advance. You can still manage the system with the Exchange Admin Center from another computer, so you really don’t lose anything when you install Exchange this way. The upside to installing Exchange on a Server Core machine is a smaller attack surface with less resource overhead. For some IT shops, however, the lack of a GUI can make Server Core a challenge when troubleshooting issues.
This tutorial will explain how to install Exchange 2019 on Server Core in a lab environment instead of a production setting. The following instructions will work the same for either setting, but users new to Server Core should practice a few deployments in a lab before trying the deployment for real.
For the sake of brevity, this tutorial does not cover the aspects related to the installation of the Server Core operating system — it is identical to other Windows Server build processes — and the standard Exchange Server sizing exercises and overall deployment planning.
After installing a new Server Core 2019 build, you see the logon screen in Figure 1.
Most of the setup work on the server will come from PowerShell. After logging in, load PowerShell with the following command:
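Server Core drops you at a plain command prompt, so starting a PowerShell session is simply a matter of launching the executable:

```powershell
REM From the Server Core cmd prompt, start a PowerShell session
powershell
```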
Next, this server needs an IP address. To check the current configuration, use the following command:
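One way to view the current settings is the Get-NetIPConfiguration cmdlet:

```powershell
# Show IP address, gateway and DNS settings for every network interface
Get-NetIPConfiguration
```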
This generates the server’s IP address configuration for all its network interfaces.
Your deployment will have different information, so select an interface and use the New-NetIPAddress cmdlet to configure it. Your command should look something like the following:
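For example, with a placeholder interface index and addresses (substitute the values from your own Get-NetIPConfiguration output); the Set-DnsClientServerAddress line is a common follow-up so the server can resolve the domain:

```powershell
# Assign a static IP address to interface index 4 (values are examples)
New-NetIPAddress -InterfaceIndex 4 -IPAddress 192.168.0.25 -PrefixLength 24 -DefaultGateway 192.168.0.1
# Point DNS at the domain controller so the server can join the domain
Set-DnsClientServerAddress -InterfaceIndex 4 -ServerAddresses 192.168.0.10
```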
The next step is to download Exchange Server 2019 and the required prerequisites to get the platform running. Be sure to check Microsoft’s prerequisites for Exchange 2019 mailbox servers on Windows Server 2019 Core from this link because they have a tendency to change over time. The Server Core 2019 deployment needs the following software installed from the Microsoft link:
.NET Framework 4.8 or later
Visual C++ Redistributable Package for Visual Studio 2012
Visual C++ Redistributable Package for Visual Studio 2013
Next, run the following PowerShell command to install the Media Foundation:
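The Media Foundation component installs as a Windows feature:

```powershell
# Install the Media Foundation feature required by Exchange 2019
Install-WindowsFeature Server-Media-Foundation
```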
Lastly, install the Unified Communications Managed API 4.0 from the following link.
To complete the installation process, reboot the server with the following command:
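A restart from PowerShell is a one-liner:

```powershell
# Reboot immediately so the installed prerequisites take effect
Restart-Computer -Force
```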
Installing Exchange Server 2019
To proceed to the Exchange 2019 installation, download the ISO and mount the image:
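A sketch of the mount and unattended setup follows; the ISO path is a placeholder, and the drive letter the image mounts as may differ on your system:

```powershell
# Mount the Exchange 2019 ISO (path is an example; use your downloaded file)
Mount-DiskImage -ImagePath 'C:\Install\ExchangeServer2019-x64.iso'

# Assuming the image mounted as D:, run unattended setup for the Mailbox role
D:\Setup.exe /mode:Install /roles:mb /IAcceptExchangeServerLicenseTerms
```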
The installation should complete with Exchange Server 2019 operating on Windows Server Core.
Managing Exchange Server 2019 on Server Core
Once you complete the installation and reboot the server, you’ll find the same logon screen as displayed in Figure 1.
This can be somewhat disconcerting for an administrator who has spent their whole career working with the standard Windows GUI. There isn’t much you can do to manage your Exchange Server from the command prompt.
Your first management option is to use PowerShell locally on this server. From the command prompt, enter:
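After starting PowerShell, load the Exchange management snap-in:

```powershell
# Load the Exchange Management Shell cmdlets into the local session
Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn
```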
You need to run this command each time you want to use the Exchange Management Shell in a local PowerShell session on the headless Exchange Server. To streamline this process, you can add that cmdlet to your PowerShell profile so that the Exchange Management snap-in loads automatically when you start PowerShell on that server. To find the location of your PowerShell profile, just type $Profile in PowerShell. That file may not exist if you’ve never created it; to create it, open Notepad and save a file at the path stored in $Profile, then add that previous Add-PSSnapin command to it.
The more reasonable management option for your headless Exchange Server is to never log into the server locally. You can run the Exchange Admin Center from a workstation to remotely manage the Exchange 2019 deployment.