Tag Archives: Server

Windows Server 2008 end of life: Is Azure the right path?

As the Windows Server 2008 end of life inches closer, enterprises should consider which retirement plan to pursue before security updates run out.

As of Jan. 14, 2020, Microsoft will end security updates for Windows Server 2008 and 2008 R2 machines that run in the data center. Organizations that continue to use these server operating systems will be vulnerable because hackers will inevitably continue to look for weaknesses in them, but Microsoft will not — except in rare circumstances — provide fixes for those vulnerabilities. Additionally, Microsoft will not update online technical content related to these operating systems or give any free technical support.

Although there are benefits to upgrading to a newer version of Windows Server, there may be some instances in which this is not an option. For example, your organization might need an application that is not compatible with or supported on newer Windows Server versions. Similarly, there are situations in which it is possible to migrate the server to a new operating system, but not quickly enough to complete the process before the impending end-of-support deadline.

Microsoft has a few options for those organizations that need to continue running Windows Server 2008 or 2008 R2. Although the company will no longer give updates for the aging operating system through the usual channels, customers can purchase extended security updates.

You can delay Windows Server 2008 end of life — if you can afford it

Those who wish to continue using Windows Server 2008 or 2008 R2 on premises will need Software Assurance or a subscription license to purchase extended updates. The extended security updates are relatively expensive, at about 75% of the cost of a current Windows Server license annually, which is likely Microsoft’s way of nudging customers toward a newer Windows Server version.

The other option for those organizations that need to continue running Windows Server 2008 or 2008 R2 is to migrate those servers to the Azure cloud. Organizations that decide to switch those workloads to Azure will receive free extended security updates for three years.

Know what a move to Azure entails

Before migrating a Windows Server workload to the cloud, it is important to consider the pros and cons of making the switch to Azure. The most obvious benefit is financial: the move gives you a few years to run this OS without the hassle of having to pay for extended security updates.

Another benefit to the migration to Azure is a reduction in hardware-related costs. Windows Server 2008 was the first Windows Server version to include Hyper-V, but many organizations opted to install Windows Server 2008 onto physical hardware rather than virtualizing it. If your organization runs Windows Server 2008/2008 R2 on a physical server, then this is a perfect opportunity to retire the aging server hardware.

If your Windows Server 2008/2008 R2 workloads are virtualized, then moving those VMs to Azure can free up some capacity on the virtualization hosts for other workloads.

Learn about the financial and technical impact

One disadvantage to operating your servers in Azure is the cost. You will pay a monthly fee to run Windows Server 2008 workloads in the cloud. However, it is worth noting that Microsoft offers a program called the Azure Hybrid Benefit, which gives organizations with Windows Server licenses 40% off the cost of running eligible VMs in the cloud. To get an idea of how much your workloads might cost, you can use the Azure pricing calculator.

Another disadvantage of moving a server workload to Azure is the increased complexity of your network infrastructure. This added complication isn’t limited to the migrating servers. Typically, you will have to create a hybrid Active Directory environment and set up a VPN that allows secure communications between your on-premises network and the Azure cloud.

Factor in these Azure migration considerations

For organizations that decide to migrate their Windows Server 2008 workloads to Azure, there are a number of potential migration issues to consider.

Servers often have multiple dependencies, and you will need to address these as part of the migration planning. For instance, an application may need to connect to a database that is hosted on another server. In this situation, you will have to decide whether to migrate the database to Azure or whether it is acceptable for the application to perform database queries across a WAN connection.

Similarly, you will have to consider the migration’s impact on your internet bandwidth. Some of your bandwidth will be consumed by management traffic, directory synchronizations and various cloud processes. It’s important to make sure your organization has enough bandwidth available to handle this increase in traffic.

Finally, there are differences between managing cloud workloads and ones in your data center. The Azure cloud has its own management interface that you will need to learn. Additionally, you may find your current management tools either cannot manage cloud-based resources or may require a significant amount of reconfiguring. For example, a patch management product might not automatically detect your VM in Azure; you may need to either create a separate patch management infrastructure for the cloud or provide the vendor with a path to your cloud-based resources.


For Sale – Vortexbox 2.4 Intel Atom Server – 64gb SSD, Plex Server, Roon Server, Squeezebox Server

For sale is my old music and movies server (now replaced by an Intel NUC running ROCK for Roon and an Nvidia Shield for Plex).

It started off life as a LIV Concepts (now Innuos) Zen 1TB Vortexbox in 2012.

Spec now is:

Intel Atom 1.66GHz (from memory; I’ve lost the eBay page I had saved with the exact specs)
2GB RAM (ditto)
64GB SSD for the operating system (you’ll need either an additional 3.5″ HDD installed or a NAS mounted for media storage)
Blu-ray disc drive
Vortexbox 2.4 software installed, running Plex server, Squeezebox Server and Roon.
MakeMKV installed for Blu-ray and DVD ripping.

Amazingly for a 2012 machine, it ran my 1,000-album Roon server perfectly.

You may want to do a fresh install of Vortexbox (or whatever else you might want to run), as I’ve been into the Linux configuration and mounted my NAS, etc.

This probably isn’t a plug and play job, but if you like tinkering, it’s yours for £60 plus postage.


How to manage Server Core with PowerShell

After you first install Windows Server 2019 and reboot, you might find something unexpected: a command prompt.

While you’re sure you didn’t select the Server Core option, Microsoft now makes it the default Windows Server OS deployment for its smaller attack surface and lower system requirements. While you might remember DOS commands, those are only going to get you so far. To deploy and manage Server Core, you need to build your familiarity with PowerShell to operate this headless flavor of Windows Server.

To help you on your way, you will want to build your knowledge of PowerShell, and you might start with the PowerShell Integrated Scripting Environment (ISE). PowerShell ISE offers a wealth of features for the novice PowerShell user, from autocompletion of commands to context-colored syntax that steps you through the scripting process. The problem is PowerShell ISE requires a GUI, or the “full” Windows Server. To manage Server Core, you have the command window and PowerShell in its raw form.

Start with the PowerShell basics

To start, type powershell to get into the environment, denoted by the PS before the C: prompt. A few basic DOS commands will work, but PowerShell is a different language. Before you can add features and roles, you need to set your IP address and domain membership. This can be done in PowerShell, but it is laborious and requires a fair amount of typing, as the sketch below shows. Instead, we can take a shortcut and use sconfig to complete the setup. After that, we can use PowerShell for additional administrative work.
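If you do want to handle the initial setup entirely in PowerShell, the built-in networking and domain-join cmdlets will do the job. Below is a minimal sketch; the interface alias, IP addresses and domain name are placeholders to replace with your own values:

# Assign a static IP address to the adapter
New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress 192.168.1.50 -PrefixLength 24 -DefaultGateway 192.168.1.1

# Point the adapter at your DNS server
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses 192.168.1.1

# Join the domain and reboot to finish
Add-Computer -DomainName 'contoso.com' -Restart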

PowerShell uses a verb-noun format, called cmdlets, for its commands, such as Install-WindowsFeature or Get-Help. The verbs have predefined categories that are generally clear on their function. Some examples of PowerShell cmdlets are:

  • Install: Use this PowerShell verb to install software or some resource to a location or initialize an install process. This would typically be done to install a Windows feature such as Dynamic Host Configuration Protocol (DHCP).
  • Set: This verb modifies existing settings in Windows resources, such as adjusting networking or other existing settings. It also works to create the resource if it did not already exist.
  • Add: Use this verb to add a resource or setting to an existing feature or role. For example, this could be used to add a scope onto the newly installed DHCP service.
  • Get: This is a resource retriever for data or contents of a resource. You could use Get to present the resolution of the display and then use Set to change it.

To install DHCP to a Server Core deployment with PowerShell, use the following commands.

Install the service:

Install-WindowsFeature -Name 'dhcp'

Add a scope for DHCP:

Add-DhcpServerV4Scope -Name "Office" -StartingRange 192.168.1.100 -EndRange 192.168.1.200 -SubnetMask 255.255.255.0

Set the lease time:

Set-DhcpServerv4Scope -ScopeId 192.168.1.0 -LeaseDuration 1.00:00:00

Check the DHCP IPv4 scope:

Get-DhcpServerv4Scope
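One step this walkthrough assumes: in an Active Directory domain, a new DHCP server must be authorized before it will hand out leases. A one-line sketch, with the server’s DNS name and IP address as placeholders:

# Authorize the new DHCP server in Active Directory
Add-DhcpServerInDC -DnsName 'core01.contoso.com' -IPAddress 192.168.1.50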

Additional pointers for PowerShell newcomers

Each command has a purpose, which means you have to know the syntax, and that is the hardest part of learning PowerShell. Not knowing what you’re looking for can be very frustrating, but there is help. The Get-Help cmdlet displays the related commands for use with that function or role.

Part of the trouble for new PowerShell users is that it can be overwhelming to memorize all the commands, but there is a shortcut. As you start to type a command, the Tab key autocompletes the PowerShell commands. For example, if you type Get-Help R and press the Tab key, PowerShell will cycle through the matching commands, such as Remove-DHCPServerInDC (see Figure 1). When you find the command you want and hit Enter, PowerShell presents additional information for using that command. Get-Help even supports wildcards, so you could type Get-Help *dhcp* to get results for commands that contain that phrase.

Figure 1. Use the Get-Help command to see the syntax used with a particular PowerShell cmdlet.

The tab function in PowerShell is a savior. While this approach is a little clumsy, it is a valuable asset in a pinch given the sheer number of commands to remember. For example, a base install of Windows 10 includes Windows PowerShell 5.1, which features more than 1,500 cmdlets. As you install additional PowerShell modules, you make more cmdlets available.
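You can see that growth for yourself with Get-Command. A quick sketch; the DhcpServer module shown here is just one example of a module that arrives with a role:

# Count every command visible to the current session
(Get-Command).Count

# List the cmdlets contributed by a single module
Get-Command -Module DhcpServer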

There are many PowerShell books, but do you really need them? There are extensive libraries of PowerShell code that are free to manipulate and use. Even walking through a Microsoft wizard gives the option to create the PowerShell code for the wizard you just ran. As you learn where to find PowerShell code, it becomes less of a process to write a script from scratch but more of a modification of existing code. You don’t have to be an expert; you just need to know how to manipulate the proper fields and areas.

Outside of typos, the biggest stumbling block for most beginners is not reading the screen. PowerShell does a mixed job with its error messages. The type is red when something doesn’t work, and PowerShell will give the line and character where the error occurred.

In the example in Figure 2, PowerShell threw an error due to the extra letter s at the end of the command Get-WindowsFeature. The system didn’t recognize the command, so it tagged the entire command rather than the individual letter, which can be frustrating for beginners.

Figure 2. When working with PowerShell on the command line, you don’t get precise locations of where an error occurred if you have a typo in a cmdlet name.

The key is to review your code closely, then review it again. If the command doesn’t work, you have to fix it to move forward. It helps to stop and take a deep breath, then slowly reread the code. Copying and pasting a script from the web isn’t foolproof and can introduce an error. With some time and patience, and some fundamental PowerShell knowledge of the commands, you can get moving with it a lot quicker than you might have thought.


Consider these Office 365 alternatives to public folders

As more organizations consider a move from Exchange Server, public folders continue to vex many administrators for a variety of reasons.

Microsoft supports public folders in its latest Exchange Server 2019 as well as Exchange Online, but it is pushing companies to adopt some of its newer options, such as Office 365 Groups and Microsoft Teams. An organization pursuing alternatives to public folders will find there is no direct replacement for this Exchange feature. The reason for this lies in the nature of the cloud.

Microsoft set its intentions early in Satya Nadella’s leadership with its “mobile first, cloud first” initiative back in 2014, and it aggressively expanded its cloud suite with new services and features. This fast pace meant that the experience of migrating to a cloud service such as Office 365 depended on timing: moving at one point might mean different features than waiting several months. This was the case for migrating public folders from on-premises Exchange Server to Exchange Online, which evolved over time and coincided with the introduction of Microsoft Teams, Skype for Business and Office 365 Groups.

The following breakdown of how organizations use public folders can help Exchange administrators with their planning when moving to the new cloud model on Office 365.

Organizations that use public folders for email only

Public folders are a great place to store email that multiple people within an organization need to access. For example, an accounting department can use public folders to let department members use Outlook to access the accounting public folders and corresponding email content.

Office 365 offers similar functionality to public folders through its shared mailbox feature in Exchange Online. A shared mailbox stores email in folders, which is accessible by multiple users.

A shared mailbox has a few advantages over a public folder, the primary one being accessibility through the Outlook mobile app or from Outlook on the web. This allows users to connect from their smartphones or a standard browser to review email going to the shared mailbox, whereas public folder access requires opening the Outlook client.
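Shared mailboxes can be created in the Microsoft 365 admin center or with Exchange Online PowerShell. A minimal sketch, assuming an active Exchange Online session; the mailbox name and user are placeholders:

# Create the shared mailbox
New-Mailbox -Shared -Name "Accounting" -DisplayName "Accounting"

# Give a department member full access; AutoMapping makes it appear in Outlook automatically
Add-MailboxPermission -Identity "Accounting" -User jsmith@contoso.com -AccessRights FullAccess -AutoMapping $true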

Organizations that use public folders for email and calendars

For organizations that rely on both email and calendars in their public folders, Microsoft has another cloud alternative that comes with a few extra perks.

Office 365 Groups not only lets users collaborate on email and calendars, but also stores files in a shared OneDrive for Business page, tasks in Planner and notes in OneNote. Office 365 Groups makes email and calendars available on any device, and group owners manage their own permissions and membership, lifting some of the burden of security administration from the IT department.
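Groups can also be provisioned from Exchange Online PowerShell. A short sketch with placeholder names:

# Create a private Office 365 Group
New-UnifiedGroup -DisplayName "Finance Team" -Alias financeteam -AccessType Private

# Add a member to the new group
Add-UnifiedGroupLinks -Identity financeteam -LinkType Members -Links jsmith@contoso.com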

Microsoft provides migration scripts to assist with the move of content from public folders to Office 365 Groups.

Organizations that use public folders for data archiving

Some organizations that prefer to stay with a known quantity and keep the same user experience also have the choice to keep using public folders in Exchange Online.

The reasons for this preference will vary, but the most likely scenario is a company that wants to keep email for archival purposes only. The migration from Exchange on-premises public folders requires administrators to use Microsoft’s migration scripts.

Organizations that use public folders for project communication and as a data-sharing repository

The Exchange public folders feature is excellent for sharing email, contacts and calendar events. For teams working on projects, the platform shines as a way to centralize information that’s relevant to the specific project or department. But it’s not as expansive as other collaboration tools on Office 365.

Take a closer look at some of the other modern collaboration tools available in Office 365 in addition to Microsoft Teams and Office 365 Groups, such as Kaizala. These offerings extend the organization’s messaging abilities to include real-time chat, presence status and video conferencing.


How to Select a Placement Policy for Site-Aware Clusters

One of the more popular failover clustering enhancements in Windows Server 2016 and 2019 is the ability to define the different fault domains in your infrastructure. A fault domain lets you scope a single point of failure in hardware, whether this is a Hyper-V host (a cluster node), its enclosure (chassis), its server rack or an entire datacenter. To configure these fault domains, check out the Altaro blog post on configuring site-aware clusters and fault domains in Windows Server 2016 & 2019. After you have defined the hierarchy between your nodes, chassis, racks and sites, the cluster’s placement policies, failover behavior and health checks will be optimized. This blog will explain the automatic placement policies and advanced settings you can use to maximize the availability of your virtual machines (VMs) with site-aware clusters.

Site-Aware Placement Based on Storage Affinity

From reading the earlier Altaro blog about fault tolerance, you may recall that resiliency is created by distributing identical (mirrored) Storage Spaces Direct (S2D) disks across the different fault domains. Each node, chassis, rack or site may contain a copy of a VM’s virtual hard disks. However, you always want the VM to run in the same site as its disk, for performance reasons, to avoid having the I/O transmitted across distance. If a VM is forced to start in a separate site from its disk, the cluster will automatically live migrate the VM to the same site as its disk after about a minute. With site-awareness, the automatic enforcement of storage affinity between a VM and its disk is given the highest site placement priority.

Configuring Preferred Sites with Site-Aware Clusters

If you have configured multiple sites in your infrastructure, then you should consider which site is your “primary” site and which should be used as a backup. Many organizations will designate their primary site as the location closest to their customers or with the best hardware, and the secondary site as the failover location, which may have limited hardware to support only critical workloads. Some enterprises may deploy identical datacenters and distribute specific workloads to each location to balance their resources. If you are splitting your workloads across different sites, you can assign each clustered workload or VM (cluster group) a preferred site. Let’s say that you want your US-East VM to run in your primary datacenter and your US-West VM to run in your secondary datacenter; you could configure the following settings via PowerShell:
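A minimal sketch, assuming the two sites were already defined with New-ClusterFaultDomain and named EastDC and WestDC; the VM group names match the example above:

# Pin each clustered VM to its preferred site
(Get-ClusterGroup -Name "US-East VM").PreferredSite = "EastDC"
(Get-ClusterGroup -Name "US-West VM").PreferredSite = "WestDC"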

Designating a preferred site for the entire cluster will ensure that, after a failure, the VMs will start in this location. After you have defined your sites by creating fault domains with New-ClusterFaultDomain, you can use the cluster-wide property PreferredSite to set the default location to launch VMs. Below is the PowerShell cmdlet:
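Again assuming a fault domain named EastDC already exists:

# Set the cluster-wide preferred site
(Get-Cluster).PreferredSite = "EastDC"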

Be aware of your capacity if you usually distribute your workloads across two sites and they are forced to run in a single location, as performance will diminish with less hardware. Consider using the VM prioritization feature and disabling automatic VM restarts after a failure, as this will ensure that only the most important VMs will run. You can find more information in this Altaro blog on how to configure start order priority for clustered VMs.

To summarize, placement priority is based on:

  • Storage affinity
  • Preferred site for a cluster group or VM
  • Preferred site for the entire cluster

Site-Aware Placement Based on Failover Affinity

When site-awareness has been configured for a cluster, there are several automatic failover policies that are enforced behind the scenes. First, a clustered VM or group will always failover to a node, chassis or rack within the same site before it moves to a different site. This is because local failover is always faster than cross-site failover since it can bring the VM online faster by accessing the local disk and avoid any network latency between sites. Similarly, site-awareness is also honored by the cluster when a node is drained for maintenance. The VMs will automatically move to a local node, rather than a cross-site node.

Cluster Shared Volumes (CSV) disks are also site-aware. A single CSV disk can store multiple Hyper-V virtual hard disks while allowing their VMs to run simultaneously on different nodes. However, it is important that these VMs are all running on nodes within the same site. This is because the CSV service coordinates disk write access across multiple nodes to a single disk. In the case of Storage Spaces Direct (S2D), the disks are mirrored so there are identical copies running in different locations (or sites). If VMs were writing to mirrored CSV disks in different locations and replicating their data without any coordination, it could lead to disk corruption. Microsoft ensures that this problem never occurs by requiring all VMs that share a CSV disk to run on the local site and write to a single instance of that disk. Furthermore, CSV distributes the VMs across different nodes within the same site, balancing the workloads and the write requests sent to the coordinator node.

Site-Aware Health Checks and Cluster Heartbeats

Advanced cluster administrators may be familiar with cluster heartbeats, which are health checks between cluster nodes. This is the primary way in which cluster nodes validate that their peers are healthy and functioning. The nodes will ping each other once per predefined interval, and if a node does not respond after several attempts it will be considered offline, failed or partitioned from the rest of the cluster. When this happens, the host is not considered an active node in the cluster and it does not provide a vote towards cluster quorum (membership).

If you have configured multiple sites in different physical locations, then you should configure the frequency of these pings (CrossSiteDelay) and the number of health checks that can be missed (CrossSiteThreshold) before a node is considered failed. The greater the distance between sites, the more network latency will exist, so these values should be tweaked to minimize the chances of a false failover during times of high network traffic. By default, the pings are sent every 1 second (1,000 milliseconds), and when 20 are missed, a node is considered unavailable and any workloads it was hosting will be redistributed. You should test your network latency and cross-site resiliency regularly to determine whether you should increase or reduce these default values. Below is an example that changes the testing frequency from every 1 second to every 5 seconds and the number of missed responses from 20 to 30.
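Both values are cluster common properties, so they are set the same way as the preferred site:

# Ping across sites every 5 seconds (the value is in milliseconds)
(Get-Cluster).CrossSiteDelay = 5000

# Declare a cross-site node down after 30 missed health checks
(Get-Cluster).CrossSiteThreshold = 30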

By increasing these values, it will now take longer for a failure to be confirmed and failover to happen resulting in greater downtime. The default time is 1-second x 20 misses = 20 seconds, and this example extends it to 5 seconds x 30 misses = 150 seconds.

Site-Aware Quorum Considerations

Cluster quorum is an algorithm that clusters use to determine whether there are enough active nodes in the cluster to run its core operations. For additional information, check out this series of blogs from Altaro about multi-site cluster quorum configuration. In a multi-site cluster, quorum becomes complicated since there could be a different number of nodes in each site. With site-aware clusters, “dynamic quorum” will be used to automatically rebalance the number of nodes that have votes. This means that as cluster nodes drop out of membership, the number of voting nodes changes. If there are two sites with an equal number of voting nodes, then the nodes assigned to the preferred site will stay online and run the workloads, while the lower-priority site will reduce its votes and not host any VMs.
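A common way to avoid two sites deadlocking over an even vote in the first place is to add a tie-breaking witness outside both sites, such as an Azure cloud witness. A hedged sketch; the storage account name and key are placeholders:

# Configure a cloud witness as the quorum tie-breaker
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-key>"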

Windows Server 2012 R2 introduced a setting known as the LowerQuorumPriorityNodeID, which allowed you to set a node in a site as the least important, but this was deprecated in Windows Server 2016 and should no longer be used. The idea behind this was to easily declare which location was the least important when there were two sites with the same number of voting nodes. The site with the lower priority node would stay offline while the other partition would run the clustered workloads. That caused some confusion since the setting was only applied to a single host, but you may still see this setting referenced in blogs such as Altaro’s https://www.altaro.com/hyper-v/quorum-microsoft-failover-clusters/.

The site-awareness features added to the latest versions of Windows Server will greatly enhance a cluster’s resilience through a combination of user-defined policies and automatic actions. By creating fault domains for clusters, it is easy to provide even greater VM availability by moving workloads between nodes, chassis, racks and sites as efficiently as possible. Failover clustering further reduces the configuration overhead by automatically applying best practices to make failover faster and keep your workloads online longer.

Wrap-Up

Useful information yes? How many of you are using multi-site clusters in your organizations? Are you finding it easy to configure and manage? Having issues? If so, let us know in the comments section below! We’re always looking to see what challenges and successes people in the industry are running into!

Thanks for reading!


Author: Symon Perriman

What are the Azure Stack HCI features?

IT shops that want tighter integration between the Windows Server OS and an HCI platform have a few choices in the market, including Azure Stack HCI.

Microsoft offers two similarly named but distinct products. Microsoft markets Azure Stack as a local extension to the cloud, essentially Azure in a box that runs in the data center. The company positions Azure Stack HCI, announced in March 2019, as a highly available, software-defined platform for local VM workload deployments. Organizations can also use Azure Stack HCI to connect to Azure and use its various services, including backup and site recovery.

Azure Stack HCI is fundamentally composed of four layers: hardware, software, management and cloud services.

Who sells the hardware for Azure Stack HCI?

Azure Stack HCI capitalizes on the benefits associated with other HCI offerings, such as high levels of software-driven integration, and common and consistent management. OEM vendors, including Dell, Fujitsu, HPE and Lenovo, sell the Azure Stack HCI hardware that Microsoft validates. The hardware is typically integrated and modular, combining portions of compute, memory, storage and network capacity into each unit.

What OS does Azure Stack HCI use?

The Azure Stack HCI platform runs on the Windows Server 2019 Datacenter edition. Using this server OS provides the familiar Windows environment, but also brings core components of the HCI software stack, including Hyper-V for virtualization, Storage Spaces Direct for storage, and enhanced software-defined networking features in Microsoft’s latest server OS.

How is Azure Stack HCI managed?

A critical part of an HCI platform is the ability to provision and monitor every element, which means management is a crucial component of Azure Stack HCI. Organizations have several management options such as Windows Admin Center, System Center, PowerShell and numerous third-party tools. Management in Azure Stack HCI emphasizes the use of automation and orchestration, allowing greater speed and autonomy in provisioning and reporting.
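Because the platform builds on Windows Server 2019 failover clustering and Storage Spaces Direct, much of that automation uses the same PowerShell cmdlets as any S2D deployment. A rough sketch, not an Azure Stack HCI-specific API; the cluster and node names are placeholders:

# Form the cluster from validated nodes, then enable Storage Spaces Direct
New-Cluster -Name "HCI-Cluster" -Node "Node1","Node2","Node3","Node4" -NoStorage
Enable-ClusterStorageSpacesDirect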

What role does the Azure cloud play?

Organizations that purchase Azure Stack HCI have the option to connect to a wide range of Azure services. Some of these services include Azure Site Recovery for high availability and disaster recovery, Azure Monitor for comprehensive monitoring and analytics, Azure Backup for data protection, and Azure File Sync for server synchronization with the cloud.

What’s the primary use for Azure Stack HCI?

When exploring whether to purchase Azure Stack HCI, it’s important to understand its intended purpose. Unlike Azure Stack, Azure Stack HCI is not explicitly designed for use with the Azure cloud. Rather, Azure Stack HCI is an HCI platform tailored for on-premises virtualization for organizations that want to maximize the use of the hardware.

The decision to buy Azure Stack HCI should be based primarily on the same considerations involved with any other HCI system. For example, HCI might be the route to go when replacing aging hardware, optimizing the consolidation of virtualized workloads, and building out efficient edge or remote data center deployments that take up minimal space.

IT decision-makers should view the ability to utilize Azure cloud services as a useful bonus, not the primary motivation to use Azure Stack HCI.


For Sale – Eight 8GB server grade DDR3 RAM sticks – suits Mac Pros

Specification matching or equivalent to the following:
– Crucial Micron 8GB PC3-10600R (Server) Memory CT102472BB1339 MT36JSF1G72PZ-1G4M1HF

These are working pulls from a MacPro 4,1 (2009) but will work in a MacPro 5,1 (2010) as well as PC-based server systems. They were purchased second-hand, given a full MEMTEST run to check them, and then ran “live” for a good twelve months or so before being made surplus by further upgrades on the Macs.

If you have a compatible dual-processor cMP, then all slots can be filled with these for a full 64GB memory system.

Please note that many systems are fussy about RAM, and hence you need to match all of the sticks. The sticks here are registered ECC grade, so any non-registered (but ECC) modules will need to be taken out.

Photos to follow, but I’m not sure if they’ll differ from the stock images in terms of details.

Looking for £15 each with RM 2nd signed-for delivery, discounts on multiple sticks due to bundled delivery. Collection is always possible.


How to work with the WSUS PowerShell module

In many enterprises, you use Windows Server Update Services to centralize and distribute Windows patches to end-user devices and servers.

WSUS is a free service that installs on Windows Server and syncs Windows updates locally. Clients connect to and download patches from the server. Historically, you manage WSUS with a GUI, but with PowerShell and the PoshWSUS community module, you can automate your work with WSUS for more efficiency. This article will cover how to use some of the common cmdlets in the WSUS PowerShell module.
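Before any of the commands below will work, the module must be installed and loaded. Assuming the module is available from the PowerShell Gallery under the name PoshWSUS, setup is two lines:

# Install the community module from the PowerShell Gallery, then load it
Install-Module -Name PoshWSUS
Import-Module -Name PoshWSUS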

Connecting to a WSUS server

The first task to do with PoshWSUS is to connect to an existing WSUS server so you can run cmdlets against it. This is done with the Connect-PSWSUSServer cmdlet, which also offers a -SecureConnection switch for SSL connections, normally on port 8531. The example below connects over the default HTTP port of 8530.

Connect-PSWSUSServer -WsusServer wsus -Port 8530
Name Version PortNumber ServerProtocolVersion
---- ------- ---------- ---------------------
wsus 10.0.14393.2969 8530 1.20

View the WSUS clients

There are various cmdlets used to view WSUS client information. The most apparent is Get-PSWSUSClient, which shows client information such as hostname, group membership, hardware model and operating system type. The example below gets information on a specific machine named Test-1.

Get-PSWSUSClient Test-1 | Select-Object *
ComputerGroup : {Windows 10, All Computers}
UpdateServer : Microsoft.UpdateServices.Internal.BaseApi.UpdateServer
Id : 94a2fc62-ea2e-45b4-97d5-10f5a04d3010
FullDomainName : Test-1
IPAddress : 172.16.48.153
Make : HP
Model : HP EliteDesk 800 G2 SFF
BiosInfo : Microsoft.UpdateServices.Administration.BiosInfo
OSInfo : Microsoft.UpdateServices.Administration.OSInfo
OSArchitecture : AMD64
ClientVersion : 10.0.18362.267
OSFamily : Windows
OSDescription : Windows 10 Enterprise
ComputerRole : Workstation
LastSyncTime : 9/9/2019 12:06:59 PM
LastSyncResult : Succeeded
LastReportedStatusTime : 9/9/2019 12:18:50 PM
LastReportedInventoryTime : 1/1/0001 12:00:00 AM
RequestedTargetGroupName : Windows 10
RequestedTargetGroupNames : {Windows 10}
ComputerTargetGroupIds : {59277231-1773-401f-bf44-2fe09ac02b30, a0a08746-4dbe-4a37-9adf-9e7652c0b421}
ParentServerId : 00000000-0000-0000-0000-000000000000
SyncsFromDownstreamServer : False

WSUS usually organizes machines into groups, such as all Windows 10 machines, to apply update policies. The command below measures the number of machines in a particular group called Windows 10 with the cmdlet Get-PSWSUSClientsinGroup:

Get-PSWSUSClientsInGroup -Name 'Windows 10' | Measure-Object | Select-Object -Property Count
Count
-----
86

How to manage Windows updates

With the WSUS PowerShell module, you can view, approve and decline updates on the WSUS server, a very valuable and powerful feature. The command below finds all the Windows 10 feature updates with the title “Feature update to Windows 10 (business editions).” The output shows various updates on my server for version 1903 in different languages:

Get-PSWSUSUpdate -Update "Feature update to Windows 10 (business editions)"  | Select Title
Title
-----
Feature update to Windows 10 (business editions), version 1903, en-gb x86
Feature update to Windows 10 (business editions), version 1903, en-us arm64
Feature update to Windows 10 (business editions), version 1903, en-gb arm64
Feature update to Windows 10 (business editions), version 1903, en-us x86
Feature update to Windows 10 (business editions), version 1903, en-gb x64
Feature update to Windows 10 (business editions), version 1903, en-us x64

Another great feature of this cmdlet is it shows updates that arrived after a particular date. The following command gives the top-five updates that were downloaded in the last day:

Get-PSWSUSUpdate -FromArrivalDate (Get-Date).AddDays(-1) | Select-Object -First 5
Title KnowledgebaseArticles UpdateType CreationDate UpdateID
----- --------------------- ---------- ------------ --------
Security Update for Microso... {4475607} Software 9/10/2019 10:00:00 AM 4fa99b46-765c-4224-a037-7ab...
Security Update for Microso... {4475574} Software 9/10/2019 10:00:00 AM 1e489891-3372-43d8-b262-8c8...
Security Update for Microso... {4475599} Software 9/10/2019 10:00:00 AM 76187d58-e8a6-441f-9275-702...
Security Update for Microso... {4461631} Software 9/10/2019 10:00:00 AM 86bdbd3b-7461-4214-a2ba-244...
Security Update for Microso... {4475574} Software 9/10/2019 10:00:00 AM a56d629d-8f09-498f-91e9-572...

The approval and rejection of updates is an important part of managing Windows updates in the enterprise. The WSUS PowerShell module makes this easy to do. A few years ago, Microsoft began releasing preview updates for testing purposes. I typically want to decline these updates to avoid their installation on production machines. The following command finds every update with the string “Preview of” in the title that has not already been declined and declines it with the Deny-PSWSUSUpdate cmdlet.

Get-PSWSUSUpdate -Update "Preview of" | Where-Object {$_.IsDeclined -eq 'False' } | Deny-PSWSUSUpdate
Patch IsDeclined
----- ----------
2019-08 Preview of Quality Rollup for .NET Framework 3.5.1 on Windows Server 2008 R2 for Itanium-based Systems (KB4512193) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5.1, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows 7 (KB4512193) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5.1, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows 7 and Server 2008 R2 for x64 (KB4512193) True
2019-07 Preview of Quality Rollup for .NET Framework 2.0 on Windows Server 2008 SP2 for Itanium-based Systems (KB4512196) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows Server 2012 for x64 (KB4512194) True
2019-07 Preview of Quality Rollup for .NET Framework 2.0, 3.0, 4.5.2, 4.6 on Windows Server 2008 SP2 (KB4512196) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows 8.1 and Server 2012 R2 for x64 (KB4512195) True
2019-07 Preview of Quality Rollup for .NET Framework 2.0, 3.0, 4.5.2, 4.6 on Windows Server 2008 SP2 for x64 (KB4512196) True
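Approvals work much the same way. Treat the parameters below as a hedged sketch of the module’s approval cmdlet rather than verified syntax; the update title and group name are placeholders:

# Approve a cumulative update for the Windows 10 target group
Get-PSWSUSUpdate -Update "2019-09 Cumulative Update for Windows 10" | Approve-PSWSUSUpdate -Action Install -Group "Windows 10"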

Syncing WSUS with Microsoft’s servers

In the WSUS GUI, users can set up a daily synchronization between their WSUS server and the Microsoft update servers to download new updates. I like to synchronize more than once a day, especially on Patch Tuesday when you may get several updates in one day. For this reason, you can create a scheduled task that runs a WSUS sync hourly for a few hours per day; a sketch for registering such a task follows the command below. The script can be as simple as this:

Start-PSWSUSSync
Synchronization has been started on wsus.
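To put that on an hourly schedule, the built-in ScheduledTasks cmdlets are enough. This is a rough sketch; the server name, start time and task name are placeholders:

# Run the sync hourly for 10 hours starting at 8 a.m.
$action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -Command "Import-Module PoshWSUS; Connect-PSWSUSServer -WsusServer wsus -Port 8530; Start-PSWSUSSync"'
$trigger = New-ScheduledTaskTrigger -Once -At 8am -RepetitionInterval (New-TimeSpan -Hours 1) -RepetitionDuration (New-TimeSpan -Hours 10)
Register-ScheduledTask -TaskName 'Hourly WSUS Sync' -Action $action -Trigger $trigger -User 'SYSTEM'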

Performing cleanups

A WSUS server can be fickle. I have had to rebuild WSUS servers several times, and it is a pretty lengthy process because you have to download all the updates to the new server. You can often avoid a rebuild by running regular cleanups on the WSUS server. The Start-PSWSUSCleanup cmdlet performs many of these important actions, such as declining superseded updates, removing obsolete updates and removing obsolete computers:

Start-PSWSUSCleanup -DeclineSupersededUpdates -DeclineExpiredUpdates -CleanupObsoleteUpdates -CompressUpdates -CleanupObsoleteComputers -CleanupUnneededContentFiles
Beginning cleanup, this may take some time...
SupersededUpdatesDeclined : 223
ExpiredUpdatesDeclined : 0
ObsoleteUpdatesDeleted : 0
UpdatesCompressed : 4
ObsoleteComputersDeleted : 6
DiskSpaceFreed : 57848478722


Eclipse launches Che 7 IDE for Kubernetes development

SAN FRANCISCO — The Eclipse Foundation has introduced Eclipse Che 7, a new developer workspace server and IDE to help developers build cloud-native, enterprise applications on Kubernetes.

The foundation debuted the new technology at the Oracle Code One conference here. Eclipse Che is essentially a cloud-based IDE built on technology Red Hat acquired from Codenvy, and Red Hat developers are still heavily involved with the Eclipse project. With a focus on Kubernetes, Eclipse Che 7 abstracts away some of the development complexities associated with Kubernetes and helps to close the gap between the development and operations environments, said Mike Milinkovich, executive director of the Eclipse Foundation.

“We think this is important because it’s the first cloud-based IDE that is natively Kubernetes,” he said. “It provides all of the pieces that a cloud-native developer needs to be able to build and deploy a Kubernetes application.”

Eclipse Che 7 helps developers who may not be so familiar with Kubernetes by providing not just the IDE, but also its plug-ins and their dependencies. In addition, Che 7 automatically adds all the build and debugging tools developers need for their applications.

“It helps reduce the learning curve that’s related to Kubernetes that a lot of developers struggle with, in terms of setting up Kubernetes and getting their first applications up and running on Kubernetes,” Milinkovich said.

The technology can be deployed on a public Kubernetes cluster or an on-premises data center, and it provides centrally hosted private developer workspaces. In addition, the Eclipse Che IDE is based on an extended version of Eclipse Theia that provides an in-browser experience like Microsoft’s Visual Studio Code, Milinkovich said.

Eclipse Che and Eclipse Theia are part of cloud-native offerings from vendors such as Google, IBM and Broadcom, and Che lies at the core of Red Hat CodeReady Workspaces, a development environment for Red Hat OpenShift.

Moreover, Broadcom’s CA Brightside product uses Eclipse Che to bring a modern, open approach to the mainframe platform. Che also integrates with IBM Codewind to provide a low barrier to entry for developing in a production container environment.

“It had to happen, and it happened sooner than later: The first IDE delivered inside Kubernetes,” said Holger Mueller, an analyst at Constellation Research.

There are benefits of having developers build software with the same mechanics and platforms on the IDE side as their target production environment, he explained, including similar experience and faster code deployments.

“And Kubernetes is hard to manage, so it will be helpful to have an out-of-the-box offering from an IDE vendor,” Mueller said. “But nothing beats the advantage of being able to standardize and quickly launch uniform and consistent developer environments. This gives development team scale to build their next-gen applications and helps their enterprise accelerate.”

Eclipse joins a group that includes major vendors that want to limit the complexity of Kubernetes. IBM and VMware recently introduced technology to reduce Kubernetes complexity for developers and operations staff.

For instance, IBM’s Kabanero open source project to simplify development and deployment of apps on Kubernetes uses Che as its hosted IDE.

The future of developer tools will be cloud-based, Milinkovich said. “Because of the complexity of the application scenarios today, developers are spending a lot of their time and energy building out development environments when they could just move developer workspaces into containers,” he said. “It’s far easier to update the entire development team to new runtime requirements. And you can push out new tools across the entire development team.”

The IDE is the last big piece of technology that developers use on a daily basis that has not moved into the cloud, so moving the IDE into the cloud is the next logical step, Milinkovich said.


Are you ready for the Exchange 2010 end of life?

Exchange Server 2010 end of life is approaching — do you have your migration plan plotted out yet?

Exchange Server 2010 reached general availability on November 9, 2009, and has been the cornerstone of the collaboration strategy for many organizations over the last decade. Since that time, Microsoft also produced three releases of Exchange Server, with Exchange Server 2019 being the most recent. Exchange Server 2010 continues to serve the needs of many organizations, but they must look to migrate from this platform when support ends on January 14, 2020.

What exactly does end of support mean for existing Exchange Server 2010 deployments? Your Exchange 2010 servers will continue to operate with full functionality after this date; however, Microsoft will no longer provide technical support for the product. In addition, bug fixes, security patches and time zone updates will no longer be provided after the end-of-support date. If you haven’t already started your migration from Exchange Server 2010, now is the time to start by seeing what your options are.

Exchange Online

For many, Exchange Online — part of Microsoft Office 365 — is the natural replacement for Exchange Server 2010. This is my preferred option.

A hybrid migration to Exchange Online is the quickest way to migrate to the latest version of Exchange that is managed by Microsoft. Smaller organizations may not need the complexity of this hybrid setup, so they may want to investigate simpler migration options. Not sure which migration option is best for you? Microsoft has some great guidance to help you decide on the best migration path.

The cloud isn’t for everyone, but in many instances the reasons organizations cite for not considering the cloud are based on perception or outdated information, not reality. I often hear the word “compliance” as a reason for not considering the cloud. If this is your situation, you should first study the compliance offerings on the Microsoft Trust Center. Microsoft Office 365 fulfills many industry standards and regulations, both regionally and globally.

If you decide to remain on premises with your email, you also have options. But the choice might not be as obvious as you think.

Staying with Exchange on premises

Exchange Server 2019 might seem like the clear choice for organizations that want to remain on premises, but there are a few reasons why this may not be the case.

Migrating from Exchange 2010 to Exchange 2019

First, there is no direct upgrade path from Exchange Server 2010 to Exchange Server 2019. For most organizations, this migration path involves a complex multi-hop migration. You first migrate all mailboxes and resources to Exchange Server 2016, then you decommission all remnants of Exchange Server 2010. You then perform another migration from Exchange Server 2016 to Exchange Server 2019 to finalize the process. This procedure involves significant resources, time and planning.
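Each hop is carried out with mailbox move requests from the Exchange Management Shell. A minimal sketch of one move in the first hop; the mailbox identity and target database are placeholders:

# Move a mailbox to a database on the Exchange 2016 server
New-MoveRequest -Identity "jsmith@contoso.com" -TargetDatabase "EX2016-DB01"

# Check progress of pending moves
Get-MoveRequest | Get-MoveRequestStatistics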

Another consideration with Exchange Server 2019 is licensing. Exchange Server 2019 is only available to volume license customers via the Volume Licensing Service Center. This could be problematic for smaller organizations without this type of agreement.

Organizations that use the unified messaging feature in Exchange Server 2010 have an additional caveat to consider: Microsoft removed the feature from Exchange Server 2019 and recommends Skype for Business Cloud Voicemail instead.

For those looking to remain on premises, Exchange Server 2019 has some great new features, but it is important to weigh the benefits against the drawbacks, and the effort involved with the migration process.

Microsoft only supports Exchange Server 2019 on Windows Server 2019. For the first time, the company supports Server Core deployments, and Server Core is the recommended deployment option. In addition, Microsoft made it easier to control external access to the Exchange admin center and the Exchange Management Shell with client access rules.

Microsoft made several key improvements in Exchange Server 2019. It rebuilt the search infrastructure to improve indexing of larger files and search performance. The company says the new search architecture will decrease database failover times. The MetaCacheDatabase feature increases the overall performance of the database engine and allows it to work with the latest storage hardware, including larger disks and SSDs.

There are some new features on the client side as well. Email address internationalization allows support for email addresses that contain non-English characters. Some clever calendar improvements include “do not forward” meetings, which work without the need for an information rights management deployment, and the option to cancel or decline meetings that occur while you’re out of office.

What happens if the benefits of upgrading to Exchange Server 2019 don’t outweigh the drawbacks of the migration process? Exchange Server 2016 extended support runs through October 2025, making it a great option for those looking to migrate from Exchange Server 2010 and stay in support. The simpler migration process and support for unified messaging makes Exchange Server 2016 an option worth considering.
