Microsoft plugs 2 zero-days on August Patch Tuesday

Microsoft shut down two zero-days, including one that had been publicly disclosed, as part of its security update releases for August Patch Tuesday.

In total, Microsoft addressed 120 unique vulnerabilities, with 17 of those rated critical. Technologies and products with security updates this month include both the HTML- and Chromium-based Microsoft Edge browsers, Microsoft’s ChakraCore JavaScript engine, Internet Explorer, Microsoft Scripting Engine, SQL Server, Microsoft JET Database Engine, .NET Framework, ASP.NET Core, Microsoft Office and Microsoft Office Services and Web Apps, Microsoft Windows Codecs Library and Microsoft Dynamics. This month marks the sixth in a row in which Microsoft has addressed more than 100 unique vulnerabilities in its monthly security updates package.

Microsoft terminates two zero-days

One zero-day (CVE-2020-1464) fixed by the August Patch Tuesday releases is a Windows spoofing vulnerability rated important that would allow an attacker to sidestep the OS security features and load an improperly signed file. This bug affects all supported versions of Windows, as well as Windows 7 and Windows Server 2008/2008 R2 for customers who paid for Extended Security Update (ESU) licenses for continued support of these systems, which reached end of life in January. A bug that allows a malicious actor to bypass this signature check could open the door to putting malicious files on a Windows system.

“Typically, files get signed by a trusted vendor, and that signature validation is critically important to a lot of security mechanisms,” said Chris Goettl, director of product management and security at Ivanti, a security and IT management vendor based in South Jordan, Utah. “The fact that an attacker can bypass that means that they can introduce improperly validated malicious files to the operating system, and technologies that should be able to validate based on signature might be able to be tricked because of this.”
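Goettl’s point can be illustrated with a toy model of signature validation. HMAC here stands in for real code signing, which uses public-key certificates (Windows Authenticode is far more involved); the key and file contents are invented for the sketch. The trust decision shown — only load a file whose signature verifies over its exact contents — is what a bypass like this defeats.

```python
import hashlib
import hmac

# Hypothetical signing key, standing in for a vendor's code-signing key.
SIGNING_KEY = b"vendor-private-key"

def sign(contents: bytes) -> bytes:
    """Produce a signature over the exact file contents."""
    return hmac.new(SIGNING_KEY, contents, hashlib.sha256).digest()

def load_if_properly_signed(contents: bytes, signature: bytes) -> bool:
    """A loader that accepts a file only when its signature verifies."""
    return hmac.compare_digest(sign(contents), signature)

good = b"legitimate executable"
tampered = b"malicious executable"
sig = sign(good)

print(load_if_properly_signed(good, sig))      # prints True
print(load_if_properly_signed(tampered, sig))  # prints False: improperly signed
```

In the toy model the tampered file is rejected; CVE-2020-1464 is the real-world equivalent of tricking the loader into taking the improperly signed file anyway.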

Chris Goettl, director of product management and security, Ivanti

Microsoft’s notes on this CVE lacked the usual details about potential attack scenarios, which seems to indicate an attacker would face additional hurdles to take advantage of the flaw. This might be why this Windows zero-day got a relatively low CVSS base score of 5.3.

“The attacker would need to execute an asset that is improperly signed, so it’s not something they can just send to somebody. Microsoft doesn’t really get into details about how some attacker might be able to take advantage of that,” Goettl said.

The second zero-day (CVE-2020-1380) is a remote code execution vulnerability in the Microsoft Scripting Engine used in Internet Explorer 11, rated critical on Windows desktop systems and moderate on Windows Server 2008 R2, Windows Server 2012 and 2012 R2. The Microsoft Scripting Engine is also used in Microsoft Office, which widens the attack vector for this vulnerability.

“The vulnerability could be exploited a couple of different ways: by setting up a specially crafted website via advertisements that may be compromised, or it could be loaded up using an application or an Office document that uses the IE rendering,” Goettl said.

Windows Server hit by domain controller bug

Microsoft provided a lengthy description for handling CVE-2020-1472, a critical Netlogon elevation-of-privilege flaw affecting supported Windows Server OSes, including Windows Server 2008 and 2008 R2 for ESU customers. On an unpatched domain controller — the Active Directory component tasked with managing security authentication requests — an attacker could acquire domain administrator access without needing system credentials. 

Microsoft said it is using a “phased two-part rollout” to patch the bug with the first part of the deployment executed in the August Patch Tuesday security update.

“The updates will enable the [domain controllers] to protect Windows devices by default, log events for non-compliant device discovery, and have the option to enable protection for all domain-joined devices with explicit exceptions,” according to the CVE instructions.

Microsoft plans the second phase for February Patch Tuesday in 2021, which it calls “the transition into the enforcement phase.”

“The [domain controllers] will be placed in enforcement mode, which requires all Windows and non-Windows devices to use secure Remote Procedure Call (RPC) with Netlogon secure channel or to explicitly allow the account by adding an exception for any non-compliant device,” the company wrote.
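During the monitoring phase, non-compliant devices surface through Windows event logs. As a sketch only: the code below triages such events assuming they have been exported as (event ID, machine account) pairs — that tuple format is invented, though the event IDs used are the ones Microsoft’s CVE-2020-1472 guidance documents for Netlogon secure channel monitoring.

```python
# Netlogon event IDs from Microsoft's CVE-2020-1472 deployment guidance:
# 5827/5828 are logged for denied insecure connections, 5829-5831 for
# vulnerable connections that were still allowed.
DENIED = {5827, 5828}
ALLOWED_VULNERABLE = {5829, 5830, 5831}

def non_compliant_devices(events):
    """Return machine accounts still making insecure Netlogon connections."""
    suspect = DENIED | ALLOWED_VULNERABLE
    return sorted({machine for event_id, machine in events if event_id in suspect})

# Illustrative exported events: one ordinary logon (4624) among Netlogon hits.
events = [(5829, "LEGACY-NAS$"), (4624, "WS01$"), (5827, "OLD-PRINTER$")]
print(non_compliant_devices(events))  # prints ['LEGACY-NAS$', 'OLD-PRINTER$']
```

Devices flagged this way are the ones that will break when enforcement mode arrives, so they are the place to start remediation or to add explicit exceptions.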

Goettl said administrators should begin testing the patch in a lab before the hard enforcement occurs, which will require all domain controllers — even those in read-only mode — to be updated. Microsoft provided further guidance in its support documentation.

Other notable corrections from August Patch Tuesday

  • Microsoft Outlook has two CVEs this month. CVE-2020-1483 is a memory-corruption vulnerability rated critical that could let an attacker run arbitrary code in the context of the current user using several different attack vectors, including the preview pane. CVE-2020-1493 is an information-disclosure vulnerability rated important that could let an attacker view a restricted file from the preview pane by sending it as a file attachment.
  • CVE-2020-1455 is a Microsoft SQL Server Management Studio denial-of-service vulnerability rated important that, if exploited, could let an attacker disrupt the use of the application.
  • The .NET Framework has two CVEs. CVE-2020-1046 is a critical remote code execution vulnerability that an attacker could use to control the unpatched system using a specially crafted file. CVE-2020-1476 is an important elevation-of-privilege vulnerability in ASP.NET or .NET web applications running on IIS that could let an attacker access restricted files.
  • Microsoft resolved an elevation-of-privilege vulnerability (CVE-2020-1337) rated important for supported Windows systems on both the client and server side. The patch resolved a lingering printer spooler issue that had been patched multiple times — most recently in May — but security researchers found a way to bypass the patch and gave a recent Black Hat USA presentation on the flaw, which has its origins in the Stuxnet worm from 2010. Despite public knowledge of the bug, Microsoft’s CVE did not report this as publicly disclosed.


Healthcare CISO offers alternatives to ‘snake oil’ companies

Indiana University Health CISO Mitchell Parker believes part of the reason hospitals and medical facilities are hacked so frequently is that they’re falling for “snake oil companies” that fail to improve security postures.

In a Black Hat USA 2020 session, titled “Stopping Snake Oil with Smaller Healthcare Providers: Addressing Security with Actionable Plans and Maximum Value,” Parker discussed his experiences working with several different healthcare organizations, which were spending their limited security budgets on the wrong things. The session warned of snake oil vendors, or, as Parker said, companies “that have only provided risk assessments, that cost a lot of these smaller providers tens of thousands of dollars, and don’t deliver anything of value. And worse, taking money out of the risk management plans, A.K.A. negative value.”

A healthcare CISO can’t afford to waste money on those kinds of companies and get bad advice on how to better protect their organization, Parker said.

“Healthcare has been the most affected industry by ransomware, data breaches and hacks. I take a look on the news every week — there’s yet another provider that’s been hacked. In a lot of cases, providers have had to shut down, and patients were not even able to get hold of their medical records,” Parker said. “And so what we’ve noticed in our work is the guidance provided to many providers has not addressed what organizations actually need to do to protect their patients and themselves.”

The healthcare industry has long been inundated with cyber attacks, from ransomware infections to data breaches. Despite some ransomware groups publicly pledging not to attack hospitals or medical facilities during the COVID-19 pandemic, many security experts say healthcare remains one of the most widely attacked industries.

Indiana University Health CISO Mitchell Parker discusses risk assessments at his Black Hat 2020 session.

“We know for a fact the healthcare organization is the most highly targeted, in general,” Maya Levine, security engineer at Check Point Software Technologies, said. “Healthcare organizations are always being targeted for a really terrible reason: It’s an incredible disruption to business and the livelihood of people.”

During a live Q&A following the presentation, SearchSecurity asked Parker what he considered to be warning signs for snake oil vendors or risk management firms. “If someone says they can solve all of your problems instantly, or if they offer solutions without analyzing your systems, then watch out,” he said.

Security advice for healthcare CISOs

As part of the presentation’s advice to organizations, Parker said healthcare organizations, especially smaller hospitals and medical facilities, should employ cloud-based backups and conduct risk assessments internally — with some outside help — to protect against ransomware.

“We always advocate doing [risk assessments] internally with a little bit of outside help instead of just having one done by an outside firm. The reason why is you need to know your business well and where your holes are,” Parker said.

He recommended healthcare organizations adopt password managers to better protect accounts and credentials. “You need to get very good with password managers to make sure your team knows how to use them” because “no other industry I’ve worked in has had an emphasis on having numerous incompatible logins,” Parker said.

This can work in tandem with two-factor authentication. “If there’s one factor to stop the majority of hack attempts, it’s good two-factor authentication like Duo, Authy or YubiKeys — all of which work very, very well,” he said. This is especially true for healthcare, Parker argued, because numerous phishing attacks in healthcare use compromised accounts to conduct their attacks.
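Most authenticator apps, including Authy and the Duo mobile app, generate their codes with the TOTP algorithm from RFC 6238. A minimal sketch using only the Python standard library (the secret below is the RFC’s published test key, not a real credential):

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32, for_time, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    then dynamic truncation to a short numeric code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(for_time // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59 seconds.
RFC_SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(RFC_SECRET, for_time=59, digits=8))  # prints 94287082
```

The code changes every 30 seconds, so even a phished password alone is not enough to log in — which is exactly the property Parker is recommending for healthcare accounts.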

He also discussed the dangers of remote desktop protocol (RDP) and VPNs.

“One of the most important lessons we learned over the past year is that remote access is a huge target and there’s actually been numerous successful attacks on both remote desktop and unpatched VPN software,” Parker said. “I’ll be very clear about something else. Straight remote desktop is not effective. You will get owned, BlueKeep or not. You will get owned.”

Cloud backups are critical, Parker said, mainly to ensure quicker recovery time from cyber attacks. In addition to simply having cloud backups, he recommended organizations protect their backups, test their backups and store backups separately under different credentials.
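The advice on protecting and testing backups boils down to a simple principle: keep an integrity manifest separate from the backup and verify against it. A toy sketch of that idea, with in-memory bytes standing in for files (a real deployment would hash file contents on disk and store the manifest under separate credentials, as Parker recommends):

```python
import hashlib

def manifest(files):
    """Record a SHA-256 digest per backed-up file (name -> contents bytes)."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def verify(files, saved_manifest):
    """Return names whose current contents no longer match the manifest."""
    return sorted(name for name, digest in saved_manifest.items()
                  if hashlib.sha256(files.get(name, b"")).hexdigest() != digest)

backup = {"notes.txt": b"patch tuesday", "db.bak": b"patient records"}
m = manifest(backup)

backup["db.bak"] = b"ENCRYPTED"  # simulate ransomware tampering with the backup
print(verify(backup, m))  # prints ['db.bak']
```

Running a check like this on a schedule is one way to make "test your backups" concrete rather than aspirational.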

In one of the session’s final points, he discussed the value of endpoint detection and response (EDR) over antivirus. Not only does it integrate better with SIEMs and log management technology, but antivirus just doesn’t work for the healthcare tech environment.

“When it comes to EDR, I’ll be very clear about something else. Antivirus in healthcare is DOA. Why? Because there’s too much interference with applications and most of your software packages out there in healthcare that run come with an exceptions list, which means that it takes me about 30 seconds on Google to find the exceptions list to know where I can place custom malware. So you want to get rid of the ability for the exceptions to run. That’s why we like EDR better.”
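Parker’s point about exceptions lists can be shown with a toy scanner; the exclusion path and the “signature” below are invented for the sketch. Anything placed under an excluded directory is simply never inspected:

```python
# Hypothetical vendor-published antivirus exclusion and a toy signature set.
EXCEPTIONS = ["C:/HealthcareApp/cache/"]
SIGNATURES = {b"EVIL"}

def scan(files):
    """Return paths flagged as malicious, honoring the exceptions list."""
    flagged = []
    for path, contents in files.items():
        if any(path.startswith(prefix) for prefix in EXCEPTIONS):
            continue  # excluded: malware here is invisible to the scanner
        if any(sig in contents for sig in SIGNATURES):
            flagged.append(path)
    return flagged

files = {
    "C:/Users/a/payload.exe": b"EVIL",
    "C:/HealthcareApp/cache/payload.exe": b"EVIL",  # hides in the exclusion
}
print(scan(files))  # prints ['C:/Users/a/payload.exe']
```

The second payload is never flagged, which is why Parker argues for EDR, which watches behavior rather than trusting a static exclusion list.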

Parker also offered recommendations to healthcare organizations for physical, EHR maintenance, ZFS management and more.

Lastly, Parker encouraged healthcare organizations to vet their medical device vendors and offered IU Health’s own baselines and requirements for information security and vendor management for other companies to use. “You need to perform a risk analysis of your vendors,” he said. “Everyone from small providers to some of the largest healthcare systems in the world use ours.”


Why You Should Use OneDrive for Business

As part of your organization’s journey to the cloud and digital transformation, document storage is key. OneDrive for Business (OD4B) replaces the traditional local “Documents” folder and opens up access to work documents from anywhere, on any device, along with many other capabilities.

This article will look at what OneDrive for Business is, how it compares with personal OneDrive, how to use OD4B, protecting your files and sharing them with others securely and some tips for Microsoft 365 administrators managing OD4B for a business. If you’d like an overview on how to use OneDrive for Business I’ve made the video below which accompanies this article:

[embedded content]

The Basics of OneDrive for Business

OD4B is SharePoint-based cloud storage, licensed as part of Office / Microsoft 365, that gives each user 1 TB of storage for their documents. You can access these documents from any Windows computer (the client is built into Windows 10 version 1709 or later, and is also available for earlier versions) or Mac, as well as through apps for Android and iOS. You can also access OD4B in any web browser; one easy way to get there is to log in at www.office.com and click on the OneDrive icon.

OD4B in Office.com

Alternatively, you can right-click on the folder in Windows Explorer on your desktop and select View online.

Right-click on OD4B in Windows Explorer

Either way, you end up in the web interface where you can create new Office documents, upload files or folders, sync the content between your machine and the cloud storage (see below) as well as create automation flows through Power Automate.

OD4B web interface

Note that if you click on an Office file in the web interface, it’ll open in the web-based version of Word, giving you the option of working on any device where you have access to a browser.

For most people, 1 TB of storage is sufficient, but many modern devices don’t come with that amount of internal storage, so you may need to choose what to sync to the local device. There are two manual approaches: right-click on a folder or file and select Always keep on this device, which will do exactly that (and take up space on your local PC), or Free up space, which will delete the local copy but keep the file in the cloud. You can tell the states apart by their icons: a filled green tick (always on this device) or a white cloud (space freed up). The automatic way is to simply double-click a file you need to work on; it will be downloaded and marked Available locally (green tick on white background). This capability is called Files On-Demand.

In Windows, there’s also a handy “pop up” menu to see the status of OD4B, see which files have been recently synced, and also lets you pause syncing temporarily.

Pop-up menu from OD4B client

If you’re working in Word, Excel or PowerPoint, on both Windows and Mac, on a file stored in OD4B (or OneDrive personal / SharePoint Online), it’ll AutoSave your changes without you having to save manually. OD4B will also become the default save location in Word, Excel, etc.

And the “secret” is that OD4B is just a personal document library in SharePoint Online, managed by the OD4B service.

Choosing syncing options for folders

OneDrive versus OneDrive for Business

If you sign up for a free Microsoft account, you get the personal flavor of OneDrive, which provides 5 GB of storage. You can augment this with a Microsoft 365 Personal (1 person) or Home (up to 6 users) subscription, providing up to 1 TB of storage per user, as well as Office for your PC or Mac.

From an end-user point of view the services are very similar but the business version adds identity federation, administrative control, Data Loss Prevention (DLP), and eDiscovery.

Advanced Features

OD4B provides quite a few advanced features that the casual user might not know about. For instance, when you’re attaching a document to an email, you’ll have the option to attach a link to the document in your OD4B instead of a copy of it. If you’re emailing the document to someone internally in your business or someone externally that you collaborate with, this is a better option as you’ll both still be working on the one file (potentially at the same time, see below) rather than having multiple copies attached to different emails and ending up having to manually reconcile the edits at the end.

Known Folder Move is another feature that you can enable as an administrator. This will redirect the Desktop, Documents, Pictures, Screenshots and Camera Roll folders from a user’s local device to OD4B. This has two benefits: first, if a user loses their device or it breaks, their files will still be there when they log in on a new device; second, they can use their local Documents, Pictures, etc. folders as they always have.

There’s also versioning built into OD4B, which keeps track of each version as it’s saved; you can access this either in the web interface or by right-clicking on a file in Windows Explorer.

OD4B document versions

The Recycle bin in the web UI for OD4B has saved many an IT Pro’s career when the CEO has deleted (“by mistake” – but they swear they never hit delete) an important file. Simply click on the Recycle bin and restore files that were deleted up to 93 days ago (up to 30 days for OneDrive personal). A related feature is OneDrive Restore that lets you recover an entire (or parts of) OD4B, perhaps after all the files have been encrypted by a ransomware attack. It also shows a histogram of versions for each file, making it easy to spot the version you want to restore.
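The retention windows mentioned above make for a simple restorability check. A sketch, assuming the 93-day business and 30-day personal figures from this article:

```python
from datetime import date, timedelta

# Recycle bin retention windows as described in the article.
RETENTION_DAYS = {"business": 93, "personal": 30}

def restorable(deleted_on, today, flavor="business"):
    """Is a deleted file still within the recycle bin retention window?"""
    return today - deleted_on <= timedelta(days=RETENTION_DAYS[flavor])

today = date(2020, 8, 31)
print(restorable(date(2020, 6, 1), today))               # prints True (91 days ago)
print(restorable(date(2020, 6, 1), today, "personal"))   # prints False
```

A file the CEO deleted 91 days ago is still recoverable from OD4B, but would be long gone from a personal OneDrive recycle bin.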

Using AI, OD4B (and SharePoint) will automatically extract text from photos that you store so that you can use it when searching for files, it’ll also automatically provide a transcript for any audio or video file you store. File insights let you see who has viewed and edited a shared file (see below) and get statistics.

If you’re using the app on your smartphone you can scan the physical world (a whiteboard, a document, business card, or photo) with the camera and it’ll use AI to transcribe the capture.

Scanning in the Android app

Recently, Microsoft added a new feature called Add to OneDrive that lets you add a shortcut in OD4B to folders that others have shared with you or that are shared with you in Teams or SharePoint. Speaking of Teams: sharing files there will now use the same sharing links functionality that OD4B uses (see below). Even more useful will be the forthcoming ability to move a folder and keep the sharing permissions you have configured for it, and for some files (CAD drawings, anyone?) the increase of the maximum file size from 15 GB to 100 GB is welcome. And, like all the other cool kids, OD4B (and OneDrive personal) on the web will add a dark theme option.

Collaboration and OneDrive for Business

One of the powerful features of OD4B is the ability to share documents (and folders) with internal and external users. As you might expect, administrators have full control over sharing options (see below) but assuming it’s not turned off or restricted you can right-click on a file or folder and click the blue cloud icon Share option, or click the Share option in the web interface. This lets you share a link to the file or folder with internal and external users, grant access to specific people, make it read-only or allow editing and block the ability to download the document (they have to edit the online, shared copy).

Sharing a file

It’s a good idea to turn on external sharing notifications via email.

Once a document is shared you can also use Co-authoring to work on the document simultaneously, both in the web-based versions of Word and Excel as well as the desktop versions of the Office apps. You can see which parts of a document another user is working on.

Administration

If you’re the administrator for your Office 365 deployment you can access the SharePoint admin center (from the main Microsoft 365 Admin center) and control sharing for both OneDrive and SharePoint. There is also a link to the OneDrive admin center where you have control over other aspects of OD4B as well as the same sharing settings.

Sharing Settings in OD4B Admin Center

The main settings to consider here are who your users can share content with. The most permissive setting allows them to share links to documents with anyone, no authentication required (not recommended). The next level up allows your users to invite external users to the organization, but those users have to sign in (using the same email address that the sharing link was sent to), which creates an external user in your Azure Active Directory and thus gives you some control, including the ability to apply Conditional Access to their access. If you only allow sharing with existing external users, you must have another process in place for inviting external users. And the most restrictive option is to only allow sharing with internal users, blocking external sharing entirely. Don’t be fooled by these sliders, however: if you set this too restrictively and users still need to share documents externally, they will do so using personal email, other cloud storage solutions and so on. They just won’t be using OD4B sharing links, which at least give you visibility in audit logs and reports, along with some control.
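One way to reason about these four levels is as a strict ordering from most to least permissive, where a sharing request is only honored if it is no more permissive than the tenant setting. A sketch (the level labels are invented for illustration, not Microsoft’s actual setting names):

```python
# Four external-sharing levels, most permissive first.
LEVELS = ["anyone", "new_and_existing_external", "existing_external", "internal_only"]
RANK = {level: i for i, level in enumerate(LEVELS)}

def allowed(tenant_setting, requested):
    """A request succeeds only if it is at least as restrictive as the tenant setting."""
    return RANK[requested] >= RANK[tenant_setting]

print(allowed("existing_external", "internal_only"))  # prints True
print(allowed("existing_external", "anyone"))         # prints False: too permissive
```

Thinking of the sliders as this ordering makes it easier to predict which user actions a given tenant setting will silently block.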

Under the advanced settings for links you can configure link expiry in days, prohibiting links that last “forever”, and you can limit links to be view-only. The advanced settings for sharing let you blacklist or whitelist particular domains for sharing, prevent further sharing (an external user sharing with another external user) and let owners see who is viewing their files.

Under Sync you can limit syncing to domain-joined computers and block specific file types. Storage lets you limit the storage quota and set the number of days that OD4B content is kept after a user account is deleted. Device access lets you limit access based on IP address as well as set some restrictions for the mobile apps, whereas the Compliance blade has links to DLP, Retention, eDiscovery, Alerts and Auditing, all of which are generic Office 365 features. The next blade, Notifications, controls email notifications for sharing, and the last blade, Data migration, is a link to an article with tools for migrating to OD4B from on-premises storage.

If you’re considering OD4B, there are handy deployment and administration guides for administrators, for both enterprises and small businesses. If, on the other hand, your business is definite about keeping “stuff” on-premises, you can use OneDrive with SharePoint Server, including SharePoint Server 2019.

Note that a recent announcement means that the OD4B admin center functionality will move into the SharePoint Online admin center, but the above functionality will stay intact, just not in a separate portal.

Conclusion

There’s no doubt that cloud storage is a cornerstone of successful digital transformation and if you’re already using Office 365, OneDrive for Business is definitely the best option.

Go to Original Article
Author: Paul Schnackenburg

Informatica expands enterprise data catalog capabilities

Informatica expanded its Intelligent Data Platform with new capabilities, thanks in part to the acquisition of data governance vendor Compact Solutions.

Compact Solutions, based in Oakbrook Terrace, Ill., has a portfolio of metadata management capabilities that Informatica will now use to enhance the Informatica Enterprise Data Catalog.

Informatica revealed the acquisition July 2, two days after the enterprise cloud data management vendor launched the summer update of its Intelligent Data Platform, which includes new data privacy analytics, data governance and enterprise data catalog features.

Informatica, which is privately held, did not disclose the price of the acquisition.

The Compact Solutions acquisition will give Informatica the ability to manage more types of metadata, said Mark Beyer, a Gartner analyst. Having more metadata types could serve as an on-ramp for potential new customers, as well as an integration pathway for coexisting with products that compete with Informatica’s, Beyer added.

“One of the primary focal points for the expanding metadata management market is the capability to assure continuous acquisition of metadata from external assets,” Beyer said.

Informatica Enterprise Data Catalog can use different scanners to identify metadata types.

Beyer noted that Gartner clients have expressed that data integration should be automated much more based on metadata analytics. Similarly, Gartner has found that organizations are seeking to combine multiple data integration platforms into a more consolidated approach, he said.

“Gartner clients generally indicate they believe they will need a low-cost solution for more traditional data integration needs, and [a] more powerful platform for the more complex integration demands,” Beyer said.

Informatica’s growing Enterprise Data Catalog

Compact Solutions was an Informatica partner before the acquisition, so some of its tools are already deployed with joint customers, noted Jitesh Ghai, senior vice president and general manager of data management at Informatica.

Informatica plans to release more updates of Compact Solutions’ advanced metadata scanners over the next one to three months, Ghai said.

The Compact Solutions purchase complements Informatica’s existing data governance and data cataloging portfolio. 

“Compact Solutions’ engineering team has deep expertise in extracting metadata from some of the most complex systems including enterprise data warehouses, mainframes, third-party ETL tools and more,” Ghai said.

Data Asset Analytics brings data value to enterprise data catalog

As part of Informatica’s summer update, released on June 30, Informatica introduced a Data Asset Analytics (DAA) capability that helps measure data use in an organization.

Informatica is adding DAA to the Informatica Enterprise Data Catalog. An enterprise data catalog has a number of different use cases, including helping organizations gain a better understanding of their data assets.

The DAA feature provides insight to organizations on how they are using their data. Determining data value involves taking an inventory of all data assets, then measuring how much collaboration each asset attracts and how widely it is used, Ghai said.

“Data Asset Analytics really is about enabling data executives to articulate the contribution and value of data to the business,” Ghai said.
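Informatica has not published DAA's exact formula, but the kind of calculation Ghai describes (inventory, collaboration and breadth of use) can be sketched in a few lines; the weights, field names and normalization below are purely illustrative:

```python
# Toy data-asset value score combining breadth of use and collaboration.
# Weights, field names and normalization are illustrative, not Informatica's.
def asset_value_score(asset, usage_weight=0.6, collab_weight=0.4):
    """Score a data asset (0-100) on how widely it is used and discussed."""
    usage = min(asset["distinct_consumers"] / 100, 1.0)   # normalize to 0..1
    collab = min(asset["annotations"] / 50, 1.0)          # comments, ratings, tags
    return round(100 * (usage_weight * usage + collab_weight * collab), 1)

catalog = [
    {"name": "sales_orders", "distinct_consumers": 80, "annotations": 40},
    {"name": "legacy_audit", "distinct_consumers": 5, "annotations": 2},
]
scores = {a["name"]: asset_value_score(a) for a in catalog}
```

A real implementation would also fold in lineage, data quality scores and query volume drawn from the catalog itself.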

Data privacy gets a dashboard

Another element that is part of the Informatica summer update is a new privacy analytics dashboard feature in the Data Privacy Management (DPM) component of Informatica’s platform.

DPM provides governance and controls over data to help comply with enterprise and regulatory policies. The new privacy analytics dashboard shows a risk profile of an organization’s sensitive information, including statistics on how much private information the organization is holding.

With privacy regulations such as GDPR and CCPA, organizations need to proactively ensure their privacy checks are in place, Ghai said.

“We are able to scan, discover, classify and categorize data and surface all of that up into a dashboard to help privacy teams prioritize what they will work on next, to preserve compliance or work toward demonstration of compliance,” Ghai said.
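Informatica's scanners are far more sophisticated, but the scan, classify and summarize flow Ghai describes can be illustrated with simple regular expressions (the patterns and categories below are illustrative, not Informatica's):

```python
import re
from collections import Counter

# Minimal sketch of a scan -> classify -> summarize flow for a privacy
# dashboard. Patterns are illustrative; real classifiers are far richer.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(records):
    """Count how many records contain each category of sensitive data."""
    stats = Counter()
    for rec in records:
        for category, pattern in PATTERNS.items():
            if pattern.search(rec):
                stats[category] += 1
    return stats

stats = classify(["alice@example.com", "123-45-6789", "no pii here"])
```

The resulting counts are exactly the kind of aggregate statistics a privacy dashboard surfaces so teams can prioritize remediation.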


Moogsoft AIOps platform taps WWT in partner initiative

Moogsoft Inc., an AIOps platform provider, is partnering with World Wide Technology Inc. as part of its plan to work with channel partners that can help transform customers’ network operations centers.

World Wide Technology (WWT), a technology solution provider based in St. Louis, will use Moogsoft’s technology within its AIOps practice. WWT joins other Moogsoft partners such as Windward Consulting Group, a Herndon, Va., company that recently rolled out managed services around the Moogsoft platform.

Terry Ramos, Moogsoft’s senior vice president of alliances and channel, joined the company in February 2020. Since then, the company’s channel program has aimed to recruit partners able to help customers through an AIOps transformation.

“We’re focusing on a small number of partners who understand the transformation of taking hundreds of thousands of events and narrowing that down to a set number of situations that a customer really needs to focus on and get resolved,” Ramos said.

IT service providers have recently become more active in AIOps and intelligent operations. Companies building practices in those areas assess customers’ environments, suggest strategies for building on what they have already and help them integrate new tools.

AIOps use cases chart

The Moogsoft AIOps platform integrates with monitoring tools to ingest event data, provides noise reduction and ties together similar alerts into what the company terms “situations.” Moogsoft’s machine learning technology identifies the probable root cause of a situation, which network operations center (NOC) or security operations center personnel can then resolve.
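Moogsoft's correlation relies on proprietary machine learning, but the basic idea of tying similar alerts into a "situation" can be sketched with naive grouping by service within a time window (purely illustrative):

```python
# Naive sketch of alert noise reduction: group alerts that share a service
# and arrive within a time window into one "situation". Moogsoft's actual
# correlation uses machine learning; this only illustrates the idea.
def correlate(alerts, window=300):
    """alerts: list of (timestamp_seconds, service, message), time-sorted."""
    situations = []
    for ts, service, msg in alerts:
        for sit in situations:
            if sit["service"] == service and ts - sit["last_seen"] <= window:
                sit["alerts"].append(msg)
                sit["last_seen"] = ts
                break
        else:
            situations.append({"service": service, "alerts": [msg], "last_seen": ts})
    return situations

sits = correlate([
    (0, "db", "high latency"),
    (60, "db", "replication lag"),
    (1000, "db", "disk full"),   # outside the 300 s window: new situation
])
```

Here three raw alerts collapse into two situations, which is the noise-reduction effect NOC personnel care about.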

Regarding integrations, Moogsoft will operate alongside Cisco’s AppDynamics in WWT’s AIOps practice. “We have a partnership with AppDynamics, as well,” Ramos noted. Moogsoft ingests monitoring and observability data from AppDynamics, according to Moogsoft.

Moogsoft, meanwhile, is promoting the concept of the virtual NOC, which lets ITOps and DevOps groups collaborate outside of a physical facility. Moogsoft Enterprise 8.0, based on the Moogsoft AIOps Platform, includes a Situation Room that lets personnel collaborate remotely.

Wipro launches channel partner program

Wipro Ltd. will collaborate with channel partners as it looks to accelerate the adoption of its products and platforms, which span areas from AI to virtual desktops.

The company’s newly launched Global Channel Partner Program aims to facilitate relationships with IT services providers, product companies, consulting firms and resellers. Mandar Vanarse, general manager of the intellectual property unit at Wipro, said the company offers 60 industry-specific platforms and products as well as “industry-agnostic” technology offerings.

The company’s portfolio also includes the Wipro Holmes AI and automation platform, Wipro’s VirtuaDesk desktop-as-a-service offering and the Open Banking API platform.

“We are open to our partners selling … one or several of the products from our portfolio,” Vanarse said. He added channel partners can sell Wipro’s products “as is” or as a joint offering with their own products.

Wipro channel partner firms will target the enterprise market segment, which is also Wipro’s primary segment. Vanarse said Wipro will avoid channel conflict with a “well-defined approach, which will be based on market, segment and accounts, which will help align our efforts to expand coverage and deployment.”

Wipro will also offer deal registration through the program. “Channel partners will register their leads with Wipro,” Vanarse said. “Wipro, in turn, will validate the leads and qualify the partner to pursue the lead.”

According to Wipro, the company’s channel partner staff will receive training at the Wipro Product Academy and have access to “special pricing models, sales enablement support, benefit calculators, and other sales and marketing material.”

Other news

  • ServiceNow updated its partner initiative with a partner marketplace and an app monetization program. The company unveiled the ServiceNow Partner Industry Solutions marketplace, which features partner offerings that address joint customers’ industry-specific workflow and digital transformation needs. The initial group of partners offering the industry solutions includes Accenture, Atos, Deloitte, DXC Technology, Ernst & Young and KPMG. ServiceNow also launched the Built on Now program, which provides a framework for app monetization. The company said the framework lets partners build, test, certify, distribute and sell digital workflows on the Now Platform. A year ago, ServiceNow said it would heavily invest in its partners as it pursues its goal of becoming a $10 billion company.
  • UiPath, a robotic process automation software company, said it is offering new partner training, certification and marketing programs through its UiPath Services Network. The additions include an expanded set of materials such as product training and solution guides; new turnkey digital marketing programs; and new technology integrations with vendors such as Oracle, Salesforce, ServiceNow and Workday. The company earlier this year launched UiPath Academy for Partners, a partner-specific training portal.
  • Omdia, a market research firm based in London, said the COVID-19 pandemic will boost SaaS market revenue in 2020 by 4 to 5 percentage points compared with earlier estimates. However, IT infrastructure service revenue will fall by 2 to 3 percentage points this year compared with previous forecasts. The company said the rise in remote working and e-learning is bolstering demand for SaaS, while business closures have reduced demand for IaaS and other infrastructure services.
  • MarketsandMarkets, a market researcher based in Pune, India, forecasted the global managed network services market size to grow from $52.7 billion in 2020 to $71.6 billion by 2025. The research firm said the main drivers behind the market expansion include organizations needing to lower capital and operating expenditures, growing interest in digital transformation and new demands for connectivity. MarketsandMarkets predicted that managed WAN will be the largest managed network service segment during the forecast period.
  • Mitel, a business communications firm based in Dallas, said its MiCloud Flex private cloud is now available on Google Cloud as a wholesale offering. Availability is in the U.S., United Kingdom and France. Mitel said MiCloud Flex on Google Cloud offers its channel partners the potential to create new recurring revenue streams.
  • Agosto, a Pythian company and cloud services and development firm, rolled out a Managed G Suite Administration Services practice. The practice offers administrative and engineering support around G Suite onboarding and offboarding processes; user moves/changes/adds/deletes; ticket escalation and incident management; license management; change management; continuous training; and an annual G Suite review with a remediation plan.
  • Peak-Ryzex, a digital supply chain and mobile workforce solutions provider based in Columbia, Md., has entered a partnership with ShipTrack, a cloud-based logistics management platform.
  • Niagara Networks, based in San Jose, Calif., has expanded its channel program in the Americas, having previously established its Niagara Networks Majestics Partner Program in other regions. The company said that program is now fully operational in North America, noting several dozen channel companies joined the program prior to its formal introduction.
  • Exabeam, a SIEM vendor in Foster City, Calif., launched a formal practice for managed security service providers (MSSPs) and managed detection and response (MDR) providers within its partner program. The addition will provide structure and support for MSSP and MDR provider business models, the company said. Separately, Exabeam disclosed a “significant investment” in its Asia-Pacific and Japan region to accommodate increasing demand for its cybersecurity offerings.
  • SADA, a business and technology consultancy based in Los Angeles, officially launched the National Response Portal, which provides data and analytics in support of COVID-19 recovery. SADA built the portal, collaborating with HCA Healthcare, which originated the idea, and Google Cloud.
  • Infoblox, a company that offers cloud-managed network services based in Santa Clara, Calif., said it now provides a dedicated team of business development specialists for channel partners. Other channel partner program investments include new sales incentives and the expansion of its Professional Services Program to EMEA and Asia-Pacific.
  • Pulseway, a remote monitoring and management vendor, rolled out a new software package for MSPs. The package is tailored for MSPs supporting remote working environments, the vendor said. The package includes its IT management platform, as well as several built-in features, including unlimited remote control concurrent sessions, remote user chat and file transfer, and automation workflows. The pricing starts from $1.04 per license, a Pulseway spokesperson said.
  • Veristor Systems, a business technology solutions provider based in Atlanta, has been named a strategic member of Respond Software’s partner program. Veristor will offer its customers Respond Analyst, a software automation offering for security operations.
  • High Wire Networks, a cybersecurity service firm that sells to MSPs, added cloud detection and response (CDR) capabilities to its Overwatch Managed Security Platform as a Service offering. The company said the CDR technology aims to safeguard SaaS apps and public cloud infrastructure with automated attack detection, manual and automated threat hunting, prebuilt compliance reports, and manual and automated response.
  • Cloud distributor Pax8 is hosting a webinar series in place of its Wingman2020 event, originally planned as an in-person event in Denver next month. The Wingman Webinar Series will cover a range of partner-related topics, Pax8 said.

Market Share is a news roundup published every Friday.

Additional reporting by Spencer Smith.


Essential components and tools of server monitoring

Though server capacity management is an essential part of data center operations, it can be a challenge to figure out which components to monitor and what tools are available. How you address server monitoring can change depending on what type of infrastructure you run within your data center, as virtualized architecture requirements differ from on-premises processing needs.

With the capacity management tools available today, you can monitor and optimize servers in real time. Monitoring tools keep you updated on resource usage and automatically allocate resources between appliances to ensure continuous system uptime.

For a holistic view of your infrastructure, capacity management software should monitor these server components to some degree. Tracking these components can help you troubleshoot issues and predict any potential changes in processing requirements.

CPU. Because CPUs handle basic logic and I/O operations, as well as route commands for other components in the server, they’re always in use. High CPU usage can indicate an issue with the CPU, but more likely it’s a sign that the issue is with a connected component. Above 70% utilization, applications on the server can become sluggish or stop responding.

Memory. High memory usage can result from multiple concurrent applications, but it can also signal a faulty process that normally consumes far fewer resources. The memory hardware component itself rarely fails, but you should investigate performance when usage rates rise.

Storage area network. SAN component issues can occur at several points, including connection cabling, host bus adapters, switches and the storage servers themselves. A single SAN server can host data for multiple applications and can span multiple physical sites, so a failure in any component can have significant business impact.

Server disk capacity. With the right amount of capacity, storage disks alleviate space constraints and reduce data storage bottlenecks. Problems can arise when more users access the same application that uses a particular storage location, or when a resource-intensive process runs on a server not designed for the application. If you can’t increase disk capacity, monitor it and investigate when usage rates rise so you can optimize future usage.

Storage I/O rates. You should also monitor storage I/O rates. Bottlenecks and high I/O rates can indicate a variety of issues, including CPU problems, disk capacity limitations, process bugs and hardware failure.

Physical temperatures of servers. Another vital component to monitor is server temperatures. Data centers are cooled to prevent any hardware component problems, but temperatures can increase for a variety of reasons: HVAC failure, internal server hardware failure (CPU, RAM or motherboard), external hardware failure (switches and cabling) or a software failure (firmware bug or application process issues).

OS, firmware and server applications. The entire server software stack (BIOS, OS, hypervisors, drivers and applications) must work together to ensure optimal usage. Failed updates can lead to issues for the server or any hosted applications, a poor user experience for stakeholders, or downtime.
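Putting the components above together, a minimal threshold-based check might look like the sketch below; the 70% CPU figure comes from the guidance above, while the other limits and the metrics source are illustrative stand-ins for a real collection agent:

```python
# Minimal sketch of threshold-based server monitoring. The 70% CPU figure
# comes from the guidance above; other thresholds and the metrics dict are
# illustrative stand-ins for a real collection agent.
THRESHOLDS = {"cpu_pct": 70, "memory_pct": 85, "disk_pct": 90, "temp_c": 80}

def check(metrics):
    """Return (component, value, threshold) alerts for any breached limits."""
    return [(name, metrics[name], limit)
            for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0) > limit]

alerts = check({"cpu_pct": 93, "memory_pct": 60, "disk_pct": 91, "temp_c": 45})
```

A production agent would poll continuously, smooth out transient spikes and feed breaches into an alerting pipeline rather than returning a list.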

Streamline reporting with software tools

Most server monitoring tools track and notify you of any issues with the servers in your technology stack. They include default and custom component monitoring, automated and manual optimization features, and standard and custom alerting options.

The software sector for server monitoring covers all types of architectures as well as required depth and breadth of data collection. Here is a shortlist of server capacity monitoring software for your data center.

SolarWinds Server & Application Monitor
SolarWinds’ software provides monitoring, optimization and diagnostic tools in a central hub. You can quickly identify which server resources are at capacity in real time, use historical reporting to track trends and forecast resource purchasing. Additional functions let you diagnose and fix virtual and physical storage capacity bottlenecks that affect application health and performance.

HelpSystems Vityl Capacity Management
Vityl Capacity Management is a comprehensive capacity management offering that makes it easy for organizations to proactively manage performance and do capacity planning in hybrid IT setups. It provides real-time monitoring data and historical trend reporting, which helps you understand the health and performance of your network over time.

BMC Software TrueSight Capacity Optimization
The TrueSight Capacity Optimization product helps admins plan, manage and optimize on-premises and cloud server resources through real-time and predictive features. It provides insights into multiple network types (physical, virtual or cloud) and helps you manage and forecast server usage.

VMware Capacity Planner
As a planning tool, VMware’s Capacity Planner can gather and analyze data about your servers and better forecast future usage. The forecasting and prediction functionality provides insights on capacity usage trends, as well as virtualization benchmarks based on industry performance standards.

Splunk App for Infrastructure
The Splunk App for Infrastructure (SAI) is an all-in-one tool that uses streamlined workflows and advanced alerting to monitor all network components. With SAI, you can create custom visualizations and alerts for better real-time monitoring and reporting through metric grouping and filtering based on your data center and reporting needs.
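The trend forecasting these tools offer can be illustrated with a simple linear fit over historical utilization samples; this is a toy sketch, and real capacity tools use far richer models:

```python
# Toy capacity forecast: fit a straight line to monthly usage samples and
# project when usage crosses 100%. Real capacity tools use richer models.
def forecast_full(usage_pct):
    """usage_pct: one sample per month. Months until 100% at the current trend."""
    n = len(usage_pct)
    mean_x = (n - 1) / 2
    mean_y = sum(usage_pct) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(usage_pct))
    den = sum((x - mean_x) ** 2 for x in range(n))
    slope = num / den
    if slope <= 0:
        return None  # usage flat or shrinking: no projected fill date
    return (100 - usage_pct[-1]) / slope

months = forecast_full([40, 45, 50, 55, 60])  # growing 5 points per month
```

Even this crude projection shows why historical trend data matters: it turns raw utilization numbers into a purchasing timeline.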


Data center energy usage combated by AI efficiency

Data centers have become an important part of our data-driven world. They act as a repository for servers, storage systems, routers and all manner of IT equipment and can stretch as large as an entire building, especially in an age of AI that requires advanced computing.

Establishing how much power these data centers utilize and the environmental impact they have can be difficult, but according to a recent paper in Science Magazine, the entire data center industry in 2018 utilized an estimated 205 TWh. This roughly translates to 1% of global electricity consumption.

Enterprises that utilize large data centers can use AI, advancements in storage capacity and more efficient servers to mitigate the power required for the necessary expansion of data centers.

The rise of the data center

Collecting and storing data is fundamental to business operation, and while having your own infrastructure can be costly and challenging, having unlimited access to this information is crucial to advancements.

Because of their massive size, the data centers of tech giants like Google and Amazon provoke the most coverage; they often require the same amount of energy as small towns. But there is more behind these numbers, according to Eric Masanet, associate professor of Mechanical Engineering and Chemical and Biological Engineering at Northwestern University and coauthor of the aforementioned article.

The last detailed estimates of global data center energy use appeared in 2011, Masanet said.

Since that time, Masanet said, there have been many claims that the world’s data centers were requiring more and more energy. This has given policymakers and the public the impression that data centers’ energy use and related carbon emissions have become a problem.

Counter to this, Masanet and his colleagues’ studies on the evolution of storage, server and network technology found that efficiency gains have significantly mitigated the growth in energy usage in this area. From 2010 to 2018, compute instances went up by 550%, while energy usage increased just 6% in the same time frame. While data center energy usage is on the rise, it has been curbed dramatically through the development of different strategies.
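A quick back-of-the-envelope calculation shows just how sharp a drop in energy per compute instance those two figures imply:

```python
# Back-of-the-envelope: if compute instances grew 550% (6.5x) while total
# energy grew only 6% (1.06x), energy per instance fell sharply.
instances_growth = 6.5   # 550% increase means 6.5x the 2010 level
energy_growth = 1.06     # 6% increase

energy_per_instance = energy_growth / instances_growth      # ~0.16 of 2010 level
reduction_pct = round((1 - energy_per_instance) * 100, 1)   # roughly 84% drop
```

In other words, each compute instance in 2018 consumed about a sixth of the energy its 2010 counterpart did.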

Getting a step ahead of the data center footprint

The moderation in energy growth is tied to advances in technology. Servers have become more efficient, and the partitioning of servers through virtualization has curbed the energy required for the rapid growth of compute instances.

A similar trend is noticeable in data storage. While demand has significantly increased, the combination of storage-drive efficiencies and densities has limited the total increase in global storage energy usage to just threefold. To further curb the rising energy costs and environmental impact that accompany the growing appetite for data, companies are integrating AI when designing their data centers.

Data center efficiency gains have stalled
Data center efficiency has increased greatly but may be leveling off.

“You certainly could leverage AI to analyze utility consumption data and optimize cost,” said Scott Laliberte, a managing director with Protiviti and leader of the firm’s Emerging Technologies practice.

“The key for that would be having the right data available and developing and training the model to optimize the cost.”  

By having AI collect data on their data centers and optimize energy usage, these companies can mitigate power costs, especially for cooling, one of the most costly and challenging processes within data centers.

“The strategy changed a little bit — like trying to build data centers below ground or trying to be near water resources,” said Juan José López Murphy, Technical Director and Data Science Practice Lead at Globant, a digitally native services company.

But cooling these data centers has been such a large part of their energy usage that companies have had to be creative. Companies like AWS and GCP are trying new locations like the middle of the desert or underground and trying to develop cooling systems that are based on water and not just air, Murphy said.

Google utilizes an algorithm that manages cooling at some of their data centers that can learn from data gathered and limit energy consumption by adjusting cooling configurations.
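Google's system uses learned models, but the underlying feedback loop (measure temperature, adjust cooling output) can be sketched as a simple proportional controller; every constant here is illustrative:

```python
# Toy sketch of data-driven cooling adjustment: a proportional controller
# that nudges chiller output toward a target inlet temperature. Google's
# production system uses learned models; the constants are illustrative.
def adjust_cooling(current_temp_c, target_temp_c=24.0, gain=0.1,
                   current_output=0.5):
    """Return a new chiller output fraction (0..1) from the temperature error."""
    error = current_temp_c - target_temp_c
    return min(1.0, max(0.0, current_output + gain * error))

output = adjust_cooling(27.0)  # 3 degrees hot, so raise cooling output
```

The value of learning from gathered data, as the article describes, is that the model can anticipate load and weather instead of merely reacting to the current error as this sketch does.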

Energy trends

For the time being, both the demand for data centers and their efficiency have grown. So far, the advancement of servers and storage drives, as well as the implementation of AI in the design process, has almost matched the growing energy demand. This may not continue, however.

“Historical efficiency gains may not be able to outpace rapidly rising demand for data center services in the not-too-distant future,” Masanet said. “Clearly greater attention to data center energy use is warranted.”

The increased efficiencies have done well to stem the tide of demand, but the future of data centers’ energy requirements remains uncertain.

Go to Original Article
Author:

Deploy and configure WSUS 2019 for Windows patching needs


In this video, I want to show you how to deploy the Windows Server Update Services, or WSUS, in Windows Server 2019.

I’m logged into a Windows Server 2019 machine that is domain-joined. Open Server Manager and click on Manage, then go to Add Roles and Features to launch the wizard.

Click Next and choose the Role-based or feature-based installation option and click Next. Select your server from the server pool and click Next to choose the roles to install.

Scroll down and choose the Windows Server Update Services role, then click Add Features. There are no additional features needed, so click Next.

At the WSUS screen: If you need SQL Server connectivity, you can enable it here. I’m going to leave that checkbox empty and click Next.

I’m prompted to choose a location to store the updates that get downloaded. I’m going to store the updates in a folder that I created earlier called C:\Updates. Click Next to go to the confirmation screen. Everything looks good here, so I’ll click Install.

After a few minutes, the installation process completes. Click Close.

The next thing that we need to do is to configure WSUS for use. Go to the notifications icon and click on that. We have some post-deployment configuration tasks that need to be performed, so click on Launch Post-Installation tasks. After a couple of minutes, the notification icon changes to a number. If I click on that, then we can see the post-deployment configuration was a success.

Close this out and click on Tools, and then click on Windows Server Update Services to open the console. Select the WSUS server and expand that to see we have a number of nodes underneath the server. One of the nodes is Options. Click on Options and then click on WSUS Server Configuration Wizard.

Click Next on the Before You Begin screen and then I’m taken to the Microsoft Update Improvement Program screen that asks if I want to join the program. Deselect that checkbox and click Next.

Next, we choose an upstream server. I can synchronize updates either from another Windows Server Update Services server or from Microsoft Update. This is the only WSUS server in my organization, so I’m going to synchronize from Microsoft Update, which is the default selection, and click Next.

I’m prompted to specify my proxy server. I don’t use a proxy server in my organization, so I’m going to leave that blank and click Next.

Click the Start Connecting button. It can take several minutes for WSUS to connect to the upstream update server, but the process is finally finished.

Now the wizard asks to choose a language. Since English is the only language spoken in my organization, I’m going to choose the option to download updates in English and click Next.

I’m asked which products I want to download updates for — I’m going to choose all products. I’ll go ahead and click Next.

Now I’m asked to choose the classifications that I want to download. In this case, I’m just going to go with the defaults [Critical Updates, Definition Updates, Security Updates and Upgrades]. I’ll click Next.

I’m prompted to choose a synchronization schedule. In a production organization, you’re probably going to want to synchronize automatically. I’m going to leave this set to synchronize manually. I’ll go ahead and click Next.

I’m taken to the Finished screen. At this point, we’re all done, aside from synchronizing updates, which can take quite a while to complete. If you’d like to start the initial synchronization process, now all you have to do is select the Begin Initial Synchronization checkbox and then click Next, followed by Finish.

That’s how you deploy and configure Windows Server Update Services.



What’s new with PowerShell error handling?

Hitting errors — and resolving them — is an inevitable part of working with technology, and PowerShell is no exception.

No one writes perfect code. Your scripts might have a bug or need to account for a resource getting disconnected, a service hitting a problem or a badly formatted input file. Learning how to interpret an error message, discover the root cause and handle the error gracefully is an important part of working with PowerShell. The development team behind the open source PowerShell 7 has improved PowerShell error handling both when you run a script and when you enter commands in a shell.

This article walks you through PowerShell error handling in a simple script and introduces several new features in PowerShell 7 that make the process more user-friendly.

How to find PowerShell 7

To start, be sure you have PowerShell 7 installed. This is the latest major release for the tool that had been called PowerShell Core up until the release of version 7. Microsoft still supports the Windows PowerShell 5.1 version but does not plan to give it the new features that the project team develops for open source PowerShell.

PowerShell 7 is available for Windows, Mac and Linux. The latest version can be installed from the PowerShell GitHub page.

On Windows, you can also use PowerShell 7 in the new Windows Terminal application, which offers improvements over the old Windows console host.

Error messages in previous PowerShell versions

A common problem for newcomers to Windows PowerShell 5.1 and the earlier PowerShell Core releases is that when something goes wrong, it’s not clear why.

For example, imagine you want to export a list of local users to a CSV file, but your script contains a typo:

Get-LocalUser |= Export-Csv local_users.csv

This is what you would see when you run the script:

PowerShell error message
Before the PowerShell 7 release, this is the type of error message that would display if there was a typo in a command.

The error output contains the critical information — there’s an equals symbol that doesn’t belong — but it can be difficult to find in the wall of red text.

A longtime variable gets new purpose

Did you know that PowerShell has a preference variable called $ErrorView? Perhaps not, because until now it hasn’t been very useful.

The $ErrorView variable determines what information gets sent to the console and how it is formatted when an error occurs. The message can vary if you’re running a script file as opposed to entering a command in the shell.

In previous versions of PowerShell, $ErrorView defaulted to NormalView — this is the source of the wall of red text seen in the previous screenshot.

That all changes with PowerShell 7. There’s a new option for $ErrorView that is now the default called ConciseView.

Errors get clearer formatting in PowerShell 7

When we run the same command with the error in PowerShell 7 with the new default ConciseView, the error message is easier to understand.

ConciseView option
The new ConciseView option reduces the clutter and highlights the error location with a different color.

The new PowerShell error handling highlights the problem area in the command with a different color and does not overload you with too much information.

Let’s fix the typo and continue testing.

Shorter errors in the shell

Another error you might encounter when writing to a CSV is that the target file is locked. For example, it’s possible the file is open in Excel.

If you’re using PowerShell as a shell, the new default ErrorView will now give you just the error message with no extraneous information. You can see the length of the error from Windows PowerShell 5.1 and its NormalView below.

Windows PowerShell error message
The default error message in Windows PowerShell 5.1 provides a lot of information but not in a useful manner.

In contrast, thanks to the ConciseView option, PowerShell error handling in the newest version of the automation tool provides a more succinct message when a problem occurs.

PowerShell 7 error message
The ConciseView option provides a more straightforward error message when a problem with a command occurs.

You can much more easily see that the file is locked and start thinking about fixing the problem.

Learning how to explore error records

We’ve seen how PowerShell 7 improves error messages by providing just the information you need in a more structured manner. But what should you do if you need to dig deeper? Let’s find out by continuing to use this error as an example: “The process cannot access the file … because it is being used by another process.”

Taking the terror out of $Error

Every time PowerShell encounters an error, it’s written to the $Error automatic variable. $Error is an array and the most recent error is $Error[0].
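Because $Error is an ordinary array, you can work with it like any other collection:

```powershell
$Error.Count    # how many errors this session has recorded
$Error[0]       # the most recent error record
$Error[1]       # the error before that
$Error.Clear()  # empty the list, e.g. before a fresh test run
```

Note that $Error holds every error from the session, so index 0 always moves to the newest one.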

To learn more about your most recent error in previous versions of PowerShell, you would explore $Error[0] with cmdlets such as Select-Object and Format-List. This type of examination is laborious: You can only expand one property at a time, and it's easy to miss vital nested information contained in a handful of properties.

For example, look at the output from the command below.

$Error[0] | Select-Object *
$Error automatic variable
The $Error automatic variable in PowerShell before version 7 stored errors but was not flexible enough to give a deeper look at the properties involved.

There’s no way of knowing that a wealth of valuable data lives under the properties Exception and InvocationInfo. The next section shows how to get at this information.

Learning to explore with Get-Error

PowerShell 7 comes with a new cmdlet called Get-Error that gives you a way to survey all the information held within a PowerShell error record.

Run without any arguments, Get-Error simply shows the most recent error, as you can see in the screenshot below.

Get-Error cmdlet output
The new Get-Error cmdlet in PowerShell 7 gives you an easier way to get more information about errors.

You are immediately shown the hierarchy of useful objects and properties nested inside the error record. For example, you can see the Exception property isn’t a dump of information; it contains child properties, some of which have their own children.
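Get-Error also takes a -Newest parameter to show more than one record, and it accepts error records from the pipeline:

```powershell
Get-Error              # full detail for the most recent error
Get-Error -Newest 3    # the three most recent error records
$Error[1] | Get-Error  # inspect a specific record from $Error
```

This makes it easy to review a whole batch of failures from a script run rather than just the last one.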

If you want to reuse the error message in your code to write it to a log file or the Event Viewer, then you can use the following command to store the message:

$Error[0].Exception.Message

Use ErrorVariable to store error records

The Get-Error cmdlet also accepts error records from the pipeline. This is particularly handy if you use the -ErrorVariable common parameter to store errors for later inspection, which you can do with the following code:

# +myErrors means "add error to $myErrors variable"
Get-LocalUser | Export-Csv local_users.csv -ErrorVariable +myErrors
# Inspect the errors with Get-Error
$myErrors | Get-Error

By using Get-Error, you can see that an ErrorVariable holds information somewhat differently than the $Error variable. The error message is present in several places, most simply in a property named Message, as shown in the following screenshot.

ErrorVariable parameter
Using the ErrorVariable parameter gives a more flexible way to log errors rather than using the $Error variable, which saves every error in a session.

Bringing it all together

You’ve now used Get-Error to inspect error records, both from your shell history and from an ErrorVariable, and you’ve seen how to access a property of the error.

The final step is to tie everything together by reusing the property in your script. This example stores errors in $myErrors and writes any error messages out to a file:

Get-LocalUser | Export-Csv local_users.csv -ErrorVariable +myErrors
if ($myErrors) {
    $myErrors.Message | Out-File errors.log -Append
}

If you want to get serious about scripting and automation, then it's worth investigating PowerShell error handling, which got a significant boost in version 7. It's particularly helpful to store errors in a variable for later investigation or to share with a colleague.
