
Women as allies for women: Understanding intersectionality – The Official Microsoft Blog

One of my earliest learnings was that my experiences as a woman were not identical to other women’s experiences, although they were similar. As with any dimension of identity, the way women experience the world depends on a much larger context. As a white girl growing up in Victoria, British Columbia, I saw multiple layers in my own experiences. Although my brothers and I had what was necessary, we did not have much socioeconomic privilege. What I learned as I watched the world around me is that, as a benefit of my race, it was easier for me to cover my socioeconomic status than it was for my friends who were not white.

The United Nations marked March 8 as International Women’s Day by declaring that “fundamental freedoms require the active participation, equality and development of women everywhere.” That declaration includes all women, with intersectionality in mind.

Understanding intersectionality in the workplace

It starts with something as simple as the way we think about all the dimensions of our identity, including things like race, ethnicity, disability, religion, age and sexual orientation. Even class, education, geography and personal history can alter how we experience womanhood. When Kimberlé Crenshaw coined the term intersectionality 30 years ago, she explained it as how these overlapping identities and conditions impact the way we experience life’s challenges and opportunities, the privileges we have, the biases we face.

So simply focusing on a single dimension of identity, without that context, is not always helpful. When we consider women as a single category, as a monolith, it can be misleading at best, dangerous at worst. Doing so overlooks the variations of circumstances and perspectives within the group and dismisses real lived experiences as outliers or exceptions. “Women’s workplace issues” is a vague term without enough specificity to drive action. Women of color, women with disabilities, transgender women, women who are the first in their families to hold corporate or professional jobs, women who are caregivers — all deal with additional social, cultural, regional or community demands that may not exist for others. Although all women navigate varying degrees of conscious and unconscious gender bias, intersections of identity can place compounded pressure on a woman to downplay other aspects of her life to conform — a behavior called covering, as explored by Kenji Yoshino — leading to even greater workplace stress.

To increase hiring, retention, representation and the development of women in the workplace, companies must be intentional about, and accountable for, recognizing the diversity within the diversity. Conventional strategies to increase the representation of women in a workplace have mostly benefited those who do not also experience intersectional challenges. By getting curious and exploring the lived experiences of women through the lens of intersectionality, we become more precise about root causes and about finding ways to generate systemic solutions for all.

Setting the stage for allyship

Understanding all this can be a powerful catalyst for change, not just for organizations as a whole but also for individuals. At Microsoft we are refining how we think about allyship. Part of that exploration is the recognition that, as Microsoft employees, each of us has some dimension of privilege. This isn’t meant to minimize or negate the very real ways that communities experience significant, systematic historical bias or oppression. Rather, it is meant to shine a light on our opportunity to show up for each other. For example, as a community of women we have an opportunity to be more thoughtful about the experiences of our peers who face greater challenges due to their intersectional identity. So although traditionally we might look to men in the workplace to carry the full weight of allyship, women in the workplace also have an opportunity to be thoughtful allies for others in their community.

Such an awareness opens the door for true allyship — an intentional commitment to use your voice, credibility, knowledge, place or power to support others in the way they want to be supported. I am very aware of my opportunity, due to my personal privilege, to show up for other women in a meaningful way. I embrace my obligation to create space for other voices to be heard, not just on International Women’s Day, but all year round.


Author: Microsoft News Center

How Amazon HR influences hiring trends

Amazon is a powerhouse when it comes to recruiting. It hires at an incredible pace and may be shaping how other firms hire, pay and find workers. But it also offers a cautionary tale, especially in the use of AI.

Amazon HR faces a daunting task. The firm is adding thousands of employees each quarter through direct hiring and acquisitions. In the first quarter of 2019, it reported having 630,000 full- and part-time employees. By the third quarter, that number had risen 19% to 750,000 employees.

Amazon’s hiring strategy includes heavy use of remote workers and flex jobs, including a program called CamperForce. The program was designed for nomadic people who live full- or part-time in recreational vehicles; they help staff warehouses during peak retail seasons.

Amazon’s leadership in remote jobs can be gauged through FlexJobs, a site that specializes in connecting professionals to remote work. Amazon ranked sixth this year on FlexJobs’ list of the top 100 companies with remote jobs. The rankings are based on data from some 51,000 firms, with placement determined by the volume of remote job ads.

The influence of large employers

Amazon’s use of remote work is influential, said Brie Reynolds, career development manager and coach at FlexJobs. There is “a lot of value in seeing a large, well-known company — a successful company — employing remote workers,” she said.

In April, Amazon CEO Jeff Bezos challenged other retailers to raise their minimum wage to $15, which is what Amazon did in 2018. “Better yet, go to $16 and throw the gauntlet back at us,” said Bezos, in his annual letter to shareholders.

But the impact of Amazon’s wage increase also raises questions.

“Amazon is such a large employer that increases for Amazon’s warehouse employees could easily have a large spillover effect raising wage norms among employers in similar industries and the same local area,” said Michael Reich, a labor market expert and a professor of economics at the University of California at Berkeley. But without more data from Amazon and other companies in the warehouse sector, he said it’s difficult to tell where the evidence falls.

Amazon HR’s experience with AI in recruiting may also be influential, but as a warning.

The warning from Amazon

In late 2018, Reuters reported that Amazon HR had developed an algorithm for hiring technical workers. But because of the historical data it was trained on, the algorithm recommended men over women; the technical workforce suffers from a large gender gap.

The Amazon experience “shows that all historical data contains an observable bias,” said John Sumser, principal analyst at HRExaminer. “In the Amazon case, utilizing historical data perpetuated the historical norm — a largely male technical workforce.”

Any AI built on anything other than historical data runs the distinct risk of corrupting the culture of the client, Sumser said.

In July, Amazon said it would spend $700 million to upskill 100,000 U.S. workers through 2025. That works out to about $7,000 per worker, or roughly $1,000 a year per employee, which may be well below Amazon HR’s cost of hiring new employees.


In late 2018, Amazon HR’s talent acquisition team had more than 3,500 people. The company is interested in new HR tech and takes time to meet with vendors, said an Amazon recruiting official at the HR Technology Conference and Expo.

But Amazon, overall, doesn’t say much about its HR practices, and that may be tempering the company’s influence, said Josh Bersin, an independent HR analyst.

Bersin doesn’t believe the industry is following Amazon, in part, he said, because of the company’s Apple-like secrecy about its internal operations.

“I think people are interested in what they’re doing, and they probably are doing some really good things,” Bersin said. “But they’re not taking advantage of the opportunity to be a role model.”


Zerto plays big role in McKesson’s disaster recovery plan

For McKesson Corporation, downtime may literally be a matter of life or death.

Hospitals and other healthcare facilities can’t reasonably keep every type of drug in stock in their own dispensaries. McKesson distributes drugs and medical supplies to hospitals, both during routine resupplies and emergencies. A strong and properly tested disaster recovery plan ensures nothing stops those important deliveries.

“We’re delivering pharmaceuticals. If we can’t ship product, somebody could die,” said Jeffrey Frankel, senior disaster recovery engineer at McKesson Corporation.

McKesson Corporation is a Fortune 7 pharmaceutical giant with about 70,000 employees and business units spread across the world. From an IT perspective, each of those units runs autonomously — there is no single IT infrastructure that connects all of them. Each location has its own IT staff, and technology stacks are not standardized.

Still, all disaster recovery (DR) inside McKesson is handled by a central DR group, which Frankel is a part of. He said the biggest reason for this was to standardize DR practices across the business units and make it easier to establish and follow protocol.

“Individual units might be using VMware or Hyper-V or anything at all. But security standards and DR standards need to meet ours,” Frankel said.

A centralized DR group also made it easier to test and prove recoverability. Frankel said this was especially important for keeping insurance and auditors satisfied.

McKesson began using Zerto six years ago, and it was the first time the organization used a third-party vendor for DR. Frankel and his DR group were only responsible for the pharmaceutical side of the business at the time, and they were previously using VMware Site Recovery Manager (SRM). However, Frankel said Zerto proved to be so much more efficient than SRM that the DR group’s responsibility expanded to the entire organization.


One key feature that led Frankel to a Zerto purchase was journaling that allows for point-in-time recovery. He said this is a key difference between high availability (HA) and DR that many in his organization didn’t initially understand. McKesson was already replicating to a second site, which solved the HA use case, but DR needs the additional functionality of restoring to an earlier version if files are corrupted or compromised.

Frankel evaluated Actifio, Veeam and SRM, and said Zerto beat them all on functionality, ease of use and flexibility. McKesson’s business units have a wide array of failover setups, including on premises to Microsoft Azure, on premises to another on-premises data center, Microsoft Hyper-V to Azure, VMware to Azure and VMware to IBM data availability as a service. Zerto worked with all of these setups, in addition to lowering McKesson’s recovery time and recovery point objectives (RTOs and RPOs).

“We have a wide variety of implementations, but none of our RPOs are ever above 15 minutes,” Frankel said.

DR isn’t just the technology behind it, though. McKesson’s DR group is broken down into three teams, each handling a different aspect of DR.

The first team handles business continuity from the facility standpoint. They focus on the portion of the disaster recovery plan that deals with what to do if the facility is compromised and where workers go in order to continue working.

A second team focuses on consulting and logistics. This team works with executives to outline the scope of what’s needed for DR, including what’s considered mission-critical and the order in which business applications need to be brought back. This team also schedules tests and handles logistics and coordination when disaster strikes.

Finally, the engineering team, which Frankel is a part of, is responsible for all the technical aspects of the disaster recovery plan. They piece together the IT tools that make the previous team’s plan work.

One new feature Zerto introduced that Frankel wants to make greater use of is analytics. Before it was implemented, he had to give consultants, auditors and other non-IT personnel direct access to the Zerto console to look at the data. That meant untrained staff could accidentally start a failover process. The analytics and reporting functions have removed that risk.

“We didn’t want to give nontechnical people admin rights. Now, they can’t break anything,” Frankel said.


Navy sails SAP ERP systems to AWS GovCloud

The U.S. Navy has moved several SAP and other ERP systems from on premises to AWS GovCloud, a public cloud service designed to meet the regulatory and compliance requirements of U.S. government agencies.

The project entailed migrating 26 ERPs across 15 landscapes, serving around 60,000 users across the globe. The Navy tapped SAP National Security Services Inc. (NS2) for the migration. NS2 was spun out of SAP specifically to sell SAP systems that adhere to the highly regulated conditions that U.S. government agencies operate under.

Approximately half of the systems that moved to AWS GovCloud were SAP ERP systems running on Oracle databases, according to Harish Luthra, president of NS2’s secure cloud business. The SAP systems were migrated to the SAP HANA database as part of the move, while non-SAP systems remain on their respective databases.

Architecture simplification and reducing TCO

The Navy wanted to move the ERP systems to take advantage of new technologies better suited to cloud deployments, as well as to simplify the underlying ERP architecture and reduce the total cost of ownership (TCO), Luthra said.

The migration also enabled the Navy to reduce its data footprint from 80 TB to 28 TB.


“Part of it was done through archiving, part was disk compression, so the cost of even the data itself is reducing quite a bit,” Luthra said. “On the AWS GovCloud side, we’re using one of the largest instances — 12 terabytes — and will be moving to a 24 terabyte instance working with AWS.”

The Navy also added applications to consolidate financial systems and improve data management and analytics functionality.

“We added one application called the Universe of Transactions, based on SAP Analytics that allows the Navy to provide a consolidated financial statement between Navy ERP and their other ERPs,” Luthra said. “This is all new and didn’t exist before on-premises and was only possible to add because we now have HANA, which enables a very fast processing of analytics. It’s a giant amount of transactions that we are able to crunch and produce a consolidated ledger.”


Accelerated timeline

The project was done at an accelerated pace that had to be sped up even more when the Navy altered its requirements, according to Joe Gioffre, SAP NS2 project principal consultant. The original go-live date was scheduled for May 2020, almost two years to the day after the project began. However, when the Navy tried to move a command working capital fund onto the on-premises ERP system, it discovered the system could not handle the additional data volume and workload.

That moved the HANA cloud migration go-live date up to August 2019, ahead of the Oct. 1, 2019 start of the fiscal year, so the fund could be included.

“We went into a re-planning effort, drew up a new milestone plan, set up Navy staffing and NS2 staffing to the new plan so that we could hit all of the dates one by one and get to August 2019,” Gioffre said. “That was a colossal effort in re-planning and re-resourcing for both us and the Navy, and then tracking it to make sure we stayed on target with each date in that plan.”


Governance keeps project on track

Tight governance over the project was the key to completing it in the accelerated timeframe.

“We had a very detailed project plan with a lot of moving parts and we tracked everything in that project plan. If something started to fall behind, we identified it early and created a mitigation for it,” Gioffre explained. “If you have a plan that tracks to this level of detail and you fall behind, unless you have the right level of governance, you can’t execute mitigation quickly enough.”

The consolidation of the various ERPs onto one SAP HANA system was a main goal of the initiative, and it now sets up the Navy to take advantage of next-generation technology.

“The next step is planning a move to SAP S/4HANA and gaining process improvements as we go to that system,” he said.

Proving confidence in the public cloud

It’s not a particular revelation that public cloud hyperscalers like AWS can handle huge government workloads, but it is notable that the Department of Defense is confident in going to the cloud, according to analyst Joshua Greenbaum, principal at Enterprise Applications Consulting, a firm based in Berkeley, Calif.

“The glitches that happened with Amazon recently and [the breach of customer data from Capital One] highlight the fact that we have a long way to go across the board in perfecting the cloud model,” Greenbaum said. “But I think that SAP and its competitors have really proven that stuff does work on AWS, Azure and, to a lesser extent, Google Cloud Platform. They have really settled in as legitimate strategic platforms and are now just getting the bugs out of the system.”

Greenbaum is skeptical that the project was “easy,” but it would be quite an accomplishment if it was done relatively painlessly.

“Every time you tell me it was easy and simple and painless, I think that you’re not telling me the whole story because it’s always going to be hard,” he said. “And these are government systems, so they’re not trivial and simple stuff. But this may show us that if the will is there and the technology is there, you can do it. It’s not as hard as landing on the moon, but you’re still entering orbital space when you are going to these cloud implementations, so it’s always going to be hard.”


Capital One breach suspect may have hit other companies

A new report looking into the attacker accused in the Capital One breach discovered references to other potential victims, but no corroborating evidence has been found yet.

The FBI accused Paige Thompson of the attack; she allegedly went by the name “Erratic” on various online platforms, including an invite-only Slack channel. The Slack channel was first reported on by investigative cybersecurity journalist Brian Krebs, who pointed out that file names referenced in the channel suggested other organizations were potentially victims of similar attacks.

A new report on the Capital One breach by CyberInt, a cybersecurity firm based in London, built on the information uncovered by Krebs. Jason Hill, lead cybersecurity researcher at CyberInt, said the company was able to gain access to the Slack channel via an open invitation link.

“This link was obtained from the now-offline ‘Seattle Warez Kiddies’ Meetup group (Listed as ‘Organized by Paige Thomson’),” Hill wrote via email. “Based on the publicly available information at the time of report completion, such as Capital One’s statement and the [FBI’s] Criminal Complaint, we were able to conduct open source intelligence gathering to fill in some of the missing detail and follow social media leads to gain an understanding of the alleged threat actor and their activity over the past months.”

According to Hill, CyberInt researchers followed the trail through a GitHub account, GitLab page and a screenshot of a file archival process shared in the Slack channel.

“The right-hand side of the screen appears to show the output of the Linux command ‘htop’ that lists current processes being executed. In this case, under the ‘Command’ heading, we can see a number of ‘tar --remove-files -cvf -’ processes, which are compressing data (and then removing the uncompressed source),” Hill wrote. “These files correlate with the directory listing, and potential other victims, as seen later within the Slack channel.”

Between the files named in the screenshot and the corresponding messages in the Slack channel, it appeared as though in addition to the Capital One breach, the threat actor may have stolen 485 GB of data from various other organizations. Some organizations were implied by only file names, such as Ford, but others were named directly by Erratic in messages, including the Ohio Department of Transportation, Michigan State University, Infoblox and Vodafone.

Hill acknowledged that CyberInt did not directly contact any of the organizations named, because the company policy is normally to “contact organizations when our research detects specific vulnerabilities that can be mitigated, or threats detected by our threat intelligence platform.

“However in this case, our research was focused on the Capital One breach to gain an understanding of the threat actor’s tactics, techniques and procedures (TTP) and resulted in the potential identification of additional victims rather than the identification of any specific vulnerability or ongoing threat,” Hill wrote. “Our report offered general advice for those concerned about the TTP based on these findings.”

We contacted some of the organizations either directly named or implied via file name in Erratic’s Slack channel. The Ohio Department of Transportation did not respond to a request for comment. Ford confirmed an investigation is underway to determine if the company was the victim of a data breach.

A spokesperson for Michigan State University also confirmed an investigation is underway and the university is cooperating with law enforcement authorities, but at this point there is “no evidence to suggest MSU was compromised.”

Similarly, an Infoblox spokesperson said the company was “continuing to investigate the matter, however, at this time, there is no indication that Infoblox was in any way involved with the Capital One breach. Additionally, there is no indication of an intrusion or data breach causing Infoblox customer data to be exposed.”

A Vodafone spokesperson claimed the company takes security seriously, but added, “Vodafone is not aware of any information that relates to the Capital One security breach.”


Data ethics issues create minefields for analytics teams

GRANTS PASS, Ore. — AI technologies and other advanced analytics tools make it easier for data analysts to uncover potentially valuable information on customers, patients and other people. But, too often, consultant Donald Farmer said, organizations don’t ask themselves a basic ethical question before launching an analytics project: Should we?

In the age of GDPR and like-minded privacy laws, though, ignoring data ethics isn’t a good business practice for companies, Farmer warned in a roundtable discussion he led at the 2019 Pacific Northwest BI & Analytics Summit. IT and analytics teams need to be guided by a framework of ethics rules and motivated by management to put those rules into practice, he said.

Otherwise, a company runs the risk of crossing the line in mining and using personal data — and, typically, not as the result of a nefarious plan to do so, according to Farmer, principal of analytics consultancy TreeHive Strategy in Woodinville, Wash. “It’s not that most people are devious — they’re just led blindly into things,” he said, adding that analytics applications often have “unforeseen consequences.”

For example, he noted that smart TVs connected to home networks can monitor whether people watch the ads in shows they’ve recorded and then go to an advertiser’s website. But acting on that information for marketing purposes might strike some prospective customers as creepy, he said.

Shawn Rogers, senior director of analytic strategy and communications at vendor Tibco Software Inc., pointed to a trial program that retailer Nordstrom launched in 2012 to track the movements of shoppers in its stores via the Wi-Fi signals from their cell phones. Customers complained about the practice after Nordstrom disclosed what it was doing, and the company stopped the tracking in 2013.

“I think transparency, permission and context are important in this area,” Rogers said during the session on data ethics at the summit, an annual event that brings together a small group of consultants and vendor executives to discuss BI, analytics and data management trends.

AI algorithms add new ethical questions

Being transparent about the use of analytics data is further complicated now by the growing adoption of AI tools and machine learning algorithms, Farmer and other participants said. Increasingly, companies are augmenting — or replacing — human involvement in the analytics process with “algorithmic engagement,” as Farmer put it. But automated algorithms are often a black box to users.

Mike Ferguson, managing director of U.K.-based consulting firm Intelligent Business Strategies Ltd., said the legal department at a financial services company he works with killed a project aimed at automating the loan approval process because the data scientists who developed the deep learning models to do the analytics couldn’t fully explain how the models worked.


And that isn’t an isolated incident in Ferguson’s experience. “There’s a loggerheads battle going on now in organizations between the legal and data science teams,” he said, adding that the specter of hefty fines for GDPR violations is spurring corporate lawyers to vet analytics applications more closely. As a result, data scientists are focusing more on explainable AI to try to justify the use of algorithms, he said.

The increased vetting is driven more by legal concerns than data ethics issues per se, Ferguson said in an interview after the session. But he thinks that the two are intertwined and that the ability of analytics teams to get unfettered access to data sets is increasingly in question for both legal and ethical reasons.

“It’s pretty clear that legal is throwing their weight around on data governance,” he said. “We’ve gone from a bottom-up approach of everybody grabbing data and doing something with it to more of a top-down approach.”

Jill Dyché, an independent consultant who’s based in Los Angeles, said she expects explainable AI to become “less of an option and more of a mandate” in organizations over the next 12 months.

Code of ethics not enough on data analytics

Staying on the right side of the data ethics line takes more than publishing a corporate code of ethics for employees to follow, Farmer said. He cited Enron’s 64-page ethics code, which didn’t stop the energy company from engaging in the infamous accounting fraud scheme that led to bankruptcy and the sale of its assets. Similarly, he sees such codes having little effect in preventing ethical missteps on analytics.

“Just having a code of ethics does absolutely nothing,” Farmer said. “It might even get in the way of good ethical practices, because people just point to it [and say], ‘We’ve got that covered.'”

Instead, he recommended that IT and analytics managers take a rules-based approach to data ethics that can be applied to all three phases of analytics projects: the upfront research process, design and development of analytics applications, and deployment and use of the applications.


How to Set Up Hyper-V VM Groups with PowerShell

The other day I was poking around the Hyper-V PowerShell module and came across a few commands that I must have initially overlooked. I had no idea the feature existed until I found the cmdlets: it turns out you can organize your virtual machines into groups. I can’t find anything in the Hyper-V Manager console that exposes this feature either, so if you want to take advantage of it, PowerShell is the way to go.
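If you want to see the full set of these commands on your own system, something along these lines will enumerate them:

# List the VM group cmdlets in the Hyper-V module
Get-Command -Module Hyper-V -Noun VMGroup* | Select-Object -Property Name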

VM Group cmdlets in PowerShell

NOTE: You should take the time to read through the full help and examples before using any of these commands.

Creating a VM Group

The commands for working with VM groups support remote connections and credentials. The default is the local host; note that you can only specify credentials for remote connections. Creating a new group is a pretty simple matter. All you need is a group name and type. You can create a group that is a collection of virtual machines (VMCollectionType) or a group that is a collection of other groups (ManagementCollectionType). I’m going to create a group for virtual machines.
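A minimal sketch; the group name TestGroup is my illustration, not a required value:

# Create a group that holds virtual machines
New-VMGroup -Name TestGroup -GroupType VMCollectionType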

Creating a new VM group

I’ve created the group and can retrieve it with Get-VMGroup.
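For example:

Get-VMGroup -Name TestGroup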

Retrieving a VM Group with PowerShell

You can only create one group at a time, but you can create the same group on multiple servers.
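A sketch of such a command, with hypothetical host names HV1 and HV2:

# Create the same management group on two Hyper-V hosts at once
New-VMGroup -Name Master -GroupType ManagementCollectionType -ComputerName HV1,HV2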

This command created the management group Master on both Hyper-V hosts.

Adding a Group Member

Adding members to a group requires a separate step but isn’t especially difficult. To add members to a VMCollectionType group, you need references to the virtual machine objects.
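Something along these lines, where the virtual machine name SRV1 is illustrative:

# Add a virtual machine object to the collection
Add-VMGroupMember -VMGroup (Get-VMGroup TestGroup) -VM (Get-VM SRV1)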

The command won’t write anything to the pipeline unless you use -Passthru. You can take advantage of nested expressions and create a group with members all in one line.

With the one-line command below, I created another group called Win and added a few virtual machines to the group.
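A sketch of that one-liner; the Win* name pattern is my stand-in for the actual virtual machine names:

# Create the group and add matching VMs in a single line
Add-VMGroupMember -VMGroup (New-VMGroup -Name Win -GroupType VMCollectionType) -VM (Get-VM Win*) -Passthru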

Creating a VM Group and Members in a PowerShell one-liner

Since I have two groups, let me add them to the management group.
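For instance:

# Nest both VM collections inside the Master management group
Add-VMGroupMember -VMGroup (Get-VMGroup Master) -VMGroupMember (Get-VMGroup TestGroup,Win)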

Adding VM Management Groups with PowerShell

And yes, you can put a virtual machine in more than one group.

Retrieving Groups and Group Members

Using Get-VMGroup is pretty straightforward, and once you understand the object output, you can customize it.
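For example, to see every group with its type and members:

Get-VMGroup | Select-Object -Property Name,GroupType,VMMembers,VMGroupMembers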

Enumerating VM Groups

Depending on the group type, you will have a nested collection of objects. You can easily enumerate them using dotted object notation.
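For a VM collection, a quick sketch:

# Enumerate the virtual machines in the Win group
(Get-VMGroup Win).VMMembers | Select-Object -Property Name,State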

Listing VM Group virtual machines

You can do something similar with management groups.
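For example:

# Unroll a management group into the VMs of its member groups
(Get-VMGroup Master).VMGroupMembers |
    ForEach-Object { $_.VMMembers } |
    Select-Object -Property Name,State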

Expanding VM management groups with PowerShell

Be aware that it is possible to have nested management groups which might make unrolling things to get to the underlying virtual machines a bit tricky. I would suggest restraint until you fully understand how VM groups work and how you intend to take advantage of them.

The output of the VMMembers property is the same virtual machine object you would get using Get-VM, so you can pipe it to other commands.
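A sketch, with -WhatIf included so it is safe to try:

# Start every powered-off member of the Win group
(Get-VMGroup Win).VMMembers |
    Where-Object { $_.State -eq 'Off' } |
    Start-VM -WhatIf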

Incorporating VM Groups into a PowerShell expression

Group membership can also be discovered from the virtual machine.
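The virtual machine object exposes a Groups property, so something like this should work (SRV1 again being illustrative):

(Get-VM SRV1).Groups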

Listing groups from the virtual machine

You cannot manage group membership from the virtual machine itself.

To remove groups and members, the corresponding cmdlets (Remove-VMGroup and Remove-VMGroupMember) should be self-evident; they follow the same patterns I demonstrated for creating VM groups and adding members.
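A quick sketch using the illustrative names from above:

# Remove the VM from the group, then remove the now-empty group
Remove-VMGroupMember -VMGroup (Get-VMGroup TestGroup) -VM (Get-VM SRV1)
Remove-VMGroup -Name TestGroup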

Potential Obstacles

When you assign a virtual machine to a group, the membership is linked to the virtual machine. This may pose a challenge down the road. I haven’t set up replication with a virtual machine that belongs to a group, so I’m not sure what the effect, if any, might be. But I do know that if you export a virtual machine that belongs to a group and import it on a different Hyper-V host, you’ll encounter an error about the missing group. You can remove the group membership on import, so it isn’t that much of a showstopper. Still, you might want to remove the virtual machine from any groups prior to exporting.

Doing More

The VM group cmdlets are very useful, but not useful enough. At least for me. I have a set of expectations for using these groups and the cmdlets as written don’t meet those expectations. Fortunately, I can write a few PowerShell functions to get the job done. Next time, I’ll share the tools I’m building around these commands.

In the meantime, I hope you’ll share how you think you’ll take advantage of this feature!


Author: Jeffery Hicks