
Gigster data suggests gig economy methods can benefit SIs

Systems integrators can cut staffing costs and boost project success rates if they take a cue from the gig economy.

A Constellation Research study published this week reported that gig economy IT projects used 30% fewer FTEs over time than projects staffed the traditional way. In addition, gig economy projects experienced a 9% failure rate versus the IT industry average of 70% to 81%, according to the market research firm. Large IT initiatives such as digital transformation projects are especially prone to encountering obstacles.

Gig economy staffing “lowers risk substantially by boosting IT success rates dramatically,” according to Constellation Research’s report, “How the Gig Economy Is Reshaping Tech Careers and IT Itself.” The report’s findings are based on Constellation Research’s analysis of project data from Gigster, a gig economy platform that focuses on IT staffing.

The research firm said it compared data from 190 Gigster projects with typical industry projects. Gigster helped fund the research.

From outsourcing to crowdsourcing

Gigster CEO Chris Keene said the gig approach — crowdsourcing IT personnel on demand — will “change the way systems integrators think about their talent.” The Constellation Research report, he added, suggests the application of gig economy best practices “could be as big for systems integrators as outsourcing was years ago.”

Outsourcing’s labor arbitrage made its mark in the systems integration field beginning in the 1990s. Today, crowdsourcing and what Keene refers to as “elastic staffing” also aim to reduce personnel expenses on IT projects. He said Gigster’s Innovation Management Platform, a SaaS offering, lets organizations assess talent based on the quality of the work an IT staffer performed on previous projects. The tool provides sentiment analysis, polling an IT staffer’s teammates and customers to gauge their satisfaction. The tool’s elastic staffing capabilities, meanwhile, are used to identify on-demand peer experts who review a project’s key deliverables. As an integrator reaches a project milestone, it can use the Gigster platform to conduct sentiment analysis and an expert review before moving on to the next phase.

The increase in staffing efficiency stems from elastic versus static approaches. The Constellation Research report noted personnel assigned to a traditional project tend to remain on the team, even when their skills are not in high demand during particular project phases.

“Activity shifts among project team members over time, resulting in a relatively inefficient model because underutilized people continue to add to the project budget and overhead,” the report observed. For example, architects may play their biggest role at the onset of a project, while demand for developers and QA personnel grows over the course of a project.

Agile staffing

Keene said the gig economy approach avoids locking people into specialized projects for long periods of time, making the staffing process more agile. He said when organizations discuss agile approaches, they are typically referring to their delivery processes.

“Most agile projects are only agile in the way they drive their processes,” Keene said. “They are not agile in the way they resource those projects.”

Keene, meanwhile, attributed the increase in project success rates to the peer review function. He said bringing in an on-demand expert to review a project milestone helps avoid the “groupthink” that can derail an IT initiative.

Gigster is based in San Francisco.

SAP elevates partner-developed apps

SAP has revamped SAP App Center, an online marketplace where customers can purchase SAP-based apps developed by partners.

The SAP App Center now features an updated user experience to make it easier for customers to search for offerings based on the underlying SAP product used, certification, publisher and solution type, the vendor said. SAP unveiled the updated SAP App Center alongside several other new partner initiatives at the company’s Global Partner Summit, held online on June 3.

“The [SAP App Center] should be the place to go for all the customers we have. … And it should be the place to go for all of our account executives. We are going to make it easy for our customers to go there, and we are going to run campaigns to enable our customers to find what they need in the app center,” SAP chief partner officer Karl Fahrbach said during the Global Partner Summit event.

The marketplace currently features more than 1,500 partner-created offerings, according to SAP.

On the partner-facing side of the SAP App Center, the company added tools to publish SAP-based offerings and manage and track sales. “We are working very hard on reducing the time that it takes to publish an app on the app center,” Fahrbach said.

SAP said a new initiative, SAP Endorsed Apps, aims to bolster SAP partners’ software businesses by spotlighting partner apps and matching them with potential customers. SAP Endorsed Apps is an invitation-only initiative.

In addition to updating the SAP App Center, the company said it is focused on improving how partners approach SAP implementation projects. To that end, SAP introduced a set of standard processes, tools and reporting aids designed to facilitate implementations. Benefits include grants for educating partners’ consultants, incentives for partners that invest in customer success, and increased investments in partner learning and enablement, SAP said.

Fahrbach also said SAP is opening its pre-sales software demonstration environment to qualifying partners for free. Additionally, on July 1, SAP will offer partners one year of free access to SAP S/4HANA Cloud and Business ByDesign.

Channel partners find allies in backup and DR market

Several channel companies this week disclosed partnerships and distribution deals in the backup and disaster recovery (DR) market.

OffsiteDataSync, a J2 Global company offering DR and backup as a service, rolled out an expanded partnership with Zerto. The Zerto relationship lets OffsiteDataSync provide DRaaS options to a “broad spectrum of businesses,” according to OffsiteDataSync, which also partners with Veeam.

In another move, Otava, a cloud services company based in Ann Arbor, Mich., launched Otava Cloud Backup for Microsoft 365, partnering with Veeam. The Microsoft 365 SaaS offering, available for channel partners, follows the November 2019 launch of Veeam-based backup offerings such as Otava Cloud Connect, Otava-Managed Cloud Backup and Self-Managed Cloud Backup.

Meanwhile, Pax8, a cloud distributor based in Denver, added Acronis Cyber Protect to its roster of offerings in North America. The Acronis Cyber Protect service includes backup, DR, antimalware, cybersecurity and endpoint management tools.

Other news

  • A survey of IT professionals found 24% of businesses adapted to the COVID-19 pandemic without downtime, with 56% reporting two or fewer weeks of downtime. The study from Insight Enterprises, an integrator based in Tempe, Ariz., noted 40% of respondents said they had to develop or retool business resiliency plans in response to COVID-19. Insight also found IT departments are planning to invest in a range of health-related technologies, including smart personal hygiene devices (58%), contactless sensors (36%), infrared thermometers (35%) and thermal cameras (25%). A third of the respondents are looking into an IoT ecosystem that would let them pull together and analyze data gathered from those devices.
  • Research from Advanced, an application modernization services provider, revealed that about three-quarters of organizations have launched a legacy system modernization project but failed to complete the task. The company pointed to “a disconnect of priorities between technical and leadership teams” as an obstacle to getting projects over the finish line. Advanced’s 2020 mainframe modernization report also identified a broad push to the cloud: 98% of respondents cited plans to move legacy applications to the cloud this year.
  • IBM and solutions provider Persistent Systems are partnering to deploy IBM Cloud Pak offerings within the enterprise segment. Persistent Systems also launched a new IBM Cloud Pak deployment practice for migrating and modernizing IBM workloads within cloud environments.
  • Distributor Ingram Micro Cloud rolled out the Illuminate program for AWS Partner Network resellers. The Illuminate program provides partner enablement in the forms of coaching, marketing, sales and technical resources.
  • US Signal, a data center services provider based in Grand Rapids, Mich., said it will expand its cloud and data protection capabilities to include data centers in Oak Brook, Ill., and Indianapolis. The company already offers cloud and data protection in its Grand Rapids, Mich.; Southfield, Mich.; and Detroit data centers. The expanded services are scheduled for availability in July at the Oak Brook data center and in September at the Indianapolis facility.
  • ActivTrak Inc., a workforce productivity and analytics software company based in Austin, Texas, unveiled its Managed Service Provider Partner Program. The initial group of more than 25 partners, which spans North America, South America, Europe and Asia, includes Advanced Technology Group, Cloud Synergistics, Cyber Secure, EMD, NST, Nukke, Wahaya and Zinia. The three-tier program offers access to a single pane-of-glass management console, MSP Command Center. The console lets partners log into customer accounts through single sign-on, investigate and address alerts, configure the application and troubleshoot issues within individual accounts, according to the company.
  • Tanium, a unified endpoint management and security firm based in Emeryville, Calif., formally rolled out its Tanium Partner Advantage program. The program launch follows partnership announcements with NTT, Cloudflare, Okta and vArmour.
  • Nerdio, a Chicago-based company that provides deployment and management offerings for MSPs, expanded its EMEA presence. The company launched a partnership with Sepago, an IT management consultancy based in Germany. Nerdio also appointed Bas van Kaam as its field CTO for Europe, the Middle East and Africa.
  • DLT and its parent company, distributor Tech Data, launched an online forum, GovDevSecOpsHub, which focuses on cybersecurity and the application development process in the public sector.
  • Kimble Applications, a Boston-based professional services automation company, appointed Steve Sharp as its chief operations and finance officer.
  • High Wire Networks, a cybersecurity service provider based in Batavia, Ill., named Travis Ray as its director of channel sales. Ray will look to build alliances with MSPs around delivering High Wire’s Overwatch Managed Security Platform as a Service offering, the company said.

Market Share is a news roundup published every Friday.


Microsoft replaces dozens of journalists with AI system

Microsoft is replacing dozens of contract journalists with AI systems, a move intended to save money and streamline content curation but one that could also lead to more inappropriate or lackluster content appearing on Microsoft’s sites.

“By favoring machines over humans, Microsoft runs the risk that all kinds of things might go wrong,” said Dan Kennedy, associate professor of journalism at Northeastern University and author of the Media Nation blog.

AI in journalism

The tech giant currently employs full-time staff as well as contract news producers to help curate and edit homepage news on its Microsoft News platform and Microsoft Edge browser. Their duties, according to LinkedIn job descriptions, include cycling relevant news content, editing the content and pairing images with articles.

While Microsoft plans to keep its full-time staff for now, some 50 contract journalists will not have their contracts renewed at the end of the month, according to the Seattle Times.

Microsoft said in a May 29 statement it is not making the move to AI in journalism as a result of the COVID-19 pandemic.

“Like all companies, we evaluate our business on a regular basis,” Microsoft said. “This can result in increased investment in some places and, from time to time, re-deployment in others.”


Using AI for content curation isn’t new. Many social media, video and news platforms have been using AI to recommend content or remove inappropriate content for years.

News organizations, including the Washington Post and the Associated Press, have used AI to produce content quickly and inexpensively. Largely, that content is simple, such as a roundup of the latest sports scores. Other news organizations, including the New York Times, use AI to augment staff efforts, such as automatically providing research or identifying headlines and key phrases.

Risky business

Even so, AI isn’t advanced enough yet to handle the duties of human employees at the same skill level, and Microsoft is making a risky move by replacing so many employees, analysts said.

“Certainly there is a risk of badly formatted and incorrect content being produced, but a larger concern may be dull content,” said Alan Pelz-Sharpe, founder of market advisory and research firm Deep Analysis.


Readers are discerning, but journalists know how to draw in readers to even the dullest of topics, he continued. However, “that’s not a strong point of AI,” he said.

“Indeed, even the best AI-driven content is fairly easy to identify and even for readers not conversant with the nuances, they will not engage to the same degree with AI-driven content,” Pelz-Sharpe said.

Nonetheless, he pointed out, AI does work well for summarizing facts, for “‘reporting’ that is simply ‘reporting.’”

To Nick McQuire, senior vice president and head of AI and enterprise research at CCS Insight, Microsoft’s move comes as somewhat of a surprise, given Microsoft’s emphasis on responsibility in AI.

“One of their most important [principles around AI technology] is accountability, which means humans must have some oversight and accountability in the deployment of AI,” McQuire said.

“In this respect, I expect Microsoft to still have human oversight around the technology as per their standard governance procedures for AI operations,” he continued.

Microsoft’s AI governance policies are overseen by the vendor’s AI and Ethics in Engineering and Research committee, an advisory board that provides recommendations to senior leadership on responsible AI, including issues such as AI bias, regulations, safety and fairness, as well as human-AI collaboration.

Not a revolution yet

Still, Microsoft’s decision to end the employment of dozens of staff doesn’t mark a revolution for AI in journalism, said Pelz-Sharpe. Rather, it should be viewed as an incremental step.

Pointing out how other news organizations use AI, Pelz-Sharpe said that “enthusiasts like to say that AI will free reporters from drudge work so that they can report and write higher-value stories.”

But, he cautioned, “cost-cutting corporate chains are going to be tempted to use AI to replace reporters.”

And more use of AI won’t have an immediate impact on the journalism industry, but rather a cumulative one, Kennedy said.

“Lower paid entry-level jobs disappear and are automated, reducing the intake of new journalists and making the sector less attractive,” Kennedy said.  “Those jobs will likely never come back — the end result is fewer people in the industry.”


Surge in digital health tools to continue post-pandemic

Health systems have rapidly rolled out digital health tools to meet the needs of both patients and providers during the COVID-19 crisis.

Interest in digital health tools will likely continue long after the pandemic ends, according to healthcare experts. Digital health is a broad term that refers to the use of technology to deliver healthcare services to patients digitally and can include technologies such as wearable devices, mobile apps and telehealth programs.

Already, healthcare systems are increasing the number of telehealth services they provide. They are embracing symptom checker tools and tools that enable practitioners to keep tabs on patients remotely. The crisis has also prompted healthcare CIOs to look to contact tracing tools for managing the spread of the virus.

During a recent HIMSS webinar, four healthcare leaders discussed how the pandemic has accelerated the adoption of digital health tools and why that interest will continue after the pandemic ends.

Digital health tools help with response

Digital health tools such as telehealth programs have become a crucial element of the pandemic, especially as governments and health systems began mandating work-from-home and shelter-in-place orders, according to Bernardo Mariano Jr., CIO and director of digital health innovation at the World Health Organization in Switzerland.


But, Mariano said, more work needs to be done, including the development of an international health data exchange standard so countries can do a better job of supporting each other during a crisis such as COVID-19. For example, Mariano said, while Italy was suffering from an overload of patients at hospitals, neighboring countries may have been able to help treat patients remotely through telemedicine. The lack of an international “principle or regulation” hindered that capability, he said.

As the pandemic stretches on, Mariano said the proliferation of contact tracing technologies is also growing, with countries seeking to use the technology as part of their reopening strategies. Mariano said the COVID-19 crisis could accelerate the adoption of a global healthcare surveillance system like contact tracing that will enable countries to quickly analyze, assess and respond to outbreaks.

“The power of digital solutions to minimize the impact of COVID has never been so clear,” he said.

‘Digital front door technologies’ are key

Pravene Nath, global head of digital health strategy at Roche, a biotech company with an office in San Francisco, also cited the explosive growth of telehealth as an indicator of the impact COVID-19 has had on healthcare. While they are instrumental now, Nath also believes digital health tools will last beyond the pandemic.


Nath said the crisis is enabling healthcare systems to readily make a case for “digital front door technologies,” or tools that guide patients to the right place before stepping into a healthcare facility. A digital front door can include tools such as acute care telehealth, chatbot assessments, virtual visits, home monitoring and self-monitoring tools.

“I think the disruption here is in the access and utilization of traditional care models that’s heightened the value of digitally-driven chronic disease care management, such as platforms like MySugr for diabetes management,” he said. MySugr is an app-based digital diabetes management platform that integrates with glucose-monitoring devices.

“We think the adoption of these kinds of technologies will accelerate now as a result of the total disruption to physical access to traditional healthcare environments,” he said.

Nath said after the pandemic has passed, healthcare systems that quickly rolled out digital health technologies will need time to assess how to be “good stewards” of that technology and patient data moving forward.

Mobile app use grows

“Digital technologies play an important role in managing the crisis,” said Päivi Sillanaukee, director general of the Finland Ministry of Social Affairs and Health.


Digital health has played a role in keeping patients informed via mobile apps and other online methods. Sillanaukee said having tools that provide patients with reliable, up-to-date information has reduced the number of time-consuming calls to healthcare workers.

Finland has also begun looking into contact tracing tools, although Sillanaukee said she has seen an acceleration in discussions about patient data safety along with the contact tracing discussion.

Pandemic bypasses change management

While the benefits of digital health were evident before the crisis, such as remotely connecting patients to doctors, Benedict Tan, group chief digital strategy officer at Singapore Health Services, said the challenge has long been change management and getting buy-in from providers for digital health tools.


But COVID-19 and social distancing have changed that, suddenly presenting a need for tools such as telehealth, analytics and remote monitoring to help manage patients during the crisis, and they are showing the value of such tools, he said.

“What COVID-19 has done is accelerate, or give motivation, for all of us to work together to leverage and see the benefits of what digital health can bring to society,” he said.


When considering responsible AI, begin with the who

At the 2005 Conference on Neural Information Processing Systems, researcher Hanna Wallach found herself in a unique position—sharing a hotel room with another woman. Actually, three other women to be exact. In the previous years she had attended, that had never been an option because she didn’t really know any other women in machine learning. The group was amazed that there were four of them, among a handful of other women, in attendance. In that moment, it became clear what needed to be done. The next year, Wallach and two other women in the group, Jennifer Wortman Vaughan and Lisa Wainer, founded the Women in Machine Learning (WiML) Workshop. The one-day technical event, which is celebrating its 15th year, provides a forum for women to present their work and seek out professional advice and mentorship opportunities. Additionally, the workshop aims to elevate the contributions of female ML researchers and encourage other women to enter the field. In its first year, the workshop brought together 100 attendees; today, it draws around a thousand.

In creating WiML, the women had tapped into something greater than connecting female ML researchers; they asked whether their machine learning community was behaving fairly in its inclusion and support of women. Wallach and Wortman Vaughan are now colleagues at Microsoft Research, and they’re channeling the same awareness and critical eye to the larger AI picture: Are the systems we’re developing and deploying behaving fairly, and are we properly supporting the people building and using them?

Senior Principal Researchers Jennifer Wortman Vaughan (left) and Hanna Wallach (right), co-founders of the Women in Machine Learning Workshop, bring a people-first approach to their work in responsible AI. The two have co-authored upward of 10 papers together on the topic, and they each co-chair an AI, Ethics, and Effects in Engineering and Research (Aether) working group at Microsoft.

Wallach and Wortman Vaughan each co-chair an AI, Ethics, and Effects in Engineering and Research (Aether) working group—Wallach’s group is focused on fairness, Wortman Vaughan’s on interpretability. In those roles, they help inform Microsoft’s approach to responsible AI, which includes helping developers adopt responsible AI practices with services like Azure Machine Learning. Wallach and Wortman Vaughan have co-authored upward of 10 papers together around the topic of responsible AI. Their two most recent publications in the space address the AI challenges of fairness and interpretability through the lens of one particular group of people involved in the life cycle of AI systems: those developing them.

“It’s common to think of machine learning as a fully automated process,” says Wortman Vaughan. “But people are involved behind the scenes at every step, making decisions about which data to use, what to optimize for, even which problems to solve in the first place, and each of these decisions has the potential to impact lives. How do we empower the people involved in creating machine learning systems to make the best choices?”

Their findings are presented in “Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI” and “Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning.” The publications received ACM CHI Conference on Human Factors in Computing Systems (CHI 2020) best paper recognition and honorable mention, respectively.

A framework for thinking about and prioritizing fairness

When Wallach took the lead on the Aether Fairness working group, she found herself getting the same question from industry colleagues, researchers in academia, and people in the nonprofit sector: Why don’t you just build a software tool that can be integrated into systems to identify issues of unfairness? Press a button, make systems fair. Some people asked in jest; others more seriously. Given the subjective and sociotechnical nature of fairness, there couldn’t be a single tool to address every challenge, and she’d say as much. Underlying the question, though, was a very real truth: Practitioners needed help. During a two-hour car ride while on vacation, Wallach had an aha moment listening to a Hidden Brain podcast episode about checklists. What practitioners wanted was a framework to help them think about and prioritize fairness.

“I’m getting this question primarily from people who work in the technology industry; the main way they know how to ask for structure is to ask for software,” she recalls thinking of the requests for a one-size-fits-all fairness tool. “But what they actually want is a framework.”

Wallach, Wortman Vaughan, Postdoctoral Researcher Luke Stark, and PhD candidate Michael A. Madaio, an intern at the time of the work, set out to determine if a checklist could work in this space, what should be on it, and what kind of support teams wanted in adopting one. The result is a comprehensive and customizable checklist that accounts for the real-life workflows of practitioners, with guidelines and discussion points for six stages of AI development and deployment: envision, define, prototype, build, launch, and evolve.

During the first of two sets of workshops, researchers presented participants with an initial AI fairness checklist culled from existing lists, literature, and knowledge of fairness challenges faced by practitioners. Participants were asked to give item-level feedback using sticky notes and colored dots to indicate edits and difficulty level of accomplishing list items, respectively. The researchers used the input to revise the checklist.

Co-designing is key

AI ethics checklists and principles aren’t new, but in their research, Wallach, Wortman Vaughan, and their team found current guidelines are challenging to execute. Many are too broad, oversimplify complex issues with yes/no–style items, and—most importantly—often appear not to have included practitioners in their design. Which is why co-designing the checklist with people currently on the ground developing AI systems formed the basis of the group’s work.

The researchers conducted semi-structured interviews exploring practitioners’ current approaches to addressing fairness issues and their vision of the ideal checklist. Separately, Wallach, Wortman Vaughan, and others in the Aether Fairness working group had built out a starter checklist culled from existing lists and literature, as well as their own knowledge of fairness challenges faced by practitioners. The researchers presented this initial checklist during two sets of workshops, revising the list after each based on participant input regarding the specific items included. Additionally, the researchers gathered information on anticipated obstacles and best-case scenarios for incorporating such a checklist into workflows, using the feedback, along with that from the semi-structured interviews, to finalize the list. When all was said and done, 48 practitioners from 12 tech companies had contributed to the design of the checklist.

During the process, researchers found that fairness efforts were often led by passionate individuals who felt they were on their own to balance “doing the right thing” with production goals. Participants expressed hope that having an appropriate checklist could empower individuals, support a proactive approach to AI ethics, and help foster a top-down strategy for managing fairness concerns across their companies.

A conversation starter

While offering step-by-step guidance, the checklist is not about rote compliance, says Wallach, and intentionally omits thresholds, specific criteria, and other measures that might encourage teams to blindly check boxes without deeper engagement. Instead, the items in each stage of the checklist are designed to facilitate important conversations, providing an opportunity to express and explore concerns, evaluate systems, and adjust them accordingly at natural points in the workflow. The checklist is a “thought infrastructure”—as Wallach calls it—that can be customized to meet the specific and varying needs of different teams and circumstances.

During their co-design workshops, researchers used a series of storyboards based on participant feedback to further understand the challenges and opportunities involved in incorporating AI fairness checklists into workflows.

And just as the researchers don’t foresee a single tool solving all fairness challenges, they don’t view the checklist as a solo solution. The checklist is meant to be used alongside other methods and resources, they say, including software tools like Fairlearn, the current release of which is being demoed this week at the developer event Microsoft Build. Fairlearn is an open-source Python package that includes a dashboard and algorithms to support practitioners in assessing and mitigating unfairness in two specific scenarios: disparities in the allocation of opportunities, resources, and information offered by their AI systems and disparities in system performance. Before Fairlearn can help with such disparities, though, practitioners have to identify the groups of people they expect to be impacted by their specific system.

The hope is the checklist—with such guidance as “solicit input on system vision and potential fairness-related harms from diverse perspectives”—will aid practitioners in making such determinations and encourage other important conversations.

“We can’t tell you exactly who might be harmed by your particular system and in what way,” says Wallach. “But we definitely know that if you didn’t have a conversation about this as a team and really investigate this, you’re definitely doing it wrong.”

Tackling the challenges of interpreting interpretability

As with fairness, there are no easy answers—and just as many complex questions—when it comes to interpretability.

Wortman Vaughan recalls attending a panel discussion on AI and society in 2016 during which one of the panelists described a future in which AI systems were so advanced that they would remove uncertainty from decision-making. She was confounded and angered by what she perceived as a misleading and irresponsible statement. The uncertainty inherent in the world is baked into any AI systems we build, whether it’s explicit or not, she thought. The panelist’s comment weighed on her mind and was magnified further by current events at the time. The idea of “democratizing AI” was gaining steam, and models were forecasting a Hillary Rodham Clinton presidency, an output many were treating as a done deal. She wondered to the point of obsession, how well do people really understand the predictions coming out of AI systems? A dive into the literature on the ML community’s efforts to make machine learning interpretable was far from reassuring.

“I got really hung up on the fact that people were designing these methods without stopping to define exactly what they mean by interpretability or intelligibility, basically proposing solutions without first defining the problem they were trying to solve,” says Wortman Vaughan.

That definition rests largely on who’s doing the interpreting. To illustrate, Wallach provides the example of a machine learning model that determines loan eligibility: Details regarding the model’s mathematical equations would go a long way in helping an ML researcher understand how the model arrives at its decisions or if it has any bugs. Those same details mean little to nothing, though, to applicants whose goal is to understand why they were denied a loan and what changes they need to make to position themselves for approval.

In their work, Wallach and Wortman Vaughan have argued for a more expansive view of interpretability, one that recognizes that the concept “means different things to different people depending on who they are and what they’re trying to do,” says Wallach.

As ML models continue to be deployed in the financial sector and other critical domains like healthcare and the justice system—where they can significantly affect people’s livelihood and well-being—claiming ignorance of how an AI system works is not an option. While the ML community has responded to this increasing need for techniques that help show how AI systems function, there’s a severe lack of information on the effectiveness of these tools—and there’s a reason for that.

“User studies of interpretability are notoriously challenging to get right,” explains Wortman Vaughan. “Doing these studies is a research agenda of its own.”

Not only does designing such a study entail qualitative and quantitative methods, but it also requires an interdisciplinary mix of expertise in machine learning, including the mathematics underlying ML models, and human–computer interaction (HCI), as well as knowledge of both the academic literature and routine data science practices.

The enormity of the undertaking is reflected in the makeup of the team that came together for the “Interpreting Interpretability” paper. Wallach, Wortman Vaughan, and Senior Principal Researcher Rich Caruana have extensive ML experience; PhD student Harmanpreet Kaur, an intern at the time of the work, has a research focus in HCI; and Harsha Nori and Samuel Jenkins are data scientists who have practical experience building and using interpretability tools. Together, they investigated whether current tools for increasing the interpretability of models actually result in more understandable systems for the data scientists and developers using them.

Three visualization types for model evaluation are output by the popular and publicly available InterpretML implementation of GAMs (top) and the implementation of SHAP in the SHAP Python package (bottom), respectively. Left column: global explanations. Middle column: component (GAMs) or dependence plot (SHAP). Right column: local explanations.

Tools in practice

The study focuses on two popular and publicly available tools, each representative of one of two techniques dominating the space: the InterpretML implementation of GAMs, which uses a “glassbox model” approach, by which models are designed to be simple enough to understand, and the implementation of SHAP in the SHAP Python package, which uses a post-hoc explanation approach for complex models. Each tool outputs three visualization types for model evaluation.

Through pilot interviews with practitioners, the researchers identified six routine challenges that data scientists face in their day-to-day work. The researchers then set up an interview study in which they placed data scientists in context with data, a model, and one of the two tools, assigned randomly. They examined how well 11 practitioners were able to use the interpretability tool to uncover and address the routine challenges.

The researchers found participants lacked an overall understanding of the tools, particularly in reading and drawing conclusions from the visualizations, which contained importance scores and other values that weren’t explicitly explained, causing confusion. Despite this, the researchers observed, participants were inclined to trust the tools. Some came to rely on the visualizations to justify questionable outputs—the existence of the visualizations offering enough proof of the tools’ credibility—as opposed to using them to scrutinize model performance. The tools’ public availability and widespread use also contributed to participants’ confidence in the tools, with one participant pointing to its availability as an indication that it “must be doing something right.”

Following the interview study, the researchers surveyed nearly 200 practitioners, who were asked to participate in an adjusted version of the interview study task. The purpose was to scale up the findings and gain a sense of practitioners’ overall perception and use of the tools. The survey largely confirmed the interview study’s findings that participants had difficulty understanding the visualizations and used them only superficially, but it also revealed a path for future work around tutorials and interactive features to support practitioners in using the tools.

“Our next step is to explore ways of helping data scientists form the right mental models so that they can take advantage of the full potential of these tools,” says Wortman Vaughan.

The researchers conclude that as the interpretability landscape continues to evolve, studies of the extent to which interpretability tools are achieving their intended goals and practitioners’ use and perception of them will continue to be important in improving the tools themselves and supporting practitioners in productively using them.

Putting people first

Fairness and interpretability aren’t static, objective concepts. Because their definitions hinge on people and their unique circumstances, fairness and interpretability will always be changing. For Wallach and Wortman Vaughan, being responsible creators of AI begins and ends with people, with the who: Who is building the AI systems? Who do these systems take power from and give power to? Who is using these systems and why? In their fairness checklist and interpretability tools papers, they and their co-authors look specifically at those developing AI systems, determining that practitioners need to be involved in the development of the tools and resources designed to help them in their work.

By putting people first, Wallach and Wortman Vaughan contribute to a support network that includes resources and also reinforcements for using those resources, whether that be in the form of a community of likeminded individuals like in WiML, a comprehensive checklist for sparking dialogue that will hopefully result in more trustworthy systems, or feedback from teams on the ground to help ensure tools deliver on their promise of helping to make responsible AI achievable.


7 PowerShell courses to help hone skills for all levels of expertise

PowerShell can be one of the most effective tools administrators have for managing Windows systems. But it can be difficult to master, especially when time is limited. An online PowerShell course can expedite this process by prioritizing the most important topics and presenting them in logical order.

Admins have plenty of PowerShell courses from which to choose, offered by well-established vendors. But with so many courses available, it isn’t always clear which ones will be the most beneficial. To help make the course selection process easier, here we offer a sampling of popular PowerShell courses that cater to varying levels of experience.

Windows currently ships with PowerShell 5.1, but PowerShell Core 6 is available for download, and PowerShell 7 is in preview. PowerShell Core is a cross-platform version of PowerShell that runs on multiple OS platforms. It isn’t an upgrade to Windows PowerShell, but a separate application that runs on the same system.

Some of the PowerShell courses listed here, as well as other online classes, specify the PowerShell version on which the course is based. But not all classes offer this information, and some courses provide only a range, such as PowerShell 4 or later. So, before signing up for an online course, be sure to verify the PowerShell version.
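
For reference, one quick way to verify the version before signing up is the built-in $PSVersionTable automatic variable, which works the same way in Windows PowerShell and PowerShell Core:

# Show the PowerShell version on the local machine
$PSVersionTable.PSVersion
# Windows PowerShell reports 5.1.x; PowerShell Core reports 6.x or 7.x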

Learning Windows PowerShell

This popular PowerShell tutorial from Udemy is designed for beginners, targeting systems admins who have no prior PowerShell experience but want to use PowerShell to manage Windows desktops and servers. The course is based on PowerShell 5, but that shouldn’t be an issue when learning basic concepts, which is the primary focus of this tutorial.


The course provides background information about PowerShell and explains how to set up the PowerShell environment, including how to configure the console and work with profiles. The course introduces cmdlets, shows how they’re related to .NET objects and classes, and explains how to build a pipeline using cmdlets and other language elements. With this information, systems admins will have the basics they need to move on to the next topic: PowerShell scripts.

The tutorial on scripting is nearly as extensive as the section on cmdlets. The course examines the details of script elements, such as variables, constants, comparison operators, if statements, looping structures and regular expressions. This is followed by details on PowerShell providers and how to work with files and folders, and then a discussion of administration basics. This course can help provide participants with a solid foundation in PowerShell so they’re ready to take on more advanced topics.
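
As a small taste of the basics this course covers, a pipeline such as the following strings cmdlets together with a comparison operator. This is only an illustrative sketch; the course's own examples will differ.

# List the first five running services, sorted by display name
Get-Service |
    Where-Object { $_.Status -eq 'Running' } |
    Sort-Object DisplayName |
    Select-Object -Property Name, DisplayName, Status -First 5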

Introduction to Windows PowerShell 5.1

This Udemy tutorial is based on PowerShell 5.1, so it’s more current than the previous course. The training is geared toward both beginner PowerShell users and more experienced admins who want to hone their PowerShell skills. The course covers a wide range of topics, from understanding PowerShell syntax to managing Active Directory (AD). Participants who sign up for this course should already know how to run PowerShell, but they don’t need to be advanced users.

The course covers the basics of how to use both the PowerShell console and the Integrated Scripting Environment (ISE). It explains what steps to take to get help and find commands. This is followed by an in-depth look at the PowerShell command syntax. The material also covers objects and their properties and methods, as well as an explanation of how to build a PowerShell pipeline.

Participants can move on to the section on scripting, which starts with a discussion on arrays and variables. Users then learn how to build looping structures and conditional statements, and how to use PowerShell functions. This course also demonstrates how to use PowerShell to work with AD, covering such tasks as installing and configuring server roles.

PowerShell version 5.1 and 6: Step-by-Step

This tutorial, which is one of Udemy’s highest rated PowerShell courses, is geared toward admins who want to learn how to use PowerShell to perform management tasks. The course is broad in scope and covers both PowerShell 5.1 and PowerShell Core 6. Users who sign up for this course should have a basic understanding of the Windows OS — both desktop and server versions.

Because the course covers so many topics, it’s longer than the previous two training sessions and goes into more detail. It explains the differences between PowerShell and the Windows Command Prompt, how to determine the PowerShell version and how to work with aliases. The course also examines the steps necessary to run unsupported commands and create PowerShell transcripts.

This PowerShell tutorial also examines more advanced topics, such as working with object members, creating hash tables and managing execution policy levels. This is followed by a detailed discussion about the Common Information Model (CIM) and how it can manage hard drives and work with BIOS. In addition, participants will learn how to create profile scripts, functions and modules, as well as how to use script parameters and to pause script execution. Because the course is so comprehensive, admins should come away with a solid understanding of how to use PowerShell to script their daily management tasks.
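
To illustrate the kind of CIM queries the course discusses, the standard Win32_BIOS and Win32_DiskDrive classes can be inspected with Get-CimInstance. This is an illustrative sketch, not the course's own material.

# Query BIOS details through CIM
Get-CimInstance -ClassName Win32_BIOS | Select-Object Manufacturer, SMBIOSBIOSVersion, ReleaseDate

# Query physical disks and report their size in gigabytes
Get-CimInstance -ClassName Win32_DiskDrive |
    Select-Object Model, @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB) } }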

Udemy course pricing

Udemy distinguishes between personal and business users. For personal users, Udemy charges by the course, with prices for PowerShell courses ranging between $25 and $200. Udemy also offers personal users a 30-day, money-back guarantee.

Udemy also offers two business plans that provide unlimited access to its courses. The Team plan supports between five and 20 users and costs $240 per user, per year. It also comes with a 14-day trial. Contact Udemy for details regarding its Enterprise plan, which supports 21 or more users. Udemy also offers courses to help users prepare for IT certifications, supporting such programs as Cisco CCNA, Oracle Certification and Microsoft Certification.

Windows PowerShell: Essentials

Pluralsight offers a variety of PowerShell courses, as well as learning paths. A path is a series of related courses that provide users with a strategy for learning a specific technology. This path includes six courses ranging from beginner to advanced user. Participants should come away with a strong foundation in how to create PowerShell scripts that automate administrative processes. Before embarking on this path, however, they should have a basic understanding of Windows networking and troubleshooting.

The beginning courses on this path provide users with the information they need to start working with PowerShell, even if they’re first-timers. Users will learn how to use cmdlets, work with objects and get help when they need it. These courses also introduce concepts such as aliases, providers and mapping network drives. The intermediate tutorials build on the beginning courses by explaining how to work with objects and the PowerShell pipeline, and how to format output. The intermediate courses also focus on using PowerShell in a networked environment, covering such topics as CIM and Windows Management Instrumentation.

The advanced courses build on the beginning and intermediate tutorials by focusing on automation scripts. Admins will learn how to use PowerShell scripting to automate their routine processes and tasks. They’ll also learn how to troubleshoot problems in their scripts if PowerShell exhibits unusual behavior. The path approach might not be for everyone, but for those ready to invest their time in a comprehensive program, this path could prove a valuable resource.

Practical Desired State Configuration

Those not suited to a learning path can choose from a variety of other Pluralsight courses that address specific technologies. This highly rated course caters to advanced users and provides real-world examples of how to use PowerShell to write Desired State Configurations (DSCs). Those interested in the course should be familiar with PowerShell and DSC principles.

DSC refers to a new way of managing Windows Server that shifts the focus from point-and-click GUIs to infrastructure as code. To achieve this, admins can use PowerShell to build DSCs. This process is the focus of this course, which covers several advanced topics ranging from writing configurations with custom resources to building dynamic collector configurations.
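
For readers new to the idea, here is a minimal sketch of what a DSC configuration looks like, assuming the PSDesiredStateConfiguration module that ships with Windows PowerShell; the course goes much further, into custom resources and collector configurations.

Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        # Declare the desired state: the Web-Server role must be present
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

# Compile the configuration to a MOF file and apply it
WebServerBaseline -OutputPath 'C:\DSC'
Start-DscConfiguration -Path 'C:\DSC' -Wait -Verbose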

The tutorial demonstrates how to use custom resources in a configuration and offers an in-depth discussion of securing DSC operations. Participants then learn how to use the DSC model to configure and manage AD, covering such topics as building domains and creating users and groups. The course demonstrates how to set up Windows event forwarding. Although not everyone is looking for such advanced topics, for some users, this course might be just what they need to progress their PowerShell skills.

Pluralsight pricing

Pluralsight doesn’t charge by the course, but rather it offers three personal plans and two business plans. The personal plans start at $299 per year, and the business plans start at $579 per user, per year. All plans include access to the entire course library. In addition, Pluralsight offers a 10-day personal free trial and, like Udemy, courses geared toward IT certification.

PowerShell 5 Essential Training

Of the 13 online PowerShell courses offered by LinkedIn Learning — formerly Lynda.com — this is the most popular. The course targets beginner and intermediate PowerShell users who are Windows systems admins. Although the course is based on PowerShell 5, the basic information, as with other courseware written for that version, is still applicable today.

The material covers most of the basics one would expect from a course at this level. It explains how to set up and customize PowerShell, and it introduces admins to cmdlets, their syntax and how to find help. This is followed by a look at installing modules and packages. The course also describes how to use the PowerShell pipeline, covering such topics as working with files and printers, as well as storing data as a webpage.

The course moves on to objects and their properties and methods. Participants can learn how to create scripts that incorporate variables and parameters so they can automate administrative tasks. Participants are also introduced to PowerShell ISE and shown how to use PowerShell remoting to manage multiple systems at once, along with practical examples of administrative operations at scale.
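
As a hedged illustration of that remoting scenario, Invoke-Command fans a script block out to several machines at once and returns the results with the source computer attached. The computer names below are hypothetical.

# Check the Windows Time service on two servers in one call
Invoke-Command -ComputerName 'SRV01', 'SRV02' -ScriptBlock {
    Get-Service -Name W32Time
} | Select-Object PSComputerName, Name, Status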

PowerShell: Scripting for Advanced Automation

This course, which is also offered by LinkedIn Learning, focuses on automating advanced administrative operations in a Windows network. Those planning to take the course should have a strong foundation in managing Windows environments. As its name suggests, the course is geared toward advanced users.

After a brief introduction, the course jumps into DSC automation, providing an overview of DSC and explaining how to set up DSCs. Users can learn how to work with DSC resources, push DSCs and create pull configurations. The course then moves on to Just Enough Administration, explaining JEA concepts and best practices. In this part of the course, participants learn how to create role capability files and JEA session configurations, as well as how to register JEA endpoints.
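
That JEA workflow roughly maps to three built-in cmdlets. The sketch below uses hypothetical paths, group names and endpoint names; the course covers the details and the design decisions behind them.

# 1. Create a role capability file that exposes only a small set of cmdlets
New-PSRoleCapabilityFile -Path 'C:\Program Files\WindowsPowerShell\Modules\MaintJEA\RoleCapabilities\Maintenance.psrc' `
    -VisibleCmdlets 'Get-Service', 'Restart-Service'

# 2. Create a session configuration that maps an AD group to that role
New-PSSessionConfigurationFile -Path 'C:\JEA\Maintenance.pssc' -SessionType RestrictedRemoteServer `
    -RoleDefinitions @{ 'CONTOSO\MaintenanceOps' = @{ RoleCapabilities = 'Maintenance' } }

# 3. Register the JEA endpoint so operators can connect to it
Register-PSSessionConfiguration -Name 'Maintenance' -Path 'C:\JEA\Maintenance.pssc' -Force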

The final section of the tutorial describes how to troubleshoot PowerShell scripts. The discussion begins with an overview of PowerShell workflows and examines the specifics of troubleshooting PowerShell in both the console and ISE. The section ends with information about using the PSScriptAnalyzer tool for quality control. As with any advanced course, not all users will benefit from this information. But the tutorial could provide a valuable resource for admins looking to refine their PowerShell skills.
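
For reference, PSScriptAnalyzer is distributed as a module on the PowerShell Gallery. A basic run against a script looks like the following; the script path here is just an example.

# Install the analyzer for the current user and lint a script for warnings and errors
Install-Module -Name PSScriptAnalyzer -Scope CurrentUser
Invoke-ScriptAnalyzer -Path .\MyScript.ps1 -Severity Warning, Error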

LinkedIn Learning pricing

LinkedIn Learning sells courses individually, offers a one-month free trial and provides both personal and business plans. Individual PowerShell courses cost between $30 and $45, and individual subscription plans start at $20 per month. Contact LinkedIn Learning regarding business plans. LinkedIn Learning also offers courses aimed at IT certifications.


What are Windows virtualization-based security features?

Windows administrators must maintain constant vigilance to prevent a vulnerability from crippling their systems or exposing data to threat actors. For shops that use Hyper-V, Microsoft offers another layer of protection through its virtualization-based security.

Virtualization-based security uses Hyper-V and the machine’s hardware virtualization features to isolate and protect an area of system memory that runs the most sensitive and critical parts of the OS kernel and user modes. Once deployed, these protected areas can guard other kernel and user-mode instances.

Virtualization-based security effectively reduces the Windows attack surface, so even if a malicious actor gains access to the OS kernel, the protected content can prevent code execution and the access of secrets, such as system credentials. In theory, these added protections would prevent malware attacks that use kernel exploits from gaining access to sensitive information.

Code examining, malware prevention among key capabilities

Virtualization-based security is a foundation technology and must be in place before adopting a range of advanced security features in Windows Server. One example is Hypervisor-Enforced Code Integrity (HVCI), which examines code — such as drivers — and ensures the kernel mode drivers and binaries are signed before they load into memory. Unsigned content gets denied, reducing the possibility of running malicious code.

Other advanced security capabilities that rely on virtualization-based security include Windows Defender Credential Guard, which prevents malware from accessing credentials, and the ability to create virtual trusted platform modules (TPMs) for shielded VMs.
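
For admins who want to see whether these protections are active on a given host, the documented Win32_DeviceGuard CIM class reports what is configured and running; the following is a quick check, with output values that vary by Windows version.

# Report virtualization-based security status and the security services running
# (SecurityServicesRunning: 1 = Credential Guard, 2 = HVCI)
Get-CimInstance -ClassName Win32_DeviceGuard -Namespace root\Microsoft\Windows\DeviceGuard |
    Select-Object VirtualizationBasedSecurityStatus, SecurityServicesConfigured, SecurityServicesRunning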

In Windows Server 2019, Microsoft expanded its shielded VMs feature beyond the Windows platform to cover Linux workloads running on Hyper-V, preventing data leakage both when the VM is static and when it moves to another Hyper-V host.

New in Windows Server 2019 is a feature called host key attestation, which uses asymmetric key pairs to authenticate hosts covered by the Host Guardian Service in what is described as an easier deployment method by not requiring an Active Directory trust arrangement.

What are the virtualization-based security requirements?

Virtualization-based security has numerous requirements. It’s important to investigate the complete set of hardware, firmware and software requirements before adopting virtualization-based security. Missing any requirement may make it impossible to enable virtualization-based security and can compromise the system security features that depend on virtualization-based security support.

At the hardware level, virtualization-based security needs a 64-bit processor with virtualization extensions (Intel VT-x and AMD-V) and second-level address translation as Extended Page Tables or Rapid Virtualization Indexing. I/O virtualization must be supported through Intel VT-d or AMD-Vi. The server hardware must include TPM 2.0 or better.

System firmware must support the Windows System Management Mode Security Mitigations Table specification. Unified Extensible Firmware Interface must support memory reporting features such as the UEFI v2.6 Memory Attributes Table. Support for Secure Memory Overwrite Request v2 will inhibit in-memory attacks. All drivers must be compatible with HVCI standards.
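
A few of these requirements can be checked from PowerShell. The following is a sketch covering only the TPM and Secure Boot portions of the list, and it assumes an elevated session on a UEFI system.

# TPM presence and specification version (look for 2.0 in SpecVersion)
Get-CimInstance -Namespace root\cimv2\Security\MicrosoftTpm -ClassName Win32_Tpm |
    Select-Object IsEnabled_InitialValue, IsActivated_InitialValue, SpecVersion

# Confirm the machine booted with UEFI Secure Boot enabled
Confirm-SecureBootUEFI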


Using wsusscn2.cab to find missing Windows updates

Keeping your Windows Server and Windows desktop systems updated can be tricky, and finding missing patches in conventional ways might not be reliable.

There are a few reasons why important security patches might not get installed. They could be mistakenly declined in Windows Server Update Services or get overlooked in environments that lack an internet connection.

Microsoft provides a Windows Update offline scan file, also known as wsusscn2.cab, to help you check Windows systems for missing updates. The CAB file contains information about most patches for Windows and Microsoft applications distributed through Windows Update.

The challenge with the wsusscn2.cab file is its size. It weighs in at around 650 MB, and distributing it to all the servers to perform a scan can be tricky and time-consuming. This tutorial explains how to avoid those issues and run the scan on all of your servers in a secure and timely manner, using IIS for file transfer instead of SMB or PowerShell sessions.

Requirements for offline scanning

There are some simple requirements to use this tutorial:

  • a server or PC running Windows Server 2012 or newer or Windows 10;
  • a domain account with local administrator on the servers you want to scan; and
  • PowerShell remoting enabled on the servers you want to scan (see the example after this list).
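
On servers where remoting is not already enabled, one way to meet that last requirement is to run the following in an elevated session; in domain environments, remoting is often enabled through Group Policy instead.

# Enable WinRM-based PowerShell remoting on a target server
Enable-PSRemoting -Force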

Step 1. Install IIS

First, we need a web server we can use to distribute the wsusscn2.cab file. There are several ways to copy the file, but they all have different drawbacks.

For example, we could distribute the wsusscn2.cab file with a regular file share, but that requires a double-hop. You could also copy the wsusscn2.cab file over a PowerShell session, but that causes a lot of overhead and is extremely slow for large files. An easier and more secure way to distribute the file is through HTTP and IIS.

Installing on Windows Server

Start PowerShell as admin and type the following to install IIS:

Install-WindowsFeature -name Web-Server -IncludeManagementTools

Installing on Windows 10

Start PowerShell as an admin and type the following to install IIS:

Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebServer

The IIS role is now installed. The default site serves files from the C:\inetpub\wwwroot folder.
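
To confirm the web server responds before distributing the file, a quick local check works. This is a minimal sketch and assumes the default site is still bound to port 80:

# Quick smoke test: the default IIS site should return HTTP status 200 locally
Invoke-WebRequest -Uri "http://localhost" -UseBasicParsing | Select-Object StatusCode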

We can now proceed to download wsusscn2.cab from Microsoft.

Step 2. Download wsusscn2.cab

The link for this file can be tricky to find. You can either download it from Microsoft’s download link (http://go.microsoft.com/fwlink/?linkid=74689) and save it to the IIS web root, or run the following script as admin on the IIS server:

# Default Site path, change if necessary
$IISFolderPath = "C:\inetpub\wwwroot"

# Download wsusscn2.cab
Start-BitsTransfer -Source "http://go.microsoft.com/fwlink/?linkid=74689" -Destination "$IISFolderPath\wsusscn2.cab"

The script downloads the file to the wwwroot folder. We can verify the download by browsing to http://<IIS server>/wsusscn2.cab.

You also need to get the hash value of wsusscn2.cab to verify it. After saving it, run the following PowerShell command to check the file hash:

(Get-FileHash C:\inetpub\wwwroot\wsusscn2.cab).Hash

31997CD01B8790CA68A02F3A351F812A38639FA49FEC7346E28F7153A8ABBA05

Step 3. Run the check on a server

Next, you can use a PowerShell script to download the wsusscn2.cab file and scan a PC or server for missing updates. The script is written to run on Windows Server 2008 and newer to avoid compatibility issues. To do this in a secure and effective manner over HTTP, it computes the file hash of the downloaded wsusscn2.cab file and compares it with the file hash of the CAB file on the IIS server.

We can also use the file hash to see when Microsoft releases a new version of wsusscn2.cab.
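
One way to do that is to periodically re-download the file on the IIS server and compare hashes. This is a minimal (if bandwidth-heavy) sketch; the temporary path is just an example:

# Compare the published wsusscn2.cab against the copy already in wwwroot
$CurrentHash = (Get-FileHash "C:\inetpub\wwwroot\wsusscn2.cab").Hash
Start-BitsTransfer -Source "http://go.microsoft.com/fwlink/?linkid=74689" -Destination "$env:TEMP\wsusscn2.cab"
$LatestHash = (Get-FileHash "$env:TEMP\wsusscn2.cab").Hash

if ($LatestHash -ne $CurrentHash) {
    Write-Output "Microsoft has published a new version of wsusscn2.cab"
}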

Copy and save the following script as Get-MissingUpdates.ps1:

Param(
    [parameter(mandatory)]
    [string]$FileHash,

    [parameter(mandatory)]
    [string]$Wsusscn2Url
)


# Computes the SHA-256 hash of a file and returns it as a hex string
# (PowerShell string comparison is case-insensitive, so letter case does not matter)
Function Get-Hash($Path){

    $Stream = New-Object System.IO.FileStream($Path,[System.IO.FileMode]::Open)

    $StringBuilder = New-Object System.Text.StringBuilder
    $HashCreate = [System.Security.Cryptography.HashAlgorithm]::Create("SHA256").ComputeHash($Stream)
    $HashCreate | Foreach {
        $StringBuilder.Append($_.ToString("x2")) | Out-Null
    }
    $Stream.Close()
    $StringBuilder.ToString()
}

$DataFolder = "$env:ProgramData\WSUS Offline Catalog"
$CabPath = "$DataFolder\wsusscn2.cab"

# Create download dir
mkdir $DataFolder -Force | Out-Null

# Check if cab exists
$CabExists = Test-Path $CabPath


# Compare hashes to decide whether a fresh download is needed
if($CabExists){
    Write-Verbose "Comparing hashes of wsusscn2.cab"

    $HashMismatch = $FileHash -ne (Get-Hash -Path $CabPath)

    if($HashMismatch){
        Write-Warning "Filehash of $CabPath did not match $($FileHash) - downloading"
        Remove-Item $CabPath -Force
    }
    Else{
        Write-Verbose "Hashes matched"
    }
}

# Download wsusscn2.cab if it doesn't exist or the hashes mismatch
if(!$CabExists -or $HashMismatch){
    Write-Verbose "Downloading wsusscn2.cab"
    # Works on Windows Server 2008 as well
    (New-Object System.Net.WebClient).DownloadFile($Wsusscn2Url, $CabPath)

    if($FileHash -ne (Get-Hash -Path $CabPath)){
        Throw "$CabPath did not match $($FileHash)"
    }

}

Write-Verbose "Checking digital signature of wsusscn2.cab"

$CertificateIssuer = "CN=Microsoft Code Signing PCA, O=Microsoft Corporation, L=Redmond, S=Washington, C=US"
$Signature = Get-AuthenticodeSignature -FilePath $CabPath
$SignatureOk = $Signature.SignerCertificate.Issuer -eq $CertificateIssuer -and $Signature.Status -eq "Valid"


If(!$SignatureOk){
    Throw "Signature of wsusscn2.cab is invalid!"
}


Write-Verbose "Creating Windows Update session"
$UpdateSession = New-Object -ComObject Microsoft.Update.Session
$UpdateServiceManager  = New-Object -ComObject Microsoft.Update.ServiceManager 

$UpdateService = $UpdateServiceManager.AddScanPackageService("Offline Sync Service", $CabPath, 1) 

Write-Verbose "Creating Windows Update Searcher"
$UpdateSearcher = $UpdateSession.CreateUpdateSearcher()  
$UpdateSearcher.ServerSelection = 3 # 3 = ssOthers, use the offline scan service registered above
$UpdateSearcher.ServiceID = $UpdateService.ServiceID.ToString()
 
Write-Verbose "Searching for missing updates"
$SearchResult = $UpdateSearcher.Search("IsInstalled=0")

$Updates = $SearchResult.Updates

$UpdateSummary = [PSCustomObject]@{

    ComputerName = $env:COMPUTERNAME    
    MissingUpdatesCount = $Updates.Count
    Vulnerabilities = $Updates | Foreach {
        $_.CveIDs
    }
    MissingUpdates = $Updates | Select Title, MsrcSeverity, @{Name="KBArticleIDs";Expression={$_.KBArticleIDs}}
}

Return $UpdateSummary

Run the script on one of the servers or computers to check for missing updates. To do this, copy the script to the machine and run it with the URL to the wsusscn2.cab file on the IIS server and the hash value from step two:

PS51> Get-MissingUpdates.ps1 -Wsusscn2Url "http://<IIS server>/wsusscn2.cab" -FileHash 31997CD01B8790CA68A02F3A351F812A38639FA49FEC7346E28F7153A8ABBA05

If there are missing updates, you should see output similar to the following:

ComputerName     MissingUpdatesCount Vulnerabilities  MissingUpdates
------------     ------------------- ---------------  --------------
UNSECURESERVER                    14 {CVE-2006-4685, CVE-2006-4686,
CVE-2019-1079, CVE-2019-1079...} {@{Title=MSXML 6.0 RTM Security Updat

If the machine is not missing updates, then you should see this type of output:

ComputerName MissingUpdatesCount Vulnerabilities MissingUpdates
------------ ------------------- --------------- --------------
SECURESERVER                   0

The script gives a summary of the number of missing updates, what those updates are and the vulnerabilities they patch.

This process is a great deal faster than searching for missing updates online. But this manual method is not efficient when checking a fleet of servers, so let’s learn how to run the script on all systems and collect the output.

Step 4. Run the scanning script on multiple servers at once

The easiest way to collect missing updates from all servers with PowerShell is with a PowerShell job. The PowerShell jobs run in parallel on all computers, and you can fetch the results.

On a PC or server, save the file from the previous step to the C drive — or another directory of your choice — and run the following as a user with admin permissions on your systems:

# The servers you want to collect missing updates from
$Computers = @(
        'server1',
        'server2',
        'server3'
)

# These are the arguments that will be sent to the remote servers
$RemoteArgs = @(
    # File hash from step 2
    "31997CD01B8790CA68A02F3A351F812A38639FA49FEC7346E28F7153A8ABBA05",
    "http://$env:COMPUTERNAME/wsusscn2.cab"
)

$Params = @{
    ComputerName = $Computers
    ArgumentList = $RemoteArgs
    AsJob        = $True
    # Filepath to the script on the server/computer you are running this command on
    FilePath = "C:\Scripts\Get-MissingUpdates.ps1"
    # Maximum number of active jobs
    ThrottleLimit = 20
}

$Job = Invoke-Command @Params

# Wait for all jobs to finish
$Job | Wait-Job

# Collect Results from the jobs
$Results = $Job | Receive-Job

# Show results
$Results

This runs the Get-MissingUpdates.ps1 script on all servers in the $Computers variable in parallel to save time and make it easier to collect the results.
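
If you want to keep the results for reporting, the collected summaries can be exported to a file. A minimal sketch, with a hypothetical report path:

# Save a per-server summary of missing update counts (report path is an example)
$Results |
    Select-Object ComputerName, MissingUpdatesCount |
    Export-Csv -Path "C:\Reports\MissingUpdates.csv" -NoTypeInformation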

You should run these PowerShell jobs regularly to catch servers with a malfunctioning Windows Update and to be sure important updates get installed.
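
One way to run the collection regularly is to register a scheduled job on the machine that holds the collection script. This is a minimal sketch and assumes the wrapper script above is saved as C:\Scripts\Collect-MissingUpdates.ps1, a hypothetical path:

# Run the collection wrapper every Monday at 3 AM under the current user
$Trigger = New-JobTrigger -Weekly -DaysOfWeek Monday -At "3:00 AM"
Register-ScheduledJob -Name "CollectMissingUpdates" -FilePath "C:\Scripts\Collect-MissingUpdates.ps1" -Trigger $Trigger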


Epicor ERP system focuses on distribution

Many ERP systems try to be all things to all use cases, but that approach often comes at the cost of heavy customizations.

Some companies are discovering that a purpose-built ERP is a better and more cost-effective bet, particularly for small and midsize companies. One such product is the Epicor ERP system Prophet 21, which is primarily aimed at wholesale distributors.

The functionality in the Epicor ERP system is designed to help distributors run processes more efficiently and make better use of data flowing through the system.

In addition to distribution-focused functions, the Prophet 21 Epicor ERP system includes the ability to integrate value-added services, which could be valuable for distributors, said Mark Jensen, Epicor senior director of product management.

“A distributor can do manufacturing processes for their customers, or rentals, or field service and maintenance work. Those are three areas that we focused on with Prophet 21,” Jensen said.

Prophet 21’s functionality is particularly strong in managing inventory, including picking, packing and shipping goods, as well as receiving and put-away processes.

Specialized functions for distributors

Distribution companies that specialize in certain industries or products have distinct processes, and Prophet 21 builds support for them into its functions, Jensen said. For example, Prophet 21 has functionality designed specifically for tile and slab distributors.

“The ability to be able to work with the slab of granite or a slab of marble — what size it is, how much is left after it’s been cut, transporting that slab of granite or tile — is a very specific functionality, because you’re dealing with various sizes, colors, dimensions,” he said. “Being purpose-built gives [the Epicor ERP system] an advantage over competitors like Oracle, SAP, NetSuite, [which] either have to customize or rely on a third-party vendor to attach that kind of functionality.”

Jergens Industrial Supply, a wholesale supplies distributor based in Cleveland, has improved efficiency and is more responsive to shifting customer demands using Prophet 21, said Tony Filipovic, Jergens Industrial Supply (JIS) operations manager.

We looked at other systems that say they do manufacturing and distribution, but I just don’t feel that that’s the case.
Tony Filipovic, operations manager, Jergens Industrial Supply

“We like Prophet 21 because it’s geared toward distribution and was the leading product for distribution,” Filipovic said. “We looked at other systems that say they do manufacturing and distribution, but I just don’t feel that that’s the case. Prophet 21 is something that’s been top of line for years for resources distribution needs.”

One of the key differentiators for JIS was Prophet 21’s inventory management functionality, which was useful because distributors manage inventory differently than manufacturers, Filipovic said.

“All that functionality within that was key, and everything is under one package,” he said. “So from the moment you are quoting or entering an order to purchasing the product, receiving it, billing it, shipping it and paying for it was all streamlined under one system.”

Another key new feature is an IoT-enabled button similar to Amazon Dash buttons that enables customers to resupply stocks remotely. This allows JIS to “stay ahead of the click” and offer customers lower cost and more efficient delivery, Filipovic said.

“Online platforms are becoming more and more prevalent in our industry,” he said. “The Dash button allows customers to find out where we can get into their process and make things easier. We’ve got the ordering at the point where customers realize that when they need to stock, all they do is press the button and it saves multiple hours and days.”

Epicor Prophet 21 a strong contender in purpose-built ERP

Epicor Prophet 21 is on solid ground with its purpose-built ERP focus, but companies have other options they can look at, said Cindy Jutras, president of Mint Jutras, an ERP research and advisory firm in Windham, NH.

“Epicor Prophet 21 is a strong contender from a feature and function standpoint. I’m a fan of solutions that go that last mile for industry-specific functionality, and there aren’t all that many for wholesale distribution,” Jutras said. “Infor is pretty strong, NetSuite plays here, and then there are a ton of little guys that aren’t as well-known.”

Prophet 21 may take advantage of new cloud capabilities to compete better in some global markets, said Predrag Jakovljevic, principal analyst at Technology Evaluation Centers, an enterprise computing analysis firm in Montreal.

“Of course a vertically-focused ERP is always advantageous, and Prophet 21 and Infor SX.e go head-to-head all the time in North America,” Jakovljevic said. “Prophet 21 is now getting cloud enabled and will be in Australia and the UK, where it might compete with NetSuite or Infor M3, which are global products.”


Cornell researchers call for AI transparency in automated hiring

Cornell University is becoming a hotbed of warnings about automated hiring systems. In two separate papers, researchers have given the systems considerable scrutiny. Both papers cite problems with AI transparency, or the ability to explain how an AI system reaches a conclusion.

Vendors are selling automated hiring systems partly as a remedy to human bias. They also argue the systems can speed up the hiring process and select applicants who will make good employees.

Manish Raghavan, a computer science doctoral student at Cornell who led the most recent study, questions vendors’ claims. If AI is doing a better job than hiring managers, “how do we know that’s the case or when will we know that that’s the case?” he said.

A major thrust of the research is the need for AI transparency. That’s not only needed for the buyers of automated hiring systems, but for job applicants as well.

At Cornell, Raghavan knows students who take AI-enabled tests as part of a job application. “One common complaint that I’ve heard is that it just viscerally feels upsetting to have to perform for a robot,” he said.


A job applicant may have to install an app to film a video interview, play a game that may measure cognitive ability or take a psychometric test that can be used to measure intelligence and personality.

“This sort of feels like they’re forcing you [the job applicant] to invest extra effort, but they’re actually investing less effort into you,” Raghavan said. Rejected applicants won’t know why they were rejected, the standards used to measure their performance, or how they can improve, he said.

Nascent research, lack of regulation

The paper, “Mitigating Bias in Algorithmic Employment Screening: Evaluating Claims and Practices,” is the work of a multidisciplinary team of computer scientists, as well as those with legal and sociological expertise. It argues that HR vendors are not providing insights into automated hiring systems.

One common complaint that I’ve heard is that it just viscerally feels upsetting to have to perform for a robot.
Manish Raghavan, doctoral student in computer science and AI researcher, Cornell University

The researchers looked at the public claims of nearly 20 vendors that sell these systems. Many are startups, although some have been around for more than a decade. They argue that vendors are taking nascent research and translating it into practice “at sort of breakneck pace,” Raghavan said. They’re able to do so because of a lack of regulation.

Vendors can produce data from automated hiring systems that shows how their systems perform in helping achieve diversity, Raghavan said. “Their diversity numbers are quite good,” but they can cherry-pick what data they release, he said. Nonetheless, “it also feels like there is some value being added here, and their clients seem fairly happy with the results.”

But there are two levels of transparency that Raghavan would like to see improve. First, he suggested vendors release internal studies that show the validity of their assessments. The data should include how often vendors run into issues of disparate impact, which refers to a U.S. Equal Employment Opportunity Commission formula for determining whether hiring has a discriminatory impact on a protected group. Under the EEOC’s four-fifths guideline, for example, if 20% of applicants from a protected group are selected versus 30% of applicants from the most-selected group, the resulting ratio of roughly 0.67 falls below the 0.8 threshold and flags potential disparate impact.

A second step for AI transparency involves having third-party independent researchers do some of their own analysis.

Vendors argue that AI systems do a better job than humans in reducing bias. But researchers see a risk that they could embed certain biases against a group of people that won’t be easily discovered unless there’s an understanding for how these systems work.

One problem often cited is that an AI-enabled system can help improve diversity overall yet still discriminate against certain groups or people. New York University researchers recently noted that most of the AI code today is being written by young white males, who may encode their biases.

Ask about the ‘magic fairy dust’

Ben Eubanks, principal analyst at Lighthouse Research & Advisory, believes the Cornell paper should be on every HR manager’s reading list, “not necessarily because it should scare them away but because it should encourage them to ask more questions about the magic fairy dust behind some technology claims.”

“Hiring is and has always been full of bias,” said Eubanks, who studies AI use in HR. “Algorithms are subject to some of those same constraints, but they can also offer ways to mitigate some of the very real human bias in the process.”

But the motivation for employers may be different, Eubanks said.

“Employers adopting these technologies may be more concerned initially with the outcomes — be it faster hiring, cheaper hiring, or longer retention rates — than about the algorithm actually preventing or mitigating bias,” Eubanks said. That’s what HR managers will likely be rewarded on.

In a separate paper, Ifeoma Ajunwa, assistant professor of labor relations, law and history at Cornell University, argued for independent audits and compulsory data retention in her recently published “Automated Employment Discrimination.”

Ajunwa’s paper raises problems with automated hiring, including systems that “discreetly eliminate applicants from protected categories without retaining a record.” 

AI transparency adds confidence

Still, in an interview, Cornell’s Raghavan was even-handed about using AI and didn’t warn users away from automated hiring systems. He can see use cases but believes there is good reason for caution.

“I think what we can agree on is that the more transparency there is, the easier it will be for us to determine when is or is not the right time or the right place to be using these systems,” Raghavan said.

“A lot of these companies and vendors seem well-intentioned — they think what they’re doing is actually very good for the world,” he said. “It’s in their interest to have people be confident in their practices.”


Google-Ascension deal reveals murky side of sharing health data

One of the largest nonprofit health systems in the U.S. made headlines when it was revealed to be sharing patient data with Google under the codename Project Nightingale.

Ascension, a Catholic health system based in St. Louis, partnered with Google to transition the health system’s infrastructure to the Google Cloud Platform, to use the Google G Suite productivity and collaboration tools, and to explore the tech giant’s artificial intelligence and machine learning applications. By doing so, it is giving Google access to patient data, which the search giant can use to inform its own products.

The partnership appears to be technically and legally sound, according to experts. After the news broke, Ascension released a statement saying the partnership is HIPAA-compliant and that a business associate agreement, a contract required by the federal government that spells out each party’s responsibility for protected health information, is in place. Yet reports from The Wall Street Journal and The Guardian about the possible improper transfer of 50 million patients’ data have resulted in an Office for Civil Rights inquiry into the Google-Ascension partnership.

Legality aside, the resounding reaction to the partnership speaks to a lack of transparency in healthcare. Organizations should see the response both as an example of what not to do and as a call to make patients more aware of how their health data is being used, especially as consumer companies known for collecting and profiting from data become their partners.

Partnership breeds legal, ethical concerns

Forrester Research senior analyst Jeff Becker said Google entered into a similar strategic partnership with Mayo Clinic in September, and the coverage was largely positive.


According to a Mayo Clinic news release, the nonprofit academic medical center based in Rochester, Minn., selected Google Cloud to be “the cornerstone of its digital transformation,” and the clinic would use “advanced cloud computing, data analytics, machine learning and artificial intelligence” to improve healthcare delivery.

But Ascension wasn’t as forthcoming with its Google partnership. It was Google that announced its work with Ascension during a quarterly earnings call in July, and Ascension didn’t issue a news release about the partnership until after the news broke.

“There should have been a public-facing announcement of the partnership,” Becker said. “This was a PR failure. Secrecy creates distrust.”

Matthew Fisher, partner at Mirick O’Connell Attorneys at Law and chairman of its health law group, said the outcry over the Google-Ascension partnership was surprising. For years, tech companies have been trying to get access to patient data to help healthcare organizations and, at the same time, develop or refine their existing products, he said.

“I get the sense that just because it was Google that was announced to have been a partner, that’s what drove a lot of the attention,” he said. “Everyone knows Google mostly for purposes outside of healthcare, which leads to the concern of does Google understand the regulatory obligations and restrictions that come to bear by entering the healthcare space?”

Ascension’s statement in response to the situation said the partnership with Google is covered by a business associate agreement — a distinction Fisher said is “absolutely required” before any protected health information can be shared with Google. Parties in a business associate agreement are obligated by federal regulation to comply with the applicable portions of HIPAA, such as its security and privacy rules.

A business associate relationship allows identifiable patient information to be shared and used by Google only under specified circumstances. It is the legal basis for keeping patient data segregated and restricting Google from freely using that data. According to Ascension, the health system’s clinical data is housed within an Ascension-owned virtual private space in Google Cloud, and Google isn’t allowed to use the data for marketing or research.

“Our data will always be separate from Google’s consumer data, and it will never be used by Google for purposes such as targeting consumers for advertising,” the statement said.


But health IT and information security expert Kate Borten believes business associate agreements and the HIPAA privacy rule they adhere to don’t go far enough to ensure patient privacy rights, especially when companies like Google get involved. The HIPAA privacy rule doesn’t require healthcare organizations to disclose to patients who they’re sharing patient data with.

“The privacy rule says as long as you have this business associate contract — and business associates are defined by HIPAA very broadly — then the healthcare provider organization or insurer doesn’t have to tell the plan members or the patients about all these business associates who now have access to your data,” she said.

Chilmark Research senior analyst Jody Ranck said much of the alarm over the Google-Ascension partnership may be misplaced, but it speaks to a growing concern about companies like Google entering healthcare.

Since the Office for Civil Rights is looking into the partnership, Ranck said there is still a question of whether the partnership fully complies with the law. But the bigger question has to do with privacy and security concerns around collecting and using patient data, as well as companies like Google using patient data to train AI algorithms and the potential biases it could create.

All of this starts to feel like a bit of an algorithmic iron cage.
Jody Ranck, senior analyst, Chilmark Research

Ranck believes consumer trust in tech companies is declining, especially as data privacy concerns get more play.

“Now that they know everything you purchase and they can listen in to that Alexa sitting beside your bed at night, and now they’re going to get access to health data … what’s a consumer to do? Where’s their power to control their destiny when algorithms are being used to assign you as a high-, medium-, or low-risk individual, as creditworthy?” Ranck said. “All of this starts to feel like a bit of an algorithmic iron cage.”

A call for more transparency

Partnerships between healthcare organizations and big tech companies such as Google, Amazon, Apple and Microsoft are growing. Like other industries, healthcare organizations are looking to modernize their infrastructure and take advantage of state-of-the-art storage, security and data analytics tools, as well as emerging tech like artificial intelligence.

But for healthcare organizations, partnerships like these have an added complexity — truly sensitive data. Forrester’s Becker said the mistake in the Google-Ascension partnership was the lack of transparency. There was no press release early on announcing the partnership, laying out what information is being shared, how the information will be used, and what outcome improvements the healthcare organization hopes to achieve.

“There should also be assurance that the partnership falls within HIPAA and that data will not be used for advertising or other commercial activities unrelated to the healthcare ambitions stated,” he said.

Fisher believes the Google-Ascension partnership raises questions about what the legal, moral and ethical aspects of these relationships are. While Ascension and Google may have been legally in the right, Fisher believes it’s important to recognize that privacy expectations are shifting, which calls for better consumer education, as well as more transparency around where and how data is being used.

Although he believes it would be “unduly burdensome” to require a healthcare organization to name every organization it shares data with, Fisher said better education on how HIPAA operates and what it allows when it comes to data sharing, as well as explaining how patient data will be protected when shared with a company like Google, could go a long way in helping patients understand what’s happening with their data.

“If you’re going to be contracting with one of these big-name companies that everyone has generalized concerns about with how they utilize data, you need to be ahead of the game,” Fisher said. “Even if you’re doing everything right from a legal standpoint, there’s still going to be a PR side to it. That’s really the practical reality of doing business. You want to be taking as many measures as you can to avoid the public backlash and having to be on the defensive by having the relationship found out and reported upon or discussed without trying to drive that discussion.”
