Epicor ERP system focuses on distribution

Many ERP systems try to be all things to all use cases, but that flexibility often comes at the cost of heavy customization.

Some companies are discovering that a purpose-built ERP is a better and more cost-effective bet, particularly for small and midsize companies. One such product is the Epicor ERP system Prophet 21, which is primarily aimed at wholesale distributors.

The functionality in the Epicor ERP system is designed to help distributors run processes more efficiently and make better use of data flowing through the system.

In addition to distribution-focused functions, the Prophet 21 Epicor ERP system includes the ability to integrate value-added services, which could be valuable for distributors, said Mark Jensen, Epicor senior director of product management.

“A distributor can do manufacturing processes for their customers, or rentals, or field service and maintenance work. Those are three areas that we focused on with Prophet 21,” Jensen said.

Prophet 21’s functionality is particularly strong in managing inventory, including picking, packing and shipping goods, as well as receiving and put-away processes.

Specialized functions for distributors

Distribution companies that specialize in certain industries or products have distinct processes, and Prophet 21 builds many of them into its standard functions, Jensen said. For example, Prophet 21 has functionality designed specifically for tile and slab distributors.

“The ability to be able to work with the slab of granite or a slab of marble — what size it is, how much is left after it’s been cut, transporting that slab of granite or tile — is a very specific functionality, because you’re dealing with various sizes, colors, dimensions,” he said. “Being purpose-built gives [the Epicor ERP system] an advantage over competitors like Oracle, SAP, NetSuite, [which] either have to customize or rely on a third-party vendor to attach that kind of functionality.”

Jergens Industrial Supply, a wholesale supplies distributor based in Cleveland, has improved efficiency and is more responsive to shifting customer demands using Prophet 21, said Tony Filipovic, Jergens Industrial Supply (JIS) operations manager.

We looked at other systems that say they do manufacturing and distribution, but I just don’t feel that that’s the case.
Tony Filipovic, operations manager, Jergens Industrial Supply

“We like Prophet 21 because it’s geared toward distribution and was the leading product for distribution,” Filipovic said. “We looked at other systems that say they do manufacturing and distribution, but I just don’t feel that that’s the case. Prophet 21 is something that’s been top of line for years for resources distribution needs.”

One of the key differentiators for JIS was Prophet 21’s inventory management functionality, which was useful because distributors manage inventory differently than manufacturers, Filipovic said.

“All that functionality within that was key, and everything is under one package,” he said. “So from the moment you are quoting or entering an order to purchasing the product, receiving it, billing it, shipping it and paying for it was all streamlined under one system.”

Another key new feature is an IoT-enabled button similar to Amazon Dash buttons that enables customers to resupply stocks remotely. This allows JIS to “stay ahead of the click” and offer customers lower cost and more efficient delivery, Filipovic said.

“Online platforms are becoming more and more prevalent in our industry,” he said. “The Dash button allows customers to find out where we can get into their process and make things easier. We’ve got the ordering at the point where customers realize that when they need to stock, all they do is press the button and it saves multiple hours and days.”

Epicor Prophet 21 a strong contender in purpose-built ERP

Epicor Prophet 21 is on solid ground with its purpose-built ERP focus, but companies have other options they can look at, said Cindy Jutras, president of Mint Jutras, an ERP research and advisory firm in Windham, NH.

“Epicor Prophet 21 is a strong contender from a feature and function standpoint. I’m a fan of solutions that go that last mile for industry-specific functionality, and there aren’t all that many for wholesale distribution,” Jutras said. “Infor is pretty strong, NetSuite plays here, and then there are a ton of little guys that aren’t as well-known.”

Prophet 21 may take advantage of new cloud capabilities to compete better in some global markets, said Predrag Jakovljevic, principal analyst at Technology Evaluation Centers, an enterprise computing analysis firm in Montreal.

“Of course a vertically-focused ERP is always advantageous, and Prophet 21 and Infor SX.e go head-to-head all the time in North America,” Jakovljevic said. “Prophet 21 is now getting cloud enabled and will be in Australia and the UK, where it might compete with NetSuite or Infor M3, which are global products.”

Why reducing hiring bias isn’t easy

Hiring bias — often unconscious — is one very pervasive issue that gets in the way of diversity and inclusion initiatives.

As the gatekeepers to employment, HR teams must recognize their biases. Recruiters form both conscious and unconscious biases when seeking out new candidates and may miss out on hiring someone who would excel within the company. Vendors promise AI technology and software can help fix this hiring bias, but it may not always help solve the problem.

In this Q&A, Stacia Sherman Garr, co-founder and head analyst at RedThread Research, discusses her thoughts on diversity and inclusion obstacles, why hiring biases exist and whether AI can fix this issue. 

Can you define diversity and inclusion?

Stacia Sherman Garr

Stacia Sherman Garr: Diversity is a variation in backgrounds, beliefs and experiences, with respect to gender, race, ethnicity, language and mental abilities. In a simplified version, there is visible diversity [such as gender], which we tend to see as being legally protected; and then invisible diversity [such as sexual orientation and class], which tends to be more of those things that are not immediately obvious but still influences people’s perspectives.

Inclusion is about allowing the equitable and fair distribution of resources within an organization, which allows all employees to be appreciated for their unique contributions and to feel they belong to the formal as well as informal networks within it.

What gets in the way of companies reaching this goal?

It’s not enough to just have an employee resource group; you actually need to be reinforcing a diverse, inclusive way of thinking.
Stacia Sherman Garr, co-founder and head analyst, RedThread Research

Garr: One of the biggest things is that, fundamentally, this is about the culture that an organization has, and changing a culture is difficult. When you think about it from that perspective, it means that diversity and inclusion can’t just be an on-the-side initiative. It’s not enough to just have an employee resource group; you actually need to be reinforcing a diverse, inclusive way of thinking. This factors into how HR teams acquire talent, promote employees, give feedback and coach people, so it is systemic change and that’s why it is hard. 

Why is there a sudden demand for diversity and inclusion?

Garr: Demographics, particularly in the United States, have been changing. We see an age demographic shift and an ethnicity demographic shift. Younger people, or people from underrepresented groups, are more likely to bring up their perspectives. They’re more likely to push issues and are less afraid to do it in a work context than the previous generations.

The second reason is that overall work has globalized, so we’re seeing workplaces becoming more multicultural with more freelancers or virtual work. 

Third, there is a relationship between diversity and inclusion and business outcomes. There’s research that shows this, so organizations are seeing the connection between diversity and inclusion and financial goals.   

Finally, #MeToo brought this all to a head. It wasn’t just that #MeToo was about sexual harassment. What #MeToo underscored for HR leaders, in particular, was the role of culture with regard to people being treated fairly and equitably in our workplace. If you look at the numbers of underrepresented groups in leadership, you see we’ve been working on this problem for years but things haven’t shifted. The combination of the heightened focus and frustration, plus the technological advances [such as in AI] are why we’ve seen a lot of the technology come to the fore.

What are your thoughts on the hiring bias as part of diversity and inclusion?

Garr: People aren’t necessarily conscious of bias, which makes this issue pretty complex. It could certainly be that every human has a bias toward people who look and talk and think like they do and that can seep into the hiring processes, whether it’s the recruiter or the hiring manager. It can be conscious in that they may think, “I have seen X type of person from Y university succeed here, therefore I think that that’s what we need in this role,” where that may not be the case in terms of those being the necessary factors to result in success.

When we then translate that into this conversation about AI, it’s important to note that AI is just advanced math. All it’s doing is pattern recognition and learning from previous patterns. If we as humans have had a challenge with bias in the past, then a technology whose sole job is to look at our patterns of the past and deduce from that is going to extrapolate some level of bias, if it goes unchecked.

Software vendors market products that promise to “fix” hiring bias. What are your thoughts on that?

Garr: I do not believe it’s possible to completely eradicate all biases. I think that there are ways to reduce the biases that exist. With additional analysis capability we can see some of the things that humans have done that have bias within them, or indicated bias on a systemic level, and address those. It should be the obligation of vendors to work on this, and they should be very transparent about what they’re doing to address it. But I am skeptical of any vendor that says it has wholly eliminated bias.

In 2018, Amazon had to scrap an AI recruiting tool that showed bias against women. What does that say about the use of AI to improve diversity?

Garr: What it tells us first and foremost is that we shouldn’t allow engineers to run rampant without HR intervention. In that instance, HR was actually not at all a part of that technology development; it was done by a bunch of engineers in the business. It also underscores the importance of oversight and testing. Once developers have built something, it needs to go through rigorous tests to understand, “Is there bias here, and if there is, how can we address it?” Technology shows that there’s been an event in the past and we need to have some way of foreseeing that for the future. Unfortunately, it’s been painted as such, but I do not think it should be used to cast all AI in a negative light.

How important is cognitive diversity today in relation to this hiring bias?

Garr: It is a way for us to bring in different perspectives, to push for new ideas and to build things that haven’t been there before. Much innovation comes from the intersection of more than two existing knowledge bases that people haven’t combined before, and that is fundamentally cognitive diversity. But it’s important to not forget other diversity. There’s actually been some studies that show just by having visibly diverse individuals, it actually forces the other people in the group to take a different perspective. So cognitive diversity is important, but we shouldn’t forget visible diversity as well.

Let’s say companies have hired these diverse employees; what comes next in terms of inclusion?

Garr: Making sure that the organization has a culture that’s open to diverse perspectives. There are a number of organizations that are using organizational network analysis to understand how to connect people into the organization more effectively. And then there’s all sorts of tools that are available. Historical diversity tools such as employee resource groups or play action committees can help with some of this.

Then there is taking a hard look at all the various talent, practices and processes, and adjusting the organization’s approach so that they are open and aware of what’s necessary from an inclusion perspective. Heightening people’s awareness through all the different practices of an organization of what they need to do to be inclusive is really important.

How to repair Windows Server using Windows SFC and DISM

Over time, system files in a Windows Server installation might require a fix. You can often repair the operating system without taking the server down by using Windows SFC or the more robust and powerful Deployment Image Servicing and Management commands.

Windows System File Checker (SFC) and Deployment Image Servicing and Management (DISM) are administrative utilities that can alter system files, so they must be run in an administrator command prompt window.

Start with Windows SFC

The Windows SFC utility scans and verifies version information, file signatures and checksums for all protected system files on Windows desktop and server systems. If the command discovers missing protected files or alterations to existing ones, Windows SFC will attempt to replace the altered files with a pristine version from the %systemroot%\System32\dllcache folder.

The system logs all activities of the Windows SFC command to the %Windir%\Logs\CBS\CBS.log file. If the tool reports any nonrepairable errors, then you’ll want to investigate further. Search for the word corrupt to find most problems.
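
One way to do that search, shown here as a minimal example (the output file name is only an illustration), is to filter the log with findstr and review the matching lines in a text file:

C:\Windows\System32>findstr /i /c:"corrupt" %Windir%\Logs\CBS\CBS.log > "%UserProfile%\Desktop\sfc-corrupt.txt"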

Windows SFC command syntax

Open a command prompt with administrator rights and run the following command to start the file checking process:

C:\Windows\System32>sfc /scannow

The /scannow parameter instructs the command to run immediately. It can take some time to complete — up to 15 minutes on servers with large data drives is not unusual — and usually consumes 60%-80% of a single CPU for the duration of its execution. On servers with more than four cores, it will have a slight impact on performance.

The Windows SFC /scannow command examines protected system files for errors.
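
If you want to check file integrity without attempting any repairs, the utility also supports a verify-only pass that uses the same syntax:

C:\Windows\System32>sfc /verifyonly

This scan logs its findings to the same CBS.log file but does not replace any files.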

There are times Windows SFC cannot replace altered files. This does not always indicate trouble. For example, recent Windows builds have included graphics driver data that was reported as corrupt, but the problem is with Windows file data, not the files themselves, so no repairs are needed.

If Windows SFC can’t fix it, try DISM

The DISM command is more powerful and capable than Windows SFC. It also checks a different file repository — the %windir%\WinSxS folder, aka the “component store” — and is able to obtain replacement files from a variety of potential sources. Better yet, the command offers a quick way to check an image before attempting to diagnose or repair problems with that image.

Run DISM with the following parameters:

C:\Windows\System32>dism /Online /Cleanup-Image /CheckHealth

Even on a server with a huge system volume, this command usually completes in less than 30 seconds and does not tax system resources. Unless it finds some kind of issue, the command reports back “No component store corruption detected.” If the command finds a problem, this version of DISM reports only that corruption was detected, but no supporting details.

Corruption detected? Try ScanHealth next

If DISM finds a problem, then run the following command:

C:\Windows\System32>dism /Online /Cleanup-Image /ScanHealth

This more elaborate version of the DISM image check will report on component store corruption and indicate if repairs can be made.

If corruption is found and it can be repaired, it’s time to fire up the /RestoreHealth directive, which can also work from the /online image, or from a different targeted /source.

Run the following commands using the /RestoreHealth parameter to replace corrupt component store entries:

C:\Windows\System32>dism /Online /Cleanup-Image /RestoreHealth

C:\Windows\System32>dism /source: /Cleanup-Image /RestoreHealth

You can drive file replacement from the running online image easily with the same syntax as the preceding commands. But it often happens that local copies aren’t available or are no more correct than the contents of the local component store itself. In that case, use the /source directive to point to a Windows image file — a .wim file or an .esd file — or a known, good, working WinSxS folder from an identically configured machine — or a known good backup of the same machine to try alternative replacements.

By default, the DISM command will also try downloading components from the Microsoft download pages; this can be turned off with the /LimitAccess parameter. For details on the /source directive syntax, the TechNet article “Repair a Windows Image” is invaluable.
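
As a sketch of that /source syntax, the following command points DISM at the first image inside an install.wim and uses /LimitAccess to keep it from reaching out to Microsoft’s online sources; the drive letter and image index are placeholders for your own installation media:

C:\Windows\System32>dism /Online /Cleanup-Image /RestoreHealth /Source:WIM:D:\sources\install.wim:1 /LimitAccess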

DISM is a very capable tool well beyond this basic image repair maneuver. I’ve compared it to a Swiss army knife for maintaining Windows images. Windows system admins will find DISM to be complex and sometimes challenging but well worth exploring.

Gartner Names Microsoft a Leader in the 2019 Enterprise Information Archiving (EIA) Magic Quadrant

We often hear from customers about the explosion of data, and the challenge this presents for organizations in remaining compliant and protecting their information. We’ve invested in capabilities across the landscape of information protection and information governance, inclusive of archiving, retention, eDiscovery and communications supervision. In Gartner’s annual Magic Quadrant for Enterprise Information Archiving (EIA), Microsoft was named a Leader again in 2019.

According to Gartner, “Leaders have the highest combined measures of Ability to Execute and Completeness of Vision. They may have the most comprehensive and scalable products. In terms of vision, they are perceived to be thought leaders, with well-articulated plans for ease of use, product breadth and how to address scalability.” We believe this recognition represents our ability to provide best-in-class protection and deliver on innovations that keep pace with today’s compliance needs.

This recognition comes at a great point in our product journey. We are continuing to invest in solutions that are integrated into Office 365 and address information protection and information governance needs of customers. Earlier this month, at our Ignite 2019 conference, we announced updates to our compliance portfolio including new data connectors, machine learning powered governance, retention, discovery and supervision – and innovative capabilities such as threading Microsoft Teams or Yammer messages into conversations, allowing you to efficiently review and export complete dialogues with context, not just individual messages. In customer conversations, many of them say these are the types of advancements that are helping them be more efficient with their compliance requirements, without impacting end-user productivity.

Learn more

Read the complimentary report for the analysis behind Microsoft’s position as a Leader.

For more information about our Information Archiving solution, visit our website and stay up to date with our blog.

Gartner Magic Quadrant for Enterprise Information Archiving, Julian Tirsu, Michael Hoeck, 20 November 2019.

*This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft.

Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

Kasten backup aims for secure Kubernetes protection

People often talk about Kubernetes “Day 1,” when you get the platform up and running. Now Kasten wants to help with “Day 2.”

Kasten’s K10 is a data management and backup platform for Kubernetes. The latest release, K10 2.0, focuses on security and simplicity.

K10 2.0 includes support for Kubernetes authentication, role-based access control, OpenID Connect, AWS Identity and Access Management roles, customer-managed keys, and integrated encryption of artifacts at rest and in flight.

“Once you put data into storage, the Day 2 operations are critical,” said Krishnan Subramanian, chief research advisor at Rishidot Research. “Day 2 is as critical as Day 1.”

Day 2 — which includes data protection, mobility, backup and restore, and disaster recovery — is becoming a pain point for Kubernetes users, Kasten CEO Niraj Tolia said.

“In 2.0, we are focused on making Kubernetes backup easy and secure,” Tolia said.

Other features of the new Kasten backup software, which became generally available earlier in November, include a Kubernetes-native API, auto-discovery of the application environment, policy-driven operations, multi-tenancy support, and advanced logging and monitoring. Kasten backup enables teams to operate their environments while supporting developers’ ability to use tools of their choice, according to the vendor.

Kasten K10 provides data management and backup for Kubernetes.

Kasten backup eyes market opportunity

Kasten, which launched its original product in December 2017, generally releases an update to its customers every two weeks. An update that’s not as major as 2.0 typically includes bug fixes, new features and increased depth in current features. Tolia said there were 55 releases between 1.0 and 2.0.

Day 2 is as critical as Day 1.
Krishnan Subramanian, founder and chief research advisor, Rishidot Research

Backup for container storage has become a hot trend in data protection. Kubernetes specifically is an open source system used to manage containers across private, public and hybrid cloud environments. Kubernetes can be used to manage microservice architectures and is deployable on most cloud providers.

“Everyone’s waking up to the fact that this is going to be the next VMware,” as in, the next infrastructure of choice, Tolia said.

Kubernetes backup products are popping up, but it looks like Kasten is a bit ahead of its time, Rishidot’s Subramanian said. He said he is seeing more enterprises using Kubernetes in production, for example, in moving legacy workloads to the platform, and that makes backup a critical element.

“Kubernetes is just starting to take off,” Subramanian said.

Kubernetes backup “has really taken off in the last two or three quarters,” Tolia said.

Subramanian said he is starting to see legacy vendors such as Dell EMC and NetApp tackling Kubernetes backup, as well as smaller vendors such as Portworx and Robin. He said Kasten had needed stronger security but caught up with K10 2.0. Down the road, he said he will look for Kasten to improve its governance and analytics.

Tolia said Kasten backup stands out because it’s “purpose-built for Kubernetes” and extends into multilayered data management.

In August, Kasten, which is based in Los Altos, Calif., closed a $14 million Series A funding round, led by Insight Partners. Tolia did not give Kasten’s customer count but said it has deployments across multiple continents.

How to achieve explainability in AI models

When machine learning models deliver problematic results, it can often happen in ways that humans can’t make sense of — and this becomes dangerous when the model’s limitations aren’t understood, particularly for high-stakes decisions. Without straightforward and simple tools that highlight explainability in AI models, organizations will continue to struggle in implementing AI algorithms. Explainable AI refers to the process of making it easier for humans to understand how a given model generates the results it does and planning for cases when the results should be second-guessed.

AI developers need to incorporate explainability techniques into their workflows as part of their overall modeling operations. AI explainability can refer to the process of creating algorithms for teasing apart how black box models deliver results or the process of translating these results to different types of people. Data science managers working on explainable AI should keep tabs on the data used in models, strike a balance between accuracy and explainability, and focus on the end user.

Opening the black box

Traditional rule-based AI systems built explainability into the model itself, since humans typically handcrafted the rules that mapped inputs to outputs. But deep learning techniques that use semi-autonomous neural network models can’t readily show how a model’s results map to an intended goal.

Researchers are working to build learning algorithms that generate explainable AI systems from data. Currently, however, most of the dominant learning algorithms do not yield interpretable AI systems, said Ankur Taly, head of data science at Fiddler Labs, an explainable AI tools provider.

“This results in black box ML techniques, which may generate accurate AI systems, but it’s harder to trust them since we don’t know how these systems’ outputs are generated,” he said. 

AI explainability often describes post-hoc processes that attempt to explain the behavior of AI systems, rather than alter their structure. Other machine learning model properties like accuracy are straightforward to measure, but there are no corresponding simple metrics for explainability. Thus, the quality of an explanation or interpretation of an AI system needs to be assessed in an application-specific manner. It’s also important for practitioners to understand the assumptions and limitations of the techniques they use for implementing explainability.

“While it is better to have some transparency rather than none, we’ve seen teams fool themselves into a false sense of security by wiring an off-the-shelf technique without understanding how the technique works,” Taly said. 

Start with the data

The results of a machine learning model could be explained by the training data itself, or how a neural network interprets a dataset. Machine learning models often start with data labeled by humans. Data scientists can sometimes explain the way a model is behaving by looking at the data it was trained on.

“What a particular neural network derives from a dataset are patterns that it finds that may or may not be obvious to humans,” said Aaron Edell, director of applied AI at AI platform Veritone.

But it can be hard to understand what good data looks like. Biased training data can show up in a variety of ways. A machine learning model trained to identify sheep only from pictures of farms might misinterpret sheep in other settings, or mistake white clouds in farm pictures for sheep. Facial recognition software can be trained on the faces of a company’s employees — but if those faces are mostly male or white, the data is biased.

One good practice is to train machine learning models on data that should be indistinguishable from the data the model will be expected to run on. For example, a face recognition model that identified how long Jennifer Aniston appears in every episode of Friends should be trained on frames of actual episodes rather than Google image search results for ‘Jennifer Aniston.’ In a similar vein, it’s OK to train models on publicly available datasets, but generic pre-trained models as a service will be harder to explain and change if necessary.   

Balancing explainability, accuracy and risk

The real problem with implementing explainability in AI is that there are major trade-offs between accuracy, transparency and risk in different types of AI models, said Matthew Nolan, senior director of decision sciences at Pegasystems. More opaque models may be more accurate, but fail the explainability test. Other types of models like decision trees and Bayesian networks are considered more transparent but are less powerful and complex.

“These models are critical today as businesses deal with regulations such as GDPR that require explainability in AI-based systems, but this sometimes will sacrifice performance,” said Nolan.

Focusing on transparency can cost a business, but turning to more opaque models can leave a model unchecked and might expose the consumer, customer and the business to additional risks or breaches.

To address this gap, platform vendors are starting to embed transparency settings into their AI tool sets. This can make it easier for companies to adjust the acceptable opaqueness or transparency thresholds used in their AI models, and it gives enterprises the control to adjust models based on their needs or corporate governance policy so they can manage risk, maintain regulatory compliance and ensure customers a differentiated experience in a responsible way.

Data scientists should also identify when the complexity of new models is getting in the way of explainability. Yifei Huang, data science manager at sales engagement platform Outreach, said there are often simpler models available for attaining the same performance, but machine learning practitioners have a tendency to reach for fancier, more advanced models.

Focus on the user

Explainability means different things to a highly skilled data scientist compared to a call center worker who may need to make decisions based on an explanation. The task of implementing explainable AI is not just to foster trust in explanations but also to help end users make decisions, said Ankkur Teredesai, CTO and co-founder at KenSci, an AI healthcare platform.

Often data scientists make the mistake of thinking about explanations from the perspective of a computer scientist, when the end user is a domain expert who may need just enough information to make a decision. For a model that predicts the risk of a patient being readmitted, a physician may want an explanation of the underlying medical reasons, while a discharge planner may want to know the likelihood of readmission to plan accordingly.

Teredesai said there is still no general guideline for explainability, particularly for different types of users. It’s also challenging to integrate these explanations into the machine learning and end user workflows. End users typically need explanations as possible actions to take based on a prediction rather than just explanation as reasons, and this requires striking the right balance between focusing on prediction and explanation fidelity.

A variety of tools implement explainability on top of machine learning models by generating visualizations and technical descriptions, but these can be difficult for end users to understand, said Jen Underwood, vice president of product management at Aible, an automated machine learning platform. Supplementing visualizations with natural language explanations is a way to partially bridge the data science literacy gap. Another good practice is to directly use humans in the loop to evaluate your explanations to see if they make sense to a human, said Daniel Fagnan, director of applied science on the Zillow Offers Analytics team. This can help lead to more accurate models through key improvements including model selection and feature engineering.

KPIs for AI risks

Enterprises should consider the specific reasons that explainable AI is important when looking towards how to measure explainability and accessibility. Teams should first and foremost establish a set of criteria for key AI risks including robustness, data privacy, bias, fairness, explainability and compliance, said Dr. Joydeep Ghosh, chief scientific officer at AI vendor CognitiveScale. It’s also useful to generate appropriate metrics for key stakeholders relevant to their needs.

External organizations like AI Global can help establish measurement targets that determine acceptable operating values. AI Global is a nonprofit organization that has established the AI Trust Index, a scoring benchmark for explainable AI that works much like a FICO score. This enables firms not only to establish their own best practices, but also to compare the enterprise against industry benchmarks.

When someone offers you a silver bullet explainable AI technology or solution, check whether you can have a common-grounded conversation with the AI that goes deep and scales to the needs of the application.
Mark Stefik, research fellow, PARC, a Xerox Company

Vendors are starting to automate this process with tools for automatically scoring, measuring and reporting on risk factors across the AI operations lifecycle based on the AI Trust Index. Although the tools for explainable AI are getting better, the technology is at an early research stage with proof-of-concept prototypes, cautioned Mark Stefik, a research fellow at PARC, a Xerox Company. There are substantial technology risks and gaps in machine learning and in AI explanations, depending on the application.

“When someone offers you a silver bullet explainable AI technology or solution, check whether you can have a common-grounded conversation with the AI that goes deep and scales to the needs of the application,” Stefik said.

Azure AD + F5—helping you secure all your applications

Howdy folks,

We often hear from our customers about the complexities around providing seamless and secure user access to their applications—from cloud SaaS applications to legacy on-premises applications. Based on your feedback, we’ve worked to securely connect any app, on any cloud or server—through a variety of methods. And today, I’m thrilled to announce our deep integration with F5 Networks that simplifies secure access to your legacy applications that use protocols like header-based and Kerberos authentication.

By centralizing access to all your applications, you can leverage all the benefits that Azure AD offers. Through the F5 and Azure AD integration, you can now protect your legacy-auth based applications by applying Azure AD Conditional Access policies to leverage our Identity Protection engine to detect user risk and sign-in risk, as well as manage and monitor access through our identity governance capabilities. Your users can also gain single sign-on (SSO) and use passwordless authentication to these legacy-auth based applications.

To help you get started, we made it easier to publish these legacy-auth based applications by making the F5 BIG-IP Application Policy Manager available in the Azure AD app gallery. You can learn how to configure your legacy-auth based applications by reviewing our documentation below based on the app type and scenario:

As always, let us know your feedback, thoughts, and suggestions in the comments below, so we can continue to build capabilities that help you securely connect any app, on any cloud, for every user.

Best regards,

Alex Simons (@Alex_A_Simons)

Corporate VP of Program Management

Microsoft Identity Division

Automated incident response in Office 365 ATP now generally available

Security teams responsible for investigating and responding to incidents often deal with a massive number of signals from widely disparate sources. As a result, rapid and efficient incident response continues to be the biggest challenge facing security teams today. The sheer volume of these signals, combined with an ever-growing digital estate of organizations, means that a lot of critical alerts miss getting the timely attention they deserve. Security teams need help to scale better, be more efficient, focus on the right issues, and deal with incidents in a timely manner.

This is why I’m excited to announce the general availability of Automated Incident Response in Office 365 Advanced Threat Protection (ATP). Applying these powerful automation capabilities to investigation and response workflows can dramatically improve the effectiveness and efficiency of your organization’s security teams.

A day in the life of a security analyst

To give you an idea of the complexity that security teams deal with in the absence of automation, consider the following typical workflow that these teams go through when investigating alerts:

Infographic showing these steps: Alert, Analyze, Investigate, Assess impact, Contain, and Respond.

And as they go through this flow for every single alert—potentially hundreds in a week—it can quickly become overwhelming. In addition, the analysis and investigation often require correlating signals across multiple different systems. This can make effective and timely response very difficult and costly. There are just too many alerts to investigate and signals to correlate for today’s lean security teams.

To address these challenges, earlier this year we announced the preview of powerful automation capabilities to help improve the efficiency of security teams significantly. The security playbooks we introduced address some of the most common threats that security teams investigate in their day-to-day jobs and are modeled on their typical workflows.

This story from Ithaca College reflects some of the feedback we received from customers of the preview of these capabilities, including:

“The incident detection and response capabilities we get with Office 365 ATP give us far more coverage than we’ve had before. This is a really big deal for us.”
—Jason Youngers, Director and Information Security Officer, Ithaca College

Two categories of automation now generally available

Today, we’re announcing the general availability of two categories of automation—automatic and manually triggered investigations:

  1. Automatic investigations that are triggered when alerts are raised—Alerts and related playbooks for the following scenarios are now available:
    • User-reported phishing emails—When a user reports what they believe to be a phishing email, an alert is raised triggering an automatic investigation.
    • User clicks a malicious link with changed verdict—An alert is raised when a user clicks a URL, which is wrapped by Office 365 ATP Safe Links, and is determined to be malicious through detonation (change in verdict). Or if the user clicks through the Office 365 ATP Safe Links warning pages an alert is also raised. In both cases, the automated investigation kicks in as soon as the alert is raised.
    • Malware detected post-delivery (Malware Zero-Hour Auto Purge (ZAP))—When Office 365 ATP detects and/or ZAPs an email with malware, an alert triggers an automatic investigation.
    • Phish detected post-delivery (Phish ZAP)—When Office 365 ATP detects and/or ZAPs a phishing email previously delivered to a user’s mailbox, an alert triggers an automatic investigation.
  2. Manually triggered investigations that follow an automated playbook—Security teams can trigger automated investigations from within the Threat Explorer at any time for any email and related content (attachment or URLs).

Rich security playbooks

In each of the above cases, the automation follows rich security playbooks. These playbooks are essentially a series of carefully logged steps to comprehensively investigate an alert and offer a set of recommended actions for containment and mitigation. They correlate similar emails sent or received within the organization and any suspicious activities for relevant users. Flagged activities for users might include mail forwarding, mail delegation, Office 365 Data Loss Prevention (DLP) violations, or suspicious email sending patterns.

In addition, aligned with our Microsoft Threat Protection promise, these playbooks also integrate with signals and detections from Microsoft Cloud App Security and Microsoft Defender ATP. For instance, anomalies detected by Microsoft Cloud App Security are ingested as part of these playbooks. And the playbooks also trigger device investigations with Microsoft Defender ATP (for malware playbooks) where appropriate.

Let’s look at each of these automation scenarios in detail:

User reports a phishing email—This represents one of the most common flows investigated today. The alert is raised when a user reports a phish email using the Report message add-in in Outlook or Outlook on the web and triggers an automatic investigation using the User Reported Message playbook.

Screenshot of a phishing email being investigated.

User clicks on a malicious link—A very common vector used by attackers is to weaponize a link after delivery of an email. With Office 365 ATP Safe Links protection, we can detect such attacks when links are detonated at time-of-click. A user clicking such links and/or overriding the Safe Links warning pages is at risk of compromise. The alert raised when a malicious URL is clicked triggers an automatic investigation using the URL verdict change playbook to correlate any similar emails and any suspicious activities for the relevant users across Office 365.

Image of a clicked URL being assigned as malicious.

Email messages containing malware removed after delivery—One of the critical pillars of protection in Office 365 Exchange Online Protection (EOP) and Office 365 ATP is our capability to ZAP malicious emails. The “Email messages containing malware removed after delivery” alert triggers an investigation into similar emails and related user actions in Office 365 for the period when the emails were present in a user’s inbox. In addition, the playbook also triggers an investigation into the relevant devices for the users by leveraging the native integration with Microsoft Defender ATP.

Screenshot showing malware being zapped.

Email messages containing phish removed after delivery—With the rise in phishing attack vectors, Office 365 EOP and Office 365 ATP’s ability to ZAP malicious emails detected after delivery is a critical protection feature. The alert raised triggers an investigation into similar emails and related user actions in Office 365 for the period when the emails were present in a user’s inbox and also evaluates if the user clicked any of the links.

Screenshot of a phish URL being zapped.

Automated investigation triggered from within the Threat Explorer—As part of existing hunting or security operations workflows, Security teams can also trigger automated investigations on emails (and related URLs and attachments) from within the Threat Explorer. This provides Security Operations (SecOps) a powerful mechanism to gain insights into any threats and related mitigations or containment recommendations from Office 365.

Screenshot of an action being taken in the Office 365 Security and Compliance dash. An email is being investigated.

Try out these capabilities

Based on feedback from our public preview of these automation capabilities, we extended the Office 365 ATP events and alerts available in the Office 365 Management API to include links to these automated investigations and related artifacts. This helps security teams integrate these automation capabilities into existing security workflow solutions, such as SIEMs.

These capabilities are available as part of the following offerings. We hope you’ll give it a try.

Bringing SecOps efficiency by connecting the dots between disparate threat signals is a key promise of Microsoft Threat Protection. The integration across Microsoft Threat Protection helps bring broad and valuable insights that are critical to the incident response process. Get started with a Microsoft Threat Protection trial if you want to experience the comprehensive and integrated protection that Microsoft Threat Protection provides.

Machine intelligence could benefit from child’s play

CAMBRIDGE — Current progress in machine intelligence is newsworthy, but it’s often talked about out of context. MIT’s Josh Tenenbaum described it this way: Advances in deep learning are powering machines to accurately recognize patterns, but human intelligence is not just about pattern recognition, it’s about modeling the world.

By that, Tenenbaum, a professor of cognitive science and computation, was referring to abilities that humans possess such as understanding what they see, imagining what they haven’t seen, problem-solving, planning, and building new models as they learn.

That’s why, in the interest of advancing AI even further, Tenenbaum is turning to the best source of information on how humans build models of the world: children.

“Imagine if we could build a machine that grows into intelligence the way a person does — that starts like a baby and learns like a child,” he said during his presentation at EmTech 2018, an emerging technology conference hosted by MIT Technology Review.

Tenenbaum called the project a “moonshot,” one of several that researchers at MIT are exploring as part of the university’s new MIT Quest for Intelligence initiative to advance the understanding of human and machine intelligence. The “learning moonshot” is a collaborative effort by MIT colleagues, including AI experts, as well as those in early childhood development and neuroscience. The hope is to use how children learn as a blueprint to build a machine intelligence that’s truly capable of learning, Tenenbaum said.

The “quest,” as it’s aptly labeled, won’t be easy partly because researchers don’t have a firm understanding of how learning happens, according to Tenenbaum. In the 1950s, Alan Turing, father of the Turing test to analyze machine intelligence, presumed a child’s brain was simpler than an adult’s and that it was akin to a new notebook full of blank pages.

“We’ve now learned that Turing was brilliant, but he got this one wrong,” Tenenbaum said. “And many AI researchers have gotten this wrong.”

Child’s play is serious business

Instead, research such as that done by Tenenbaum’s colleagues suggests that newborns are already programmed to see and understand the world in terms of people, places and things — not just patterns and pixels. And that children aren’t passive learners but instead are actively experimenting, interacting with and exploring the world around them.

Imagine if we could build a machine that grows into intelligence the way a person does — that starts like a baby and learns like a child.
Josh Tenenbaum, professor of cognitive science and computation, MIT

“Just like science is playing around in the lab, children’s play is serious business,” he said. “And children’s play may be what makes human beings the smartest learners in the known universe.”

Tenenbaum described his job as identifying insights like these and translating them into engineering terms. Take common sense, for example. Kids are capable of stacking cups or blocks without a single physics lesson. They can observe an action they’ve never seen before and yet understand the desired outcome and how to help achieve that outcome.

In an effort to codify common sense, Tenenbaum and his team are working with new kinds of AI programming languages that leverage the pattern recognition advances by neural networks, as well as concepts that don’t fit neatly into neural networks. One example of this is probabilistic inference, which enables machines to use prior events to predict future events.

Game engines open window into learning

Tenenbaum’s team is also using game engines. These are used to simulate a player’s experience in real time in a virtual world. Common game engine components include the graphics engine, used to render 2D and 3D images, and the physics engine, which transposes the laws of physics from the real world to the virtual one. “We think they provide first approximations to the kinds of basic commonsense knowledge representation that are built into even the youngest children’s brains,” he said.

He said the game engines, coupled with probabilistic programming, capture data that helps researchers understand what a baby knows at 10 months or a year old, but the question remains: How does a baby learn how to build engines like these?

“Evolution might have given us something kind of like game engine programs, but then learning for a baby is learning to program the game engine to capture the program of their life,” he said. “That means learning algorithms have to be programming algorithms — a program that learns programs.”

Tenenbaum called this “the hard problem of learning.” To solve it, he’s focused on the easier problem of how people acquire simple visual concepts such as learning a character from a new alphabet without needing to see it a thousand times. Using Bayesian program learning, a machine learning method, researchers have been able to program machines to see an output, such as a character, and deduce how the output was created from one example.

It’s an admittedly small step in the larger landscape of machine intelligence, Tenenbaum said. “But we also know from history that even small steps toward this goal can be big.”

How HR can help with a digital divide in the U.S.

Technology is often a solution to modern HR challenges, but HR executive Robin Schooling was quick to point out that tech can also be a problem for employees without equal access or related skills.

For example, according to a Purdue University report released earlier this year, “Job and establishment growth between 2010 and 2015 was substantially lower in [U.S.] counties with the highest digital divide.”

Schooling, who is vice president of HR at Hollywood Casino in Baton Rouge, La., will speak about the digital divide in the U.S. at the forthcoming HR Technology Conference in Las Vegas. SearchHRSoftware is the media partner for the conference.

In a preview of her remarks, Schooling outlined the challenges and explained why her three-person HR team sometimes takes an old-school, hands-on approach to everything from training to benefits enrollment.

This interview was edited lightly for brevity and clarity.

When you talk about the digital divide in the U.S., what do you mean?

Robin Schooling: The tendency is to think, as we automate more and more things, that people and job seekers are just going to go right along with all of it. We think that all job seekers or employees have the same level of technology at their disposal. We think the knowledge base is there and that they have the sophistication to go along with the online journey, from job searching to applying for jobs to the onboarding programs. And then, once they’re in-house, they’re ready for any online training.

Robin Schooling, vice president of HR, Hollywood Casino

We’re sort of assuming everybody in the workforce is at a desk in a high-rise and that technical knowledge is at [their] disposal. There is an entire group of workers and industries that I fear are getting left further and further behind. It’s the technology haves and have-nots.

How does email play into the digital divide in the U.S.?

Schooling: In my particular world, I have less than 20% of my employees who have work email or access to network drives. They are not at a desk; they are on the floor in front of customers, and they don’t use technology day to day in their jobs. So, they don’t have company email addresses.

If you go back further, a lot of my job applicants don’t have personal email addresses. I probably have, on average, two people a week we talk to who apply for a job using somebody else’s email address. We discover when we get hold of them that they never got our email and that the person whose email address they used ‘didn’t let me know it came.’ Or, people make up email addresses.

As we talk to people, we find out there truly are people who just don’t have email. It’s not an age thing. I see people who are 22, and I see people who are 70. We get a lot of calls.

[When we get a resume emailed to us,] we have auto reply, but sometimes it goes to a spam folder, and they don’t know how to check a spam folder. We have applicants who don’t have desktops or tablets. Their phone is it. And they don’t know how to navigate email addresses even if they have email. Instead, we get a lot of calls. It’s like 1989.

So, we’ve tried things with texting. We’ve tried a cellphone call. But it gets back to the low-wage, entry-level worker with a pay-as-you-go phone. They tried to apply, but we can’t reach them on it. It’s a challenge. I don’t know the answer to the problem. But we have to look at finding multiple ways to connect with people on the applicant side.

There’s a digital divide in the U.S., but change has to start somewhere. What are you doing in the office to help the tech have-nots?

Schooling: As we have automated, we enhanced some of the offerings through the system we have for our employees, because we don’t have folks who sit at a desk. We can put things out in the cloud to our employee self-service portal, but we’re still struggling with employees getting access if they’re doing it through their phone.

We have banks of computers in the break room, but many employees have challenges accessing them. They don’t know how to use a keyboard or a mouse. That still exists. Everybody doesn’t work on the East or West Coasts, and we’re not all on Slack.

And it’s not just the tech providers [that contribute to the digital divide in the U.S.]; it’s the HR service providers. Every year since I’ve been here, as part of our wellness initiatives, we have a third party come in and do biometric screenings. You get a checkup and sit with a nurse practitioner who logs you in to your account. We do this so that we get you into this wellness tracker for follow-up steps. In order to do that, you have to set up your own account, and that requires an email address and two-factor authentication.

Here’s the challenge: I’ve got a good 30 employees that do not have a personal email address. They may have access to their mothers’ or their wives’ or their sisters’ [email addresses]. I told the vendor that people coming in can’t set up an account because they do not have a personal email, or they are using a family member’s [email] who is not there to do two-factor authentication. How do we do that? How do we serve those people? The provider didn’t quite believe me.

We know the people that don’t have email. How do we solve this? As an HR team, what we do for our population is we try to help people as much as we can to set up accounts. We spend quite a bit of time amongst the team going to Gmail to set up an account and set it up on their phone for them. We try to help them create passwords and show them how to remember where it’s stored. It’s a challenge to do this one-on-one to help our folks as much as we can, but I think it’s important.

In my particular world, I have less than 20% of my employees who have work email or access to network drives.
Robin Schooling, vice president of HR, Hollywood Casino

It’s like old-school HR. It’s very hands-on and in your face. If you have a small or midsize business and your workforce is all in one place, what you can do there is kind of what we do. It’s handholding and bringing along one person at a time. Sometimes, it’s even just stopping and asking: If we are going to communicate something or expect an employee to go to a website to accomplish something for a job, are they equipped to do that?

When we think about the digital divide in the U.S., what can be done to help narrow it?

Schooling: This is an issue that worries me. We are just getting further and further away from thinking about those people who need to find jobs or are working hard, but are sitting in companies that don’t realize that perhaps there are folks being left behind. These are people who can’t do a really cool learning module on their phone.

I have people that are not hipsters — they have a flip phone, but don’t have a data plan. And if there is Wi-Fi, they need to have someone show them how to hook up. They can’t sit at home and do onboarding videos or learning snippets.

At the end of the day, I’m thinking about it from [different] sides. Are the vendors remembering when creating products to include the whole audience? Are HR practitioners aware that you need to do more than meet them where they are, but actually bring them with you?

And it’s important to remember it’s not a generational thing. Some folks coming out of school, college grads even, find themselves in this boat. They come from low-income families, and they’ve gotten higher ed, but they struggle with the access to the tools and the tech and the knowledge of how to use them.