
Arcserve enhances portfolio of Sophos-secured backup

Restoring from backups is often the last resort when data is compromised by ransomware, but savvy criminals are also targeting those backups.

Arcserve enhanced its Sophos partnership to provide cybersecurity aimed at safeguarding backups, preventing cybercriminals from taking out organizations’ last line of ransomware defense. The Secured by Sophos line of Arcserve products, originally consisting of on-premises appliances that integrated Arcserve backup and Sophos security, extended its coverage to SaaS and cloud with two new entries: Arcserve Cloud Backup for Office 365 and Arcserve Unified Data Protection (UDP) Cloud Hybrid.

Arcserve UDP Cloud Hybrid Secured by Sophos is an extension to existing Arcserve software and appliances. It replicates data to the cloud, and the integrated Sophos Intercept X Advanced software scans the copies for malware and other security threats. The Sophos software recognizes the difference between encryption performed by normal backup processes and unauthorized encryption from bad actors.

Arcserve Cloud Backup for Office 365 Secured by Sophos is a stand-alone product for protecting and securing Office 365 data. It also uses Sophos Intercept X Advanced endpoint security, and it can do backup and restore for Microsoft Exchange emails, OneDrive and SharePoint.

Both new products are sold on an annual subscription model, with pricing based on storage and compute.

IDC research director Phil Goodwin described what has been an escalating battle between organizations and cybercriminals. Data protection vendors keep improving their products, and organizations keep learning more about backups. This trend allows companies to quickly and reliably restore their data from backups and avoid paying ransoms. Criminals, in turn, learn to target backups.

“Bad guys are increasingly attacking backup sets,” Goodwin said.

Arcserve's Secured by Sophos products combine security and backup, specifically protecting backup data from cyberthreats. Organizations can realign their security to encompass backup data, but Arcserve's products provide security out of the box. Goodwin said Acronis is the only other vendor he could think of that has security integrated into backup, while others such as IBM have data protection and security as separate SKUs.

Sophos Intercept X Advanced is now running in the cloud, scanning Office 365 backup data for malware.

From a development standpoint, security and data protection call on different skill sets, but both are necessary for combating ransomware. Goodwin said combining the two makes for a stronger defense system.

Oussama El-Hilali, CTO at Arcserve, said adding Office 365 to the Secured by Sophos line was important because more businesses are adopting the platform than in the past. There was already an upward trend of businesses putting mission-critical data on SharePoint and OneDrive, but the boost in remote work deployments caused by the COVID-19 pandemic accelerated that.

El-Hilali said the pandemic has increased the need for protecting data in clouds and SaaS applications more for SMBs than for enterprises, because larger organizations may have large on-premises storage arrays they can use. The Office 365 product is sold stand-alone because many smaller businesses need only an Office 365 data protection component and nothing on premises.

“The [coronavirus] impact is more visible in the SMB market. A small business is probably using a lot of SaaS, and probably doesn’t have a lot of data on-prem,” El-Hilali said.

Unfortunately, Office 365's native data retention, backup and security features are insufficient in a world where many users access their data from endpoint and mobile devices. Goodwin said there is a strong market need, and third parties such as Arcserve are seizing the opportunity.

“There’s a big opportunity there with Office 365 — it’s one of the greatest areas of vulnerability from the perspective of SaaS apps,” Goodwin said.


Enterprises struggle to learn Microsoft Sonic networking

Enterprises learning how to use Microsoft Sonic in a production environment often struggle with the lack of management tools for the open source network operating system.

Other challenges revealed this week during a panel discussion at the OCP Virtual Summit included weak support for Sonic hardware. Also, the panelists said engineers had to work hard to understand how to operate the software.

The companies that participated in the discussion included Target, eBay, T-Mobile, Comcast and Criteo. All of them plan to eventually make Sonic their primary network operating system in the data center.

In general, they are seeking vendor independence and more control over the development and direction of their networks. They expect to achieve network automation similar to that of Sonic users Facebook and Microsoft, which built Sonic and gave it to the Open Compute Project (OCP) for further development.

Challenges with Microsoft Sonic

Target is at the tail end of its evaluation of Sonic. The retailer plans to use it to power a single super-spine within a data center fabric, said Pablo Espinosa, vice president of engineering. The company plans to put a small percentage of a production workload on the network operating system (NOS) in the next quarter.

Eventually, Target wants to use Sonic to provide network connectivity to hundreds of microservices running on cloud computing environments. Target has virtualized almost 85% of its data centers to support cloud computing.

Target’s engineers have experience in writing enterprise software but not code to run on a NOS. Therefore, the learning curve has been steep, Espinosa said. “We’re still building this muscle.”

As a result, Target has turned to consultants to develop enterprise features for Sonic and take it through hardware testing, regression testing and more, Espinosa said.

Online advertising company Criteo was the only panel participant to have Sonic in production. The company is using the NOS on the spine and super-spine level in one of nine network fabrics, engineering manager Thomas Soupault said. The system has 64 network devices serving 3,000 servers.

Also, the company is building a 400 Gb Ethernet data center fabric in Japan that will run only Sonic. The network will eventually provide connectivity to 10,000 servers.

One of Criteo’s most significant problems is getting support for low-level issues in the open hardware running the NOS. Manufacturers won’t support any software unless required to in the contract.

Therefore, companies should expect difficult negotiations over support for drivers, the software development kit for the ASIC, and the ASIC itself. Other areas of contention include the switch abstraction interface that comes with the device for loading the buyer’s NOS of choice, Soupault said.

“It can be tricky,” he said. “When we asked all these questions to manufacturers, we got some good answers, and some very bad answers, too.”

Soupault stopped short of blaming manufacturers. Buyers and vendors are still struggling with the support model for Sonic. “If we could clarify this area, it might help others on Sonic” and boost adoption, he said.

Network management tools for Sonic are also in their infancy. Within eBay, developers are building agents and processes on the hardware for detecting problems with links and optics, said Parantap Lahiri, vice president of data center engineering at the online marketplace. However, discovering the problems is only the first step — eBay is still working on tools for identifying the root cause of problems.


Comcast is developing a repository for streaming network telemetry that network monitoring tools could analyze to pinpoint problems, said Yiu Lee, the company’s vice president of network architecture. However, Comcast could use help from OCP members.

“We hope that the community will come together to build the tools and make the product easier to manage [through] more visibility for the operations teams,” he said.

Some startups are trying to fill the void. Network automation startup Apstra announced at the summit support for Sonic-powered leaf, spine and super-spine switches.

Going slowly with Microsoft Sonic

The panelists advised companies that want to use Sonic to start with a low-risk deployment with a clearly defined use case. They also recommended choosing engineers who are willing to learn different methods for operating a network.

Lahiri from eBay suggested that companies initially deploy Sonic on a single spine within a group. That would provide enough redundancy to overcome a Sonic failure.

Soupault advised designing a network architecture around Sonic. Criteo is using the NOS in an environment similar to that of Facebook and Microsoft, he said. “Our use case is very close to what Sonic has been built for.”

A company that wants to use the NOS should also be prepared to funnel the money saved on licensing into hiring people with the right skill sets, which should include an understanding of Linux.

Microsoft built Sonic on Linux, the open source operating system used mostly in servers. So, engineers have to know how to manage a Linux system and the containers inside it, Lahiri said.


Getting a handle on certificate management in Windows shops

Certificate management is one thing that IT pros often forget until an application fails or resources are unavailable because a certificate was not renewed before its expiration date.

Certificates are typically used to identify a webpage as a known site to create an encrypted HTTPS session. Most static webpages don’t use them. With known secure pages, the certificate handling is often done behind the scenes.

Certificates also manage authentication and communication between systems across an organization’s network; a lapsed certificate in your data center can have serious consequences, such as preventing users from logging into Microsoft Exchange to access email and calendars.

As an administrator, you can check certificates in Windows by running certmgr.msc at the command prompt to open the Certificates Microsoft Management Console (MMC) snap-in tool.

On the surface, it doesn’t look too difficult to manage certificates, but problems with them have caused some of the largest applications in the world to go offline.

The Certificates MMC snap-in tool displays the installed certificates on the current Windows machine.

The most common use of certificates is to establish a secure communication tunnel with a website so that both your login information and what you do are hidden from the rest of the internet. For example, when you load LinkedIn, the site uses a certificate to encrypt communication between your machine and the site using Secure Sockets Layer (SSL).

As you look at the websites you visit, you are likely to find that many that use login information also have certificates to protect your privacy. These certificates are not permanent; they expire. When I checked, the LinkedIn certificate was due to expire in September. An expired certificate will cause problems: once you cannot establish a secure connection, a website can simply go dark until the certificate is renewed.
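You can read a site's certificate expiration date yourself with a short script. The sketch below uses Python's standard ssl and socket modules; the date format shown in the comment is the usual form of a certificate's notAfter field, and any hostname you pass is just an example.

```python
import socket
import ssl
from datetime import datetime, timezone

def parse_not_after(not_after: str) -> datetime:
    """Parse a certificate's 'notAfter' field, e.g. 'Sep  1 12:00:00 2025 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return expires.replace(tzinfo=timezone.utc)

def cert_days_remaining(hostname: str, port: int = 443) -> int:
    """Open a TLS connection and return the days until the server's certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return (parse_not_after(cert["notAfter"]) - datetime.now(timezone.utc)).days
```

A call such as cert_days_remaining("www.linkedin.com") would report how close that September expiration is, without opening a browser.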

Like many sites on the internet, LinkedIn uses a certificate to secure the traffic between the site and its users.

While losing LinkedIn might not be drastic, what if it were the certificate for a cloud-based application you use? Or worse yet, what if it were your company's application, and now your customers can't access their data? An expiring certificate is simple to overlook, and certificate management problems happen to even the largest companies, including Microsoft. It costs next to nothing to renew these certificates, but once they pass their expiration date, the resulting chaos can cost money and cause embarrassment for the IT staff.

Certificates often remain out of sight, out of mind

One of the main challenges with certificates is they remain hidden in plain sight. They are not complex to deal with and often last several years.

IT admins are used to the hustle and critical needs of the many IT services that remain front of mind. Because certificates last for a long time, often several years, their importance fades into the background; they fall off the daily list of tasks that must be completed.

It’s easy enough to check the status of your certificates in Windows, but there is no mechanism to alert you about an imminent expiration. For some sites, it’s possible to click past the warning you might see when a certificate has expired; we train our users to avoid these types of potential security risks, so why is it an option to proceed? This practice doesn’t work for other key functions, such as single sign-on; other more automated functions will simply stop working when the certificate expires.

Certificate management issues happen for several reasons

Renewal of certificates is not hard and can be done by even the most junior person on your team, except for one critical piece: You need a company credit card to charge the renewal to, and those are typically not given to junior admins. The stigma of needing to ask permission to use a corporate credit card or wanting to avoid the hassle of getting reimbursed can prevent IT staff from proceeding.

Oftentimes, this certificate task falls outside the realm of IT and into the accounting department. That also means accounting gets the renewal notices, and they may not understand how critical those notices are until it's too late.

If both the communication related to and the payment of the certificates is outside of the main IT department, then it’s up to IT to be proactive and stay on top of certificate management. You should not rely on an email or a spreadsheet to track these expiration dates. A group calendar appointment, even years out, still helps, even when turnover occurs. There are also several vendors that offer certificate management add-ons to popular monitoring tools, such as SolarWinds and Quest Software.

While you don't want to reinvent or deploy large-scale solutions to address certificate management, it's not something to ignore. Certificates can be at the root of many wide-ranging issues, yet an expiring certificate rarely appears in any disaster recovery or backup plan. Look to incorporate certificate monitoring into existing tool sets so your staff has ample time to renew and deploy certificates before your secure connections go offline, along with your customers and your reputation.

Checking a certificate isn't hard and the renewal process isn't difficult, but remembering to stay on top of certificate management continues to evade many IT shops. Another complication is the number of certificates to keep track of. You might have multiple sites, each with its own certificate, all of which are required to make one application work. It is easy to lose track of one, which can then cause a cascade of events that leads to application failure. Co-terming certificates to line up their expiration dates would make the most sense, but that is not possible in every environment.
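A lightweight way to keep many certificates in view is to gather a days-until-expiry figure for each site (for example, by reading each certificate over TLS) and flag anything inside a renewal window. A minimal sketch; the 30-day threshold and the hostnames are arbitrary examples.

```python
def expiring_soon(days_by_host: dict[str, int], warn_days: int = 30) -> list[str]:
    """Given {hostname: days until expiry}, return hosts needing renewal, soonest first."""
    flagged = [(days, host) for host, days in days_by_host.items() if days <= warn_days]
    return [host for _, host in sorted(flagged)]

# Hypothetical inventory of sites and their remaining certificate lifetimes:
inventory = {"app.example.com": 400, "sso.example.com": 12, "api.example.com": 29}
# expiring_soon(inventory) -> ["sso.example.com", "api.example.com"]
```

Feeding a report like this into an existing monitoring tool keeps the expiration dates out of easily forgotten spreadsheets.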


Epicor ERP system focuses on distribution

Many ERP systems try to be all things to all use cases, but that often comes at the cost of heavy customization.

Some companies are discovering that a purpose-built ERP is a better and more cost-effective bet, particularly for small and midsize companies. One such product is the Epicor ERP system Prophet 21, which is primarily aimed at wholesale distributors.

The functionality in the Epicor ERP system is designed to help distributors run processes more efficiently and make better use of data flowing through the system.

In addition to distribution-focused functions, the Prophet 21 Epicor ERP system includes the ability to integrate value-added services, which could be valuable for distributors, said Mark Jensen, Epicor senior director of product management.

“A distributor can do manufacturing processes for their customers, or rentals, or field service and maintenance work. Those are three areas that we focused on with Prophet 21,” Jensen said.

Prophet 21’s functionality is particularly strong in managing inventory, including picking, packing and shipping goods, as well as receiving and put-away processes.

Specialized functions for distributors

Distribution companies that specialize in certain industries or products have different processes that Prophet 21 includes in its functions, Jensen said. For example, Prophet 21 has functionality designed specifically for tile and slab distributors.

“The ability to be able to work with the slab of granite or a slab of marble — what size it is, how much is left after it’s been cut, transporting that slab of granite or tile — is a very specific functionality, because you’re dealing with various sizes, colors, dimensions,” he said. “Being purpose-built gives [the Epicor ERP system] an advantage over competitors like Oracle, SAP, NetSuite, [which] either have to customize or rely on a third-party vendor to attach that kind of functionality.”

Jergens Industrial Supply, a wholesale supplies distributor based in Cleveland, has improved efficiency and is more responsive to shifting customer demands using Prophet 21, said Tony Filipovic, Jergens Industrial Supply (JIS) operations manager.


"We like Prophet 21 because it's geared toward distribution and was the leading product for distribution," Filipovic said. "We looked at other systems that say they do manufacturing and distribution, but I just don't feel that that's the case. Prophet 21 is something that's been top of the line for years for distribution needs."

One of the key differentiators for JIS was Prophet 21’s inventory management functionality, which was useful because distributors manage inventory differently than manufacturers, Filipovic said.

“All that functionality within that was key, and everything is under one package,” he said. “So from the moment you are quoting or entering an order to purchasing the product, receiving it, billing it, shipping it and paying for it was all streamlined under one system.”

Another key new feature is an IoT-enabled button similar to Amazon Dash buttons that enables customers to resupply stocks remotely. This allows JIS to “stay ahead of the click” and offer customers lower cost and more efficient delivery, Filipovic said.

“Online platforms are becoming more and more prevalent in our industry,” he said. “The Dash button allows customers to find out where we can get into their process and make things easier. We’ve got the ordering at the point where customers realize that when they need to stock, all they do is press the button and it saves multiple hours and days.”

Epicor Prophet 21 a strong contender in purpose-built ERP

Epicor Prophet 21 is on solid ground with its purpose-built ERP focus, but companies have other options they can look at, said Cindy Jutras, president of Mint Jutras, an ERP research and advisory firm in Windham, NH.

"Epicor Prophet 21 is a strong contender from a feature and function standpoint. I'm a fan of solutions that go that last mile for industry-specific functionality, and there aren't all that many for wholesale distribution," Jutras said. "Infor is pretty strong, NetSuite plays here, and then there are a ton of little guys that aren't as well-known."

Prophet 21 may take advantage of new cloud capabilities to compete better in some global markets, said Predrag Jakovljevic, principal analyst at Technology Evaluation Centers, an enterprise computing analysis firm in Montreal.

“Of course a vertically-focused ERP is always advantageous, and Prophet 21 and Infor SX.e go head-to-head all the time in North America,” Jakovljevic said. “Prophet 21 is now getting cloud enabled and will be in Australia and the UK, where it might compete with NetSuite or Infor M3, which are global products.”


Why reducing hiring bias isn’t easy

Hiring bias — often unconscious — is one very pervasive issue that gets in the way of diversity and inclusion initiatives.

As the gatekeepers to employment, HR teams must recognize their biases. Recruiters form both conscious and unconscious biases when seeking out new candidates and may miss out on hiring someone who would excel within the company. Vendors promise AI technology and software can help fix this hiring bias, but it may not always help solve the problem.

In this Q&A, Stacia Sherman Garr, co-founder and head analyst at RedThread Research, discusses her thoughts on diversity and inclusion obstacles, why hiring biases exist and whether AI can fix this issue. 

Can you define diversity and inclusion?


Stacia Sherman Garr: Diversity is a variation in backgrounds, beliefs and experiences, with respect to gender, race, ethnicity, language and mental abilities. In a simplified version, there is visible diversity [such as gender], which we tend to see as being legally protected; and then invisible diversity [such as sexual orientation and class], which tends to be more of those things that are not immediately obvious but still influences people’s perspectives.

Inclusion is about allowing the equitable and fair distribution of resources within an organization, which allows all employees to be appreciated for their unique contributions and to feel they belong to the formal as well as informal networks within it.

What gets in the way of companies reaching this goal?


Garr: One of the biggest things is that, fundamentally, this is about the culture that an organization has, and changing a culture is difficult. When you think about it from that perspective, it means that diversity and inclusion can’t just be an on-the-side initiative. It’s not enough to just have an employee resource group; you actually need to be reinforcing a diverse, inclusive way of thinking. This factors into how HR teams acquire talent, promote employees, give feedback and coach people, so it is systemic change and that’s why it is hard. 

Why is there a sudden demand for diversity and inclusion?

Garr: Demographics, particularly in the United States, have been changing. We see an age demographic shift and an ethnicity demographic shift. Younger people, or people from underrepresented groups, are more likely to bring up their perspectives. They’re more likely to push issues and are less afraid to do it in a work context than the previous generations.

The second reason is that overall work has globalized, so we’re seeing workplaces becoming more multicultural with more freelancers or virtual work. 

Third, there is a relationship between diversity and inclusion and business outcomes. There’s research that shows this, so organizations are seeing the connection between diversity and inclusion and financial goals.   

Finally, #MeToo brought this all to a head. It wasn’t just that #MeToo was about sexual harassment. What #MeToo underscored for HR leaders, in particular, was the role of culture with regard to people being treated fairly and equitably in our workplace. If you look at the numbers of underrepresented groups in leadership, you see we’ve been working on this problem for years but things haven’t shifted. The combination of the heightened focus and frustration, plus the technological advances [such as in AI] are why we’ve seen a lot of the technology come to the fore.

What are your thoughts on the hiring bias as part of diversity and inclusion?

Garr: People aren’t necessarily conscious of bias, which makes this issue pretty complex. It could certainly be that every human has a bias toward people who look and talk and think like they do and that can seep into the hiring processes, whether it’s the recruiter or the hiring manager. It can be conscious in that they may think, “I have seen X type of person from Y university succeed here, therefore I think that that’s what we need in this role,” where that may not be the case in terms of those being the necessary factors to result in success.

When we then translate that into this conversation about AI, it’s important to note that AI is just advanced math. All it’s doing is pattern recognition and learning from previous patterns. If we as humans have had a challenge with bias in the past, then a technology whose sole job is to look at our patterns of the past and deduce from that is going to extrapolate some level of bias, if it goes unchecked.

Software vendors market products that would “fix” this hiring bias. What are your thoughts on that?

Garr: I do not believe it’s possible to completely eradicate all biases. I think that there are ways to reduce the biases that exist. With additional analysis capability we can see some of the things that humans have done that have bias within them, or indicated bias on a systemic level, and address those. It should be the obligation of vendors to work on this, and they should be very transparent about what they’re doing to address it. But I am skeptical of any vendor that says it has wholly eliminated bias.

In 2018, Amazon had to scrap an AI recruiting tool that showed bias against women. What does that say about the use of AI to improve diversity?

Garr: What it tells us first and foremost is that we shouldn't allow engineers to run rampant without HR intervention. In that instance, HR was not at all a part of that technology development; it was done by a bunch of engineers in the business. It also underscores the importance of oversight and testing. Once developers have built something, it needs to go through rigorous tests to understand, "Is there bias here, and if there is, how can we address it?" Technology shows that there's been an event in the past, and we need to have some way of foreseeing that for the future. Unfortunately, the incident has been painted in a negative light, but I do not think it should be used to cast all AI that way.

How important is cognitive diversity today in relation to this hiring bias?

Garr: It is a way for us to bring in different perspectives, to push for new ideas and to build things that haven’t been there before. Much innovation comes from the intersection of more than two existing knowledge bases that people haven’t combined before, and that is fundamentally cognitive diversity. But it’s important to not forget other diversity. There’s actually been some studies that show just by having visibly diverse individuals, it actually forces the other people in the group to take a different perspective. So cognitive diversity is important, but we shouldn’t forget visible diversity as well.

Let’s say companies have hired these diverse employees; what comes next in terms of inclusion?

Garr: Making sure that the organization has a culture that's open to diverse perspectives. A number of organizations are using organizational network analysis to understand how to connect people into the organization more effectively. And then there are all sorts of tools available. Historical diversity tools such as employee resource groups or action committees can help with some of this.

Then there is taking a hard look at all the various talent, practices and processes, and adjusting the organization’s approach so that they are open and aware of what’s necessary from an inclusion perspective. Heightening people’s awareness through all the different practices of an organization of what they need to do to be inclusive is really important.


How to repair Windows Server using Windows SFC and DISM

Over time, system files in a Windows Server installation might require a fix. You can often repair the operating system without taking the server down by using Windows SFC or the more robust and powerful Deployment Image Servicing and Management commands.

Windows System File Checker (SFC) and Deployment Image Servicing and Management (DISM) are administrative utilities that can alter system files, so they must be run in an administrator command prompt window.

Start with Windows SFC

The Windows SFC utility scans and verifies version information, file signatures and checksums for all protected system files on Windows desktop and server systems. If the command discovers missing protected files or alterations to existing ones, Windows SFC will attempt to replace the altered files with a pristine version from the %systemroot%\System32\dllcache folder.

The system logs all activities of the Windows SFC command to the %windir%\Logs\CBS\CBS.log file. If the tool reports any nonrepairable errors, then you'll want to investigate further. Search for the word corrupt to find most problems.
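Rather than eyeballing the log, you can filter it for lines that mention corruption. A short sketch in Python; the commented path is the standard CBS.log location on the server being checked, and the helper itself just takes the log contents as text.

```python
from pathlib import Path

def corruption_lines(log_text: str) -> list[str]:
    """Return the CBS.log lines that mention corruption, the usual sign of an SFC problem."""
    return [line for line in log_text.splitlines() if "corrupt" in line.lower()]

# On the server itself (standard SFC log location):
# log = Path(r"C:\Windows\Logs\CBS\CBS.log").read_text(encoding="utf-8", errors="ignore")
# for line in corruption_lines(log):
#     print(line)
```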

Windows SFC command syntax

Open a command prompt with administrator rights and run the following command to start the file checking process:

C:\Windows\System32>sfc /scannow

The /scannow parameter instructs the command to run immediately. It can take some time to complete — up to 15 minutes on servers with large data drives is not unusual — and usually consumes 60%-80% of a single CPU for the duration of its execution. On servers with more than four cores, it will have a slight impact on performance.

The Windows SFC /scannow command examines protected system files for errors.

There are times Windows SFC cannot replace altered files. This does not always indicate trouble. For example, recent Windows builds have included graphics driver data that was reported as corrupt, but the problem is with Windows file data, not the files themselves, so no repairs are needed.

If Windows SFC can’t fix it, try DISM

The DISM command is more powerful and capable than Windows SFC. It also checks a different file repository — the %windir%\WinSXS folder, aka the "component store" — and is able to obtain replacement files from a variety of potential sources. Better yet, the command offers a quick way to check an image before attempting to diagnose or repair problems with that image.

Run DISM with the following parameters:

C:\Windows\System32>dism /Online /Cleanup-Image /CheckHealth

Even on a server with a huge system volume, this command usually completes in less than 30 seconds and does not tax system resources. Unless it finds some kind of issue, the command reports back “No component store corruption detected.” If the command finds a problem, this version of DISM reports only that corruption was detected, but no supporting details.

Corruption detected? Try ScanHealth next

If DISM finds a problem, then run the following command:

C:\Windows\System32>dism /Online /Cleanup-Image /ScanHealth

This more elaborate version of the DISM image check will report on component store corruption and indicate if repairs can be made.

If corruption is found and it can be repaired, it’s time to fire up the /RestoreHealth directive, which can also work from the /online image, or from a different targeted /source.

Run the following commands using the /RestoreHealth parameter to replace corrupt component store entries:

C:\Windows\System32>dism /Online /Cleanup-Image /RestoreHealth

C:\Windows\System32>dism /source:<path> /Cleanup-Image /RestoreHealth

You can drive file replacement from the running online image with the same syntax as the first of the preceding commands. But it often happens that local copies aren’t available, or are no more correct than the contents of the local component store itself. In that case, use the /source directive to point to a Windows image file (a .wim file or an .esd file), a known-good WinSxS folder from an identically configured machine, or a known-good backup of the same machine.

By default, the DISM command will also try downloading replacement components from Windows Update; this can be turned off with the /LimitAccess parameter. For details on the /source directive syntax, the TechNet article “Repair a Windows Image” is invaluable.
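The CheckHealth, ScanHealth and RestoreHealth sequence described above lends itself to scripting: run the cheap check first and escalate only when corruption is reported. The following Python sketch shows one way to do that; the DISM parameters come from this article, but the orchestration and output parsing are illustrative, and the script must be run from an elevated prompt on Windows to do anything useful.

```python
import platform
import subprocess

DISM_BASE = ["dism", "/Online", "/Cleanup-Image"]

def corruption_reported(dism_output: str) -> bool:
    """Return True unless DISM reports a clean component store."""
    return "No component store corruption detected" not in dism_output

def run_dism(step: str) -> str:
    """Run one DISM health step: /CheckHealth, /ScanHealth or /RestoreHealth."""
    result = subprocess.run(DISM_BASE + [step], capture_output=True, text=True)
    return result.stdout

def check_and_repair() -> str:
    # Quick check first; it usually completes in under 30 seconds.
    if not corruption_reported(run_dism("/CheckHealth")):
        return "component store is clean"
    # The deeper scan reports whether the corruption is repairable.
    run_dism("/ScanHealth")
    # Repair from the online image; append a /Source:... argument here
    # to use a known-good .wim/.esd file or WinSxS folder instead.
    run_dism("/RestoreHealth")
    return "repair attempted"

if __name__ == "__main__" and platform.system() == "Windows":
    print(check_and_repair())  # requires an elevated command prompt
```

The guard at the bottom keeps the script inert on non-Windows machines, and the string check mirrors the “No component store corruption detected” message quoted earlier.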

DISM is a very capable tool well beyond this basic image repair maneuver. I’ve compared it to a Swiss army knife for maintaining Windows images. Windows system admins will find DISM to be complex and sometimes challenging but well worth exploring.

Go to Original Article

Gartner Names Microsoft a Leader in the 2019 Enterprise Information Archiving (EIA) Magic Quadrant – Microsoft Security

We often hear from customers about the explosion of data, and the challenge this presents for organizations in remaining compliant and protecting their information. We’ve invested in capabilities across the landscape of information protection and information governance, inclusive of archiving, retention, eDiscovery and communications supervision. In Gartner’s annual Magic Quadrant for Enterprise Information Archiving (EIA), Microsoft was named a Leader again in 2019.

According to Gartner, “Leaders have the highest combined measures of Ability to Execute and Completeness of Vision. They may have the most comprehensive and scalable products. In terms of vision, they are perceived to be thought leaders, with well-articulated plans for ease of use, product breadth and how to address scalability.” We believe this recognition represents our ability to provide best-in-class protection and deliver on innovations that keep pace with today’s compliance needs.

This recognition comes at a great point in our product journey. We are continuing to invest in solutions that are integrated into Office 365 and address customers’ information protection and information governance needs. Earlier this month, at our Ignite 2019 conference, we announced updates to our compliance portfolio, including new data connectors; machine learning-powered governance, retention, discovery and supervision; and innovative capabilities such as threading Microsoft Teams or Yammer messages into conversations, allowing you to efficiently review and export complete dialogues with context, not just individual messages. Many customers tell us these are the types of advancements that help them meet their compliance requirements more efficiently, without impacting end-user productivity.

Learn more

Read the complimentary report for the analysis behind Microsoft’s position as a Leader.

For more information about our Information Archiving solution, visit our website and stay up to date with our blog.

Gartner Magic Quadrant for Enterprise Information Archiving, Julian Tirsu, Michael Hoeck, 20 November 2019.

*This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft.

Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

Go to Original Article
Author: Steve Clarke

Kasten backup aims for secure Kubernetes protection

People often talk about Kubernetes “Day 1,” when you get the platform up and running. Now Kasten wants to help with “Day 2.”

Kasten’s K10 is a data management and backup platform for Kubernetes. The latest release, K10 2.0, focuses on security and simplicity.

K10 2.0 includes support for Kubernetes authentication, role-based access control, OpenID Connect, AWS Identity and Access Management roles, customer-managed keys, and integrated encryption of artifacts at rest and in flight.

“Once you put data into storage, the Day 2 operations are critical,” said Krishnan Subramanian, chief research advisor at Rishidot Research. “Day 2 is as critical as Day 1.”

Day 2 — which includes data protection, mobility, backup and restore, and disaster recovery — is becoming a pain point for Kubernetes users, Kasten CEO Niraj Tolia said.

“In 2.0, we are focused on making Kubernetes backup easy and secure,” Tolia said.

Other features of the new Kasten backup software, which became generally available earlier in November, include a Kubernetes-native API, auto-discovery of the application environment, policy-driven operations, multi-tenancy support, and advanced logging and monitoring. The software lets teams operate their environments while supporting developers’ ability to use the tools of their choice, according to the vendor.

Screenshot: The Kasten K10 dashboard, which provides data management and backup for Kubernetes.
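To illustrate what “policy-driven operations” with a Kubernetes-native API looks like in practice, a backup policy is typically declared as a Kubernetes custom resource. The fragment below is a sketch only; the field names and values are not taken from Kasten’s documentation, and the actual K10 policy schema may differ.

```yaml
apiVersion: config.kio.kasten.io/v1alpha1   # illustrative API group/version
kind: Policy
metadata:
  name: nightly-backup                      # illustrative policy name
  namespace: kasten-io
spec:
  frequency: "@daily"                       # run the backup once a day
  retention:
    daily: 7                                # keep the last 7 daily snapshots
  actions:
    - action: backup
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: my-app    # illustrative target application
```

Declaring backup intent this way keeps Day 2 operations in the same declarative, version-controllable form as the rest of a Kubernetes deployment.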

Kasten backup eyes market opportunity

Kasten, which launched its original product in December 2017, generally releases an update to its customers every two weeks. An ordinary update, smaller in scope than 2.0, includes bug fixes, new features and deeper support for existing features. Tolia said there were 55 releases between 1.0 and 2.0.


Backup for container storage has become a hot trend in data protection. Kubernetes specifically is an open source system used to manage containers across private, public and hybrid cloud environments. Kubernetes can be used to manage microservice architectures and is deployable on most cloud providers.

“Everyone’s waking up to the fact that this is going to be the next VMware,” as in, the next infrastructure of choice, Tolia said.

Kubernetes backup products are popping up, but it looks like Kasten is a bit ahead of its time, Rishidot’s Subramanian said. He said he is seeing more enterprises using Kubernetes in production, for example, in moving legacy workloads to the platform, and that makes backup a critical element.

“Kubernetes is just starting to take off,” Subramanian said.

Kubernetes backup “has really taken off in the last two or three quarters,” Tolia said.

Subramanian said he is starting to see legacy vendors such as Dell EMC and NetApp tackling Kubernetes backup, as well as smaller vendors such as Portworx and Robin. He said Kasten had needed stronger security but caught up with K10 2.0. Down the road, he said he will look for Kasten to improve its governance and analytics.

Tolia said Kasten backup stands out because it’s “purpose-built for Kubernetes” and extends into multilayered data management.

In August, Kasten, which is based in Los Altos, Calif., closed a $14 million Series A funding round, led by Insight Partners. Tolia did not give Kasten’s customer count but said it has deployments across multiple continents.

Go to Original Article

How to achieve explainability in AI models

When machine learning models deliver problematic results, they can do so in ways that humans can’t make sense of, and this becomes dangerous when no one understands the model’s limitations, particularly for high-stakes decisions. Without straightforward and simple tools that highlight explainability in AI models, organizations will continue to struggle to implement AI algorithms. Explainable AI refers to the process of making it easier for humans to understand how a given model generates its results, and of planning for cases when those results should be second-guessed.

AI developers need to incorporate explainability techniques into their workflows as part of their overall modeling operations. AI explainability can refer to the process of creating algorithms for teasing apart how black box models deliver results or the process of translating these results to different types of people. Data science managers working on explainable AI should keep tabs on the data used in models, strike a balance between accuracy and explainability, and focus on the end user.

Opening the black box

Traditional rule-based AI systems included explainability as part of the model, since humans typically handcrafted the rules mapping inputs to outputs. But deep learning techniques using semi-autonomous neural network models can’t show how a model’s results map to an intended goal.

Researchers are working to build learning algorithms that generate explainable AI systems from data. Currently, however, most of the dominant learning algorithms do not yield interpretable AI systems, said Ankur Taly, head of data science at Fiddler Labs, an explainable AI tools provider.

“This results in black box ML techniques, which may generate accurate AI systems, but it’s harder to trust them since we don’t know how these systems’ outputs are generated,” he said. 

AI explainability often describes post-hoc processes that attempt to explain the behavior of AI systems, rather than alter their structure. Other machine learning model properties like accuracy are straightforward to measure, but there are no corresponding simple metrics for explainability. Thus, the quality of an explanation or interpretation of an AI system needs to be assessed in an application-specific manner. It’s also important for practitioners to understand the assumptions and limitations of the techniques they use for implementing explainability.
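One widely used post-hoc technique of this kind is permutation importance: shuffle one feature at a time and measure how much the model’s error grows, treating the model as a black box. The sketch below uses a toy model and synthetic data invented for illustration, not any particular vendor’s tooling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "black box": depends strongly on feature 0 and ignores feature 1.
def model(X):
    return 3.0 * X[:, 0]

X = rng.normal(size=(500, 2))
y = model(X)  # ground truth generated by the toy model itself

def permutation_importance(model, X, y, n_repeats=10, rng=rng):
    """Mean increase in mean-squared error when each column is shuffled."""
    base_mse = np.mean((model(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break column j's signal
            importances[j] += np.mean((model(Xp) - y) ** 2) - base_mse
    return importances / n_repeats

imp = permutation_importance(model, X, y)
# Feature 0 drives the output, so its importance dwarfs feature 1's.
```

Because the technique only perturbs inputs and observes outputs, it applies to any model, which is exactly why understanding its assumptions (for example, that features are not strongly correlated) matters before trusting the numbers.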

“While it is better to have some transparency rather than none, we’ve seen teams fool themselves into a false sense of security by wiring an off-the-shelf technique without understanding how the technique works,” Taly said. 

Start with the data

The results of a machine learning model could be explained by the training data itself, or how a neural network interprets a dataset. Machine learning models often start with data labeled by humans. Data scientists can sometimes explain the way a model is behaving by looking at the data it was trained on.

“What a particular neural network derives from a dataset are patterns that it finds that may or may not be obvious to humans,” said Aaron Edell, director of applied AI at AI platform Veritone.

But it can be hard to understand what good data looks like. Biased training data can show up in a variety of ways. A machine learning model trained to identify sheep might be trained only on pictures of farms, causing it to miss sheep in other settings or to misinterpret white clouds in farm pictures as sheep. Facial recognition software can be trained on a company’s employee faces, but if those faces are mostly male or white, the data is biased.

One good practice is to train machine learning models on data that is indistinguishable from the data the model will be expected to run on. For example, a face recognition model that identifies how long Jennifer Aniston appears in each episode of Friends should be trained on frames from actual episodes rather than on Google image search results for ‘Jennifer Aniston.’ In a similar vein, it’s fine to train models on publicly available datasets, but generic pre-trained models as a service will be harder to explain and to change if necessary.
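A cheap way to sanity-check that practice is to compare summary statistics of a production feature against the training data before trusting the model or its explanations. The sketch below is a deliberately crude illustration; the tolerance and the synthetic data are invented, and real drift detection would use stronger tests.

```python
import numpy as np

def looks_like_training_data(train_col, live_col, tolerance=0.5):
    """Flag a feature whose live mean drifts far (in training standard
    deviations) from the training mean -- a crude train/serve check."""
    mu, sigma = train_col.mean(), train_col.std()
    return abs(live_col.mean() - mu) <= tolerance * sigma

rng = np.random.default_rng(1)
train = rng.normal(loc=0.0, scale=1.0, size=10_000)    # training feature
same = rng.normal(loc=0.0, scale=1.0, size=1_000)      # same distribution
shifted = rng.normal(loc=3.0, scale=1.0, size=1_000)   # drifted inputs

ok = looks_like_training_data(train, same)         # matches training data
drifted = looks_like_training_data(train, shifted) # should be flagged
```

When a feature fails a check like this, explanations built on the training distribution (like the sheep and farm example above) stop being trustworthy.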

Balancing explainability, accuracy and risk

The real problem with implementing explainability in AI is that there are major trade-offs between accuracy, transparency and risk in different types of AI models, said Matthew Nolan, senior director of decision sciences at Pegasystems. More opaque models may be more accurate but fail the explainability test. Other types of models, like decision trees and Bayesian networks, are more transparent but typically less powerful and less complex.

“These models are critical today as businesses deal with regulations such as GDPR that require explainability in AI-based systems, but meeting that requirement can sacrifice performance,” said Nolan.

Focusing on transparency can cost a business, but turning to more opaque models can leave a model unchecked and might expose the consumer, customer and the business to additional risks or breaches.

To address this gap, platform vendors are starting to embed transparency settings into their AI tool sets. This makes it easier for companies to adjust the opaqueness or transparency thresholds used in their AI models. It gives enterprises the control to tune models to their needs or to corporate governance policy, so they can manage risk, maintain regulatory compliance and give customers a differentiated experience in a responsible way.

Data scientists should also identify when the complexity of new models gets in the way of explainability. Yifei Huang, data science manager at sales engagement platform Outreach, said there are often simpler models available that attain the same performance, but machine learning practitioners tend to reach for fancier, more advanced models.

Focus on the user

Explainability means different things to a highly skilled data scientist compared to a call center worker who may need to make decisions based on an explanation. The task of implementing explainable AI is not just to foster trust in explanations but also help the end users make decisions, said Ankkur Teredesai, CTO and co-founder at KenSci, an AI healthcare platform.

Often data scientists make the mistake of thinking about explanations from the perspective of a computer scientist, when the end user is a domain expert who may need just enough information to make a decision. For a model that predicts the risk of a patient being readmitted, a physician may want an explanation of the underlying medical reasons, while a discharge planner may want to know the likelihood of readmission to plan accordingly.

Teredesai said there is still no general guideline for explainability, particularly for different types of users. It is also challenging to integrate these explanations into machine learning and end-user workflows. End users typically need explanations framed as possible actions to take based on a prediction, rather than just as reasons, and this requires striking the right balance between prediction fidelity and explanation fidelity.

There are a variety of tools for implementing explainability on top of machine learning models which generate visualizations and technical descriptions, but these can be difficult for end users to understand, said Jen Underwood, vice president of product management at Aible, an automated machine learning platform. Supplementing visualizations with natural language explanations is a way to partially bridge the data science literacy gap. Another good practice is to directly use humans in the loop to evaluate your explanations to see if they make sense to a human, said Daniel Fagnan, director of applied science on the Zillow Offers Analytics team. This can help lead to more accurate models through key improvements including model selection and feature engineering.
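At its simplest, a natural-language explanation can be templated from the top signed feature contributions of an interpretable model. The sketch below invents a tiny linear readmission-risk model (the feature names and weights are hypothetical, echoing the physician example above) and renders its largest contributions as sentences.

```python
def explain_prediction(weights, feature_values, feature_names, top_k=2):
    """Render the top-k signed feature contributions of a linear model
    as plain-English sentences a domain expert can read."""
    contributions = [
        (name, w * x)
        for name, w, x in zip(feature_names, weights, feature_values)
    ]
    # Largest absolute contribution first.
    contributions.sort(key=lambda item: abs(item[1]), reverse=True)
    lines = []
    for name, c in contributions[:top_k]:
        direction = "raises" if c > 0 else "lowers"
        lines.append(f"{name} {direction} the predicted risk by {abs(c):.2f}.")
    return " ".join(lines)

# Hypothetical readmission-risk model with three features.
text = explain_prediction(
    weights=[0.8, -0.5, 0.1],
    feature_values=[2.0, 1.0, 3.0],
    feature_names=["prior admissions", "days since discharge", "age decile"],
)
```

Even a template this simple changes the audience for an explanation from data scientists reading importance charts to domain experts reading sentences, which is the gap the natural-language approach aims to bridge.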

KPIs for AI risks

Enterprises should consider the specific reasons that explainable AI is important to them when deciding how to measure explainability and accessibility. Teams should first and foremost establish a set of criteria for key AI risks, including robustness, data privacy, bias, fairness, explainability and compliance, said Dr. Joydeep Ghosh, chief scientific officer at AI vendor CognitiveScale. It is also useful to generate metrics appropriate to each key stakeholder’s needs.

External organizations like AI Global can help establish measurement targets that determine acceptable operating values. AI Global is a nonprofit that has established the AI Trust Index, a scoring benchmark for explainable AI similar to a FICO score. This enables firms not only to establish their own best practices, but also to compare themselves against industry benchmarks.


Vendors are starting to automate this process with tools for automatically scoring, measuring and reporting on risk factors across the AI operations lifecycle based on the AI Trust Index. Although the tools for explainable AI are getting better, the technology is at an early research stage with proof-of-concept prototypes, cautioned Mark Stefik, a research fellow at PARC, a Xerox Company. There are substantial technology risks and gaps in machine learning and in AI explanations, depending on the application.

“When someone offers you a silver bullet explainable AI technology or solution, check whether you can have a common-grounded conversation with the AI that goes deep and scales to the needs of the application,” Stefik said.

Go to Original Article

Azure AD + F5—helping you secure all your applications


Howdy folks,

We often hear from our customers about the complexities around providing seamless and secure user access to their applications—from cloud SaaS applications to legacy on-premises applications. Based on your feedback, we’ve worked to securely connect any app, on any cloud or server—through a variety of methods. And today, I’m thrilled to announce our deep integration with F5 Networks that simplifies secure access to your legacy applications that use protocols like header-based and Kerberos authentication.

By centralizing access to all your applications, you can leverage all the benefits that Azure AD offers. Through the F5 and Azure AD integration, you can now protect your legacy-auth based applications by applying Azure AD Conditional Access policies to leverage our Identity Protection engine to detect user risk and sign-in risk, as well as manage and monitor access through our identity governance capabilities. Your users can also gain single sign-on (SSO) and use passwordless authentication to these legacy-auth based applications.
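For context on what “header-based authentication” means on the legacy-app side: the application trusts identity headers injected by the proxy in front of it (here BIG-IP, after Azure AD has authenticated the user) instead of running its own sign-in. The sketch below is illustrative only; the header names are invented, not a documented F5 or Azure AD contract, and a real deployment must also guarantee these headers can only originate from the trusted proxy.

```python
def user_from_headers(headers):
    """Extract the proxy-injected identity for a legacy header-based app.

    Assumes the front-end proxy strips any client-supplied copies of
    these headers and injects values only after authentication.
    """
    user = headers.get("X-Authenticated-User")
    if not user:
        raise PermissionError("request did not pass through the auth proxy")
    groups = headers.get("X-Authenticated-Groups", "")
    return {"user": user, "groups": [g for g in groups.split(";") if g]}

# Example request headers as the app would see them behind the proxy.
identity = user_from_headers(
    {"X-Authenticated-User": "alice@contoso.com",
     "X-Authenticated-Groups": "finance;admins"}
)
```

Because the app never sees passwords or tokens, policies like Conditional Access and passwordless sign-in can be enforced entirely at the Azure AD and proxy layer without modifying the legacy code.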

To help you get started, we made it easier to publish these legacy-auth based applications by making the F5 BIG-IP Access Policy Manager (APM) available in the Azure AD app gallery. You can learn how to configure your legacy-auth based applications by reviewing our documentation below based on the app type and scenario:

As always, let us know your feedback, thoughts, and suggestions in the comments below, so we can continue to build capabilities that help you securely connect any app, on any cloud, for every user.

Best regards,

Alex Simons (@Alex_A_Simons)

Corporate VP of Program Management

Microsoft Identity Division

Go to Original Article
Author: Microsoft News Center