
Kasten backup aims for secure Kubernetes protection

People often talk about Kubernetes “Day 1,” when you get the platform up and running. Now Kasten wants to help with “Day 2.”

Kasten’s K10 is a data management and backup platform for Kubernetes. The latest release, K10 2.0, focuses on security and simplicity.

K10 2.0 includes support for Kubernetes authentication, role-based access control, OpenID Connect, AWS Identity and Access Management roles, customer-managed keys, and integrated encryption of artifacts at rest and in flight.

“Once you put data into storage, the Day 2 operations are critical,” said Krishnan Subramanian, chief research advisor at Rishidot Research. “Day 2 is as critical as Day 1.”

Day 2 — which includes data protection, mobility, backup and restore, and disaster recovery — is becoming a pain point for Kubernetes users, Kasten CEO Niraj Tolia said.

“In 2.0, we are focused on making Kubernetes backup easy and secure,” Tolia said.

Other features of the new Kasten backup software, which became generally available earlier in November, include a Kubernetes-native API, auto-discovery of the application environment, policy-driven operations, multi-tenancy support, and advanced logging and monitoring. Kasten backup lets operations teams run their environments while supporting developers’ ability to use the tools of their choice, according to the vendor.

Kasten K10 provides data management and backup for Kubernetes.
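Kasten hasn’t published its policy schema in this article, so the snippet below is only a rough sketch of what policy-driven, Kubernetes-native backup configuration tends to look like. It uses the official Kubernetes Python client to submit a hypothetical backup-policy custom resource; the API group, kind and spec fields are invented placeholders, not K10’s actual API.

# Illustrative sketch only: the CRD group/version/kind and the policy fields are
# hypothetical placeholders, not Kasten K10's actual API.
from kubernetes import client, config

def create_backup_policy(namespace: str, app_name: str) -> dict:
    """Submit a hypothetical backup-policy custom resource for one application."""
    config.load_kube_config()  # or config.load_incluster_config() inside a cluster
    api = client.CustomObjectsApi()

    policy = {
        "apiVersion": "backup.example.io/v1alpha1",  # hypothetical group/version
        "kind": "BackupPolicy",                      # hypothetical kind
        "metadata": {"name": f"{app_name}-hourly", "namespace": namespace},
        "spec": {
            "selector": {"matchLabels": {"app": app_name}},  # scope discovered per app
            "frequency": "@hourly",
            "retention": {"hourly": 24, "daily": 7},
            "encryption": {"atRest": True, "inFlight": True},
        },
    }
    return api.create_namespaced_custom_object(
        group="backup.example.io", version="v1alpha1",
        namespace=namespace, plural="backuppolicies", body=policy,
    )

The point of the pattern is that backup intent lives in the cluster as declarative objects, so the same RBAC and tooling that govern applications also govern their protection.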

Kasten backup eyes market opportunity

Kasten, which launched its original product in December 2017, generally releases an update to its customers every two weeks. An update that isn’t as major as 2.0 typically includes bug fixes, new features and added depth to existing features. Tolia said there were 55 releases between 1.0 and 2.0.

Day 2 is as critical as Day 1.
Krishnan Subramanian, founder and chief research advisor, Rishidot Research

Backup for container storage has become a hot trend in data protection. Kubernetes specifically is an open source system used to manage containers across private, public and hybrid cloud environments. Kubernetes can be used to manage microservice architectures and is deployable on most cloud providers.

“Everyone’s waking up to the fact that this is going to be the next VMware,” as in, the next infrastructure of choice, Tolia said.

Kubernetes backup products are popping up, but it looks like Kasten is a bit ahead of its time, Rishidot’s Subramanian said. He said he is seeing more enterprises using Kubernetes in production, for example, in moving legacy workloads to the platform, and that makes backup a critical element.

“Kubernetes is just starting to take off,” Subramanian said.

Kubernetes backup “has really taken off in the last two or three quarters,” Tolia said.

Subramanian said he is starting to see legacy vendors such as Dell EMC and NetApp tackling Kubernetes backup, as well as smaller vendors such as Portworx and Robin. He said Kasten had needed stronger security but caught up with K10 2.0. Down the road, he said he will look for Kasten to improve its governance and analytics.

Tolia said Kasten backup stands out because it’s “purpose-built for Kubernetes” and extends into multilayered data management.

In August, Kasten, which is based in Los Altos, Calif., closed a $14 million Series A funding round, led by Insight Partners. Tolia did not give Kasten’s customer count but said it has deployments across multiple continents.


How to achieve explainability in AI models

When machine learning models deliver problematic results, it often happens in ways that humans can’t make sense of, and that becomes dangerous when the model’s limitations aren’t understood, particularly for high-stakes decisions. Without straightforward, simple tools that highlight explainability in AI models, organizations will continue to struggle to implement AI algorithms. Explainable AI refers to the process of making it easier for humans to understand how a given model generates its results and of planning for cases when those results should be second-guessed.

AI developers need to incorporate explainability techniques into their workflows as part of their overall modeling operations. AI explainability can refer to the process of creating algorithms for teasing apart how black box models deliver results or the process of translating these results to different types of people. Data science managers working on explainable AI should keep tabs on the data used in models, strike a balance between accuracy and explainability, and focus on the end user.

Opening the black box

Traditional rule-based AI systems built explainability into the model itself, since humans typically handcrafted the rules that mapped inputs to outputs. But deep learning techniques that use semi-autonomous neural network models can’t show how a model’s results map to an intended goal.

Researchers are working to build learning algorithms that generate explainable AI systems from data. Currently, however, most of the dominant learning algorithms do not yield interpretable AI systems, said Ankur Taly, head of data science at Fiddler Labs, an explainable AI tools provider.

“This results in black box ML techniques, which may generate accurate AI systems, but it’s harder to trust them since we don’t know how these systems’ outputs are generated,” he said. 

AI explainability often describes post-hoc processes that attempt to explain the behavior of AI systems, rather than alter their structure. Other machine learning model properties like accuracy are straightforward to measure, but there are no corresponding simple metrics for explainability. Thus, the quality of an explanation or interpretation of an AI system needs to be assessed in an application-specific manner. It’s also important for practitioners to understand the assumptions and limitations of the techniques they use for implementing explainability.

“While it is better to have some transparency rather than none, we’ve seen teams fool themselves into a false sense of security by wiring an off-the-shelf technique without understanding how the technique works,” Taly said. 
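As a concrete illustration of one common post-hoc technique (a sketch only, not any vendor’s product or the specific method Taly’s team uses), the snippet below applies scikit-learn’s permutation importance to a black-box classifier trained on a stand-in dataset. It also illustrates Taly’s caveat: permutation importance has assumptions of its own and can mislead when features are strongly correlated.

# Minimal post-hoc explainability sketch: permutation importance on a black-box model.
# The dataset and model are illustrative stand-ins, not a production setup.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops;
# a large drop suggests the model leans heavily on that feature. Note the
# assumption: correlated features can share (and hide) importance.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])
for name, drop in ranked[:5]:
    print(f"{name}: mean accuracy drop {drop:.3f}")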

Start with the data

The results of a machine learning model could be explained by the training data itself, or how a neural network interprets a dataset. Machine learning models often start with data labeled by humans. Data scientists can sometimes explain the way a model is behaving by looking at the data it was trained on.

“What a particular neural network derives from a dataset are patterns that it finds that may or may not be obvious to humans,” said Aaron Edell, director of applied AI at AI platform Veritone.

But it can be hard to understand what good data looks like. Biased training data can show up in a variety of ways. A machine learning model trained to identify sheep might learn only from pictures of farms, causing it to miss sheep in other settings or to mistake white clouds in farm pictures for sheep. Facial recognition software can be trained on the faces of a company’s employees, but if those faces are mostly male or white, the data is biased.

One good practice is to train machine learning models on data that is indistinguishable from the data the model will be expected to run on. For example, a face recognition model that identifies how long Jennifer Aniston appears in every episode of Friends should be trained on frames from actual episodes rather than on Google image search results for ‘Jennifer Aniston.’ In a similar vein, it’s fine to train models on publicly available datasets, but generic pre-trained models offered as a service will be harder to explain and to change if necessary.
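One lightweight way to act on that advice is to compare the training set against a sample of real production data before trusting the model or its explanations. The sketch below is a generic drift check under assumed inputs (the file names, columns and 0.05 threshold are placeholders): it flags numeric features whose training and production distributions have diverged.

# Illustrative drift check: compare each numeric feature's training distribution
# against a sample of production data. Thresholds and file names are placeholders.
import pandas as pd
from scipy.stats import ks_2samp

def flag_drifted_features(train_df: pd.DataFrame, prod_df: pd.DataFrame, alpha: float = 0.05):
    """Return numeric features whose train vs. production distributions differ."""
    drifted = []
    for col in train_df.select_dtypes(include="number").columns:
        if col not in prod_df.columns:
            continue
        test = ks_2samp(train_df[col].dropna(), prod_df[col].dropna())  # two-sample KS test
        if test.pvalue < alpha:
            drifted.append((col, test.statistic, test.pvalue))
    return drifted

# Hypothetical usage:
# train = pd.read_csv("training_frames.csv")
# prod = pd.read_csv("production_frames.csv")
# for col, stat, p in flag_drifted_features(train, prod):
#     print(f"{col}: KS={stat:.2f}, p={p:.4f} -- training data may not match production")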

Balancing explainability, accuracy and risk

The real problem with implementing explainability in AI is that there are major trade-offs between accuracy, transparency and risk in different types of AI models, said Matthew Nolan, senior director of decision sciences at Pegasystems. More opaque models may be more accurate, but fail the explainability test. Other types of models like decision trees and Bayesian networks are considered more transparent but are less powerful and complex.

“These models are critical today as businesses deal with regulations such as GDPR that require explainability in AI-based systems, but this sometimes will sacrifice performance,” said Nolan.

Focusing on transparency can cost a business, but turning to more opaque models can leave a model unchecked and might expose the consumer, customer and the business to additional risks or breaches.

To address this gap, platform vendors are starting to embed transparency settings into their AI tool sets. This can make it easier for companies to adjust the acceptable level of opaqueness or the transparency thresholds used in their AI models. It also gives enterprises the control to tune models to their needs or to corporate governance policy, so they can manage risk, maintain regulatory compliance and give customers a differentiated experience in a responsible way.

Data scientists should also identify when the complexity of new models is getting in the way of explainability. Yifei Huang, data science manager at sales engagement platform Outreach, said there are often simpler models available that attain the same performance, but machine learning practitioners have a tendency to reach for fancier, more advanced models.
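A quick way to test Huang’s point on a given problem is to benchmark an interpretable model against a more opaque one before committing to the black box. The sketch below (stand-in dataset and arbitrary hyperparameters) compares a shallow decision tree with a gradient-boosted ensemble; if the accuracy gap is small, the transparent model may be the safer choice for regulated, high-stakes decisions.

# Sketch: check how much accuracy a transparent model gives up versus an opaque one.
# Dataset and hyperparameters are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "decision tree, depth 3 (easy to inspect)": DecisionTreeClassifier(max_depth=3, random_state=0),
    "gradient boosting (harder to explain)": GradientBoostingClassifier(random_state=0),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")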

Focus on the user

Explainability means different things to a highly skilled data scientist than to a call center worker who may need to make decisions based on an explanation. The task of implementing explainable AI is not just to foster trust in explanations but also to help end users make decisions, said Ankkur Teredesai, CTO and co-founder at KenSci, an AI healthcare platform.

Often data scientists make the mistake of thinking about explanations from the perspective of a computer scientist, when the end user is a domain expert who may need just enough information to make a decision. For a model that predicts the risk of a patient being readmitted, a physician may want an explanation of the underlying medical reasons, while a discharge planner may want to know the likelihood of readmission to plan accordingly.

Teredesai said there is still no general guideline for explainability, particularly for different types of users. It’s also challenging to integrate these explanations into machine learning and end-user workflows. End users typically need explanations framed as possible actions to take based on a prediction, rather than just as reasons, and this requires striking the right balance between prediction fidelity and explanation fidelity.

There are a variety of tools for implementing explainability on top of machine learning models that generate visualizations and technical descriptions, but these can be difficult for end users to understand, said Jen Underwood, vice president of product management at Aible, an automated machine learning platform. Supplementing visualizations with natural language explanations is a way to partially bridge the data science literacy gap.

Another good practice is to directly use humans in the loop to evaluate explanations and see whether they make sense to a person, said Daniel Fagnan, director of applied science on the Zillow Offers Analytics team. This can lead to more accurate models through key improvements, including model selection and feature engineering.
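As a toy illustration of Underwood’s suggestion (not Aible’s product behavior), the snippet below turns per-prediction feature attributions into short plain-language statements a non-specialist could read. The attribution numbers are assumed to come from whatever method the team already uses; the feature names and values here are invented.

# Toy sketch: translate per-prediction feature attributions into plain language
# for an end user, such as a discharge planner. The attribution values would come
# from an upstream method; the names and numbers below are invented.
from typing import Dict

def narrate_prediction(risk_score: float, contributions: Dict[str, float], top_n: int = 3) -> str:
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    lines = [f"Predicted readmission risk: {risk_score:.0%}. Main factors:"]
    for feature, weight in ranked:
        direction = "raises" if weight > 0 else "lowers"
        lines.append(f"- {feature.replace('_', ' ')} {direction} the risk estimate.")
    return "\n".join(lines)

print(narrate_prediction(
    risk_score=0.42,
    contributions={"prior_admissions": 0.18, "medication_count": 0.09, "age": -0.04},
))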

KPIs for AI risks

Enterprises should consider the specific reasons that explainable AI is important to them when deciding how to measure explainability and accessibility. Teams should first and foremost establish a set of criteria for key AI risks, including robustness, data privacy, bias, fairness, explainability and compliance, said Dr. Joydeep Ghosh, chief scientific officer at AI vendor CognitiveScale. It’s also useful to generate metrics for key stakeholders that are relevant to their needs.

External organizations like AI Global can help establish measurement targets that determine acceptable operating values. AI Global is a nonprofit organization that has established the AI Trust Index, a scoring benchmark for explainable AI that works like a FICO score. This enables firms not only to establish their own best practices, but also to compare themselves against industry benchmarks.

When someone offers you a silver bullet explainable AI technology or solution, check whether you can have a common-grounded conversation with the AI that goes deep and scales to the needs of the application.
Mark Stefik, research fellow, PARC, a Xerox Company

Vendors are starting to automate this process with tools for automatically scoring, measuring and reporting on risk factors across the AI operations lifecycle based on the AI Trust Index. Although the tools for explainable AI are getting better, the technology is at an early research stage with proof-of-concept prototypes, cautioned Mark Stefik, a research fellow at PARC, a Xerox Company. There are substantial technology risks and gaps in machine learning and in AI explanations, depending on the application.

“When someone offers you a silver bullet explainable AI technology or solution, check whether you can have a common-grounded conversation with the AI that goes deep and scales to the needs of the application,” Stefik said.


Insurance company uses Cohesity backup to wean off tape

Change is hard, especially when it comes to moving from legacy backup to modern methods.

Insurance company Illinois Mutual, based in Peoria, Ill., adopted cloud-based Cohesity backup and has been transitioning off its old, tape-based system. The switch seemed like it should’ve been an easy decision, but Shawn Gifford, senior infrastructure administrator and infrastructure team lead at Illinois Mutual, said the company was resistant to change when he joined four years ago.

“When I came to the company, the cloud was regarded as somewhat of a bad word,” Gifford said. He was hired to change that culture.

Gifford had experience in cloud consulting, and he had seen his share of failed cloud endeavors before his time at Illinois Mutual. He said the key to both avoiding failure and convincing people to buy in is a detailed cost-benefit analysis.

Gifford calculated a lower total cost of ownership and other cost savings with Cohesity backup, but also went beyond raw financials.

He helped build his case by emphasizing that the switch would reduce the time staff have to spend working with backups, as well as lower backup and restore times. Other selling points were that Cohesity puts deduplication and encryption in the same layer and consolidates hardware and software.


Illinois Mutual has a staff of around 200 at its Peoria headquarters and about 60 sales representatives working remotely. Its main data center is in Peoria, but the company also replicates to a colocation site in Bloomington, Ill. In total, the company is protecting 130 TB of data. Much of that data is important insurance information with a retention period of 25 years.

Gifford said the toughest part of modernizing Illinois Mutual’s backup wasn’t finding a vendor with the right technology or working out a cost strategy. For him, the biggest challenge was cutting through a company culture that was set in its ways. Much of IT was used to maintaining status quo rather than examining what processes could be improved.

“Instead of a swear jar, we have a, ‘That’s the way it’s always been’ jar,” Gifford said of the company’s resistance to change.

Instead of a swear jar, we have a, ‘That’s the way it’s always been’ jar.
Shawn Gifford, senior infrastructure administrator and infrastructure team lead, Illinois Mutual

Illinois Mutual adopted Cohesity backup in October 2018 following a 60-day proof of concept. Gifford said he spends about half as much time on backup administration as he did before the switch, and called the period before Cohesity “painful years.” Not only did backup processes take longer with tape, but he estimated a full restore after a major outage could have taken as long as a month.

To illustrate how much of an improvement the new backup system is, Gifford told the story of a mishap during a backup restoration test. During the test, a new database administrator made an error and sent backup jobs to a remote location. Even though the system was restoring from that remote site instead of locally, the test took less than 30 minutes, far less time than Gifford said it would have taken with the previous setup.

When shopping for a new backup vendor to replace its Commvault and NetApp combination, Illinois Mutual also looked at Rubrik and Veeam. Gifford needed a product that could support tape while he transitioned off it.

“We were so entrenched in tape. We couldn’t just switch data protection and roll off tape immediately,” Gifford said.

Illinois Mutual is also entrenched in Microsoft products, using its Hyper-V hypervisor over VMware. This created an additional challenge for Gifford. He ultimately chose Cohesity backup because he found few vendors supported both Hyper-V and tape.

The insurance company’s Microsoft investment continues to grow, as it is currently in the middle of migrating from Skype for Business to Microsoft Teams. Gifford said Cohesity’s Office 365 backup capabilities were also selling points for him, and he’s looking forward to the vendor providing Teams support in the future.

Gifford said Illinois Mutual has always had a deep partnership with Microsoft and its products, then immediately noted he owed money to the jar. But at this point, the company is so invested in Hyper-V and other Microsoft products that there wasn’t a good reason to switch out. Going through a single vendor for multiple IT needs just makes more sense.

“Not all their products are perfect, of course, but we’re not decreasing our [Microsoft] footprint,” Gifford said.


‘No matter where we are in the world, this will always be our home.’

Adonis grew up at Taos Pueblo, but his parents sent him to school in Tucson when he was transitioning from middle school to high school. Every summer, he’d return to Taos Pueblo and fall in love with the land more and more.

But spending the school year elsewhere gave him a different perspective when he returned. He wanted to do something with the skills he was gaining every year to help Taos Pueblo keep growing and thriving. This desire eventually solidified into his life’s purpose: to serve indigenous communities. What wasn’t as clear was how he’d do that.

His path led him to graduate school to study business, with the idea that a business degree would provide many useful tools and bring multiple opportunities to fulfill this purpose.

A side-by-side photo of a boy playing outside and playing baseball

During his time in grad school, he worked on a project for Microsoft and was introduced to one of the project sponsors, Mike Miles. Mike was just putting together a new team at the company, now the Talent Workforce and Community Development Team, whose goal was to build and nurture relationships with the local communities that host Microsoft’s datacenters. Adonis took interest in the project because he saw a genuine interest within the company in creating meaningful, long-term impact, something that at the time ran counter to his view of corporate America.

Over a few months of working with Mike, who Adonis describes as a visionary and authentic leader, Adonis began to soften to the idea of corporate work, especially if it meant he had a way in to develop relationships with local and potentially native peoples. A year later, he started working on Mike’s team.

In addition, Adonis has taken a voluntary leadership role to organize with a group of indigenous employees who work at Microsoft to create their own official employee resource group inside the company.

“I see myself building on top of all of this work to continue to serve indigenous peoples,” he says.


Google-Ascension deal reveals murky side of sharing health data

One of the largest nonprofit health systems in the U.S. created headlines when it was revealed that it was sharing patient data with Google — under codename Project Nightingale.

Ascension, a Catholic health system based in St. Louis, partnered with Google to transition the health system’s infrastructure to the Google Cloud Platform, to use the Google G Suite productivity and collaboration tools, and to explore the tech giant’s artificial intelligence and machine learning applications. By doing so, it is giving Google access to patient data, which the search giant can use to inform its own products.

The partnership appears to be technically and legally sound, according to experts. After news broke, Ascension released a statement saying the partnership is HIPAA-compliant and a business associate agreement, a contract required by the federal government that spells out each party’s responsibility for protected health information, is in place. Yet reports from The Wall Street Journal and The Guardian about the possible improper transfer of 50 million patients’ data has resulted in an Office for Civil Rights inquiry into the Google-Ascension partnership.

Legality aside, the resounding reaction to the partnership speaks to a lack of transparency in healthcare. Organizations should see the response as both an example of what not to do, as well as a call to make patients more aware of how they’re using health data, especially as consumer companies known for collecting and using data for profit become their partners.

Partnership breeds legal, ethical concerns

Forrester Research senior analyst Jeff Becker said Google entered into a similar strategic partnership with Mayo Clinic in September, and the coverage was largely positive.


According to a Mayo Clinic news release, the nonprofit academic medical center based in Rochester, Minn., selected Google Cloud to be “the cornerstone of its digital transformation,” and the clinic would use “advanced cloud computing, data analytics, machine learning and artificial intelligence” to improve healthcare delivery.

But Ascension wasn’t as forthcoming with its Google partnership. It was Google that announced its work with Ascension during a quarterly earnings call in July, and Ascension didn’t issue a news release about the partnership until after the news broke.

“There should have been a public-facing announcement of the partnership,” Becker said. “This was a PR failure. Secrecy creates distrust.”

Matthew Fisher, partner at Mirick O’Connell Attorneys at Law and chairman of its health law group, said the outcry over the Google-Ascension partnership was surprising. For years, tech companies have been trying to get access to patient data to help healthcare organizations and, at the same time, develop or refine their existing products, he said.

“I get the sense that just because it was Google that was announced to have been a partner, that’s what drove a lot of the attention,” he said. “Everyone knows Google mostly for purposes outside of healthcare, which leads to the concern of does Google understand the regulatory obligations and restrictions that come to bear by entering the healthcare space?”

Ascension’s statement in response to the situation said the partnership with Google is covered by a business associate agreement — a distinction Fisher said is “absolutely required” before any protected health information can be shared with Google. Parties in a business associate agreement are obligated by federal regulation to comply with the applicable portions of HIPAA, such as its security and privacy rules.

A business associate relationship allows identifiable patient information to be shared and used by Google only under specified circumstances. It is the legal basis for keeping patient data segregated and restricting Google from freely using that data. According to Ascension, the health system’s clinical data is housed within an Ascension-owned virtual private space in Google Cloud, and Google isn’t allowed to use the data for marketing or research.

“Our data will always be separate from Google’s consumer data, and it will never be used by Google for purposes such as targeting consumers for advertising,” the statement said.


But health IT and information security expert Kate Borten believes business associate agreements and the HIPAA privacy rule they adhere to don’t go far enough to ensure patient privacy rights, especially when companies like Google get involved. The HIPAA privacy rule doesn’t require healthcare organizations to disclose to patients who they’re sharing patient data with.

“The privacy rule says as long as you have this business associate contract — and business associates are defined by HIPAA very broadly — then the healthcare provider organization or insurer doesn’t have to tell the plan members or the patients about all these business associates who now have access to your data,” she said.

Chilmark Research senior analyst Jody Ranck said much of the alarm over the Google-Ascension partnership may be misplaced, but it speaks to a growing concern about companies like Google entering healthcare.

Since the Office for Civil Rights is looking into the partnership, Ranck said there is still a question of whether the partnership fully complies with the law. But the bigger question has to do with privacy and security concerns around collecting and using patient data, as well as companies like Google using patient data to train AI algorithms and the potential biases it could create.

All of this starts to feel like a bit of an algorithmic iron cage.
Jody Ranck, senior analyst, Chilmark Research

Ranck believes consumer trust in tech companies is declining, especially as data privacy concerns get more play.

“Now that they know everything you purchase and they can listen in to that Alexa sitting beside your bed at night, and now they’re going to get access to health data … what’s a consumer to do? Where’s their power to control their destiny when algorithms are being used to assign you as a high-, medium-, or low-risk individual, as creditworthy?” Ranck said. “All of this starts to feel like a bit of an algorithmic iron cage.”

A call for more transparency

Partnerships between healthcare organizations and big tech companies such as Google, Amazon, Apple and Microsoft are growing. Like other industries, healthcare organizations are looking to modernize their infrastructure and take advantage of state-of-the-art storage, security and data analytics tools, as well as emerging tech like artificial intelligence.

But for healthcare organizations, partnerships like these have an added complexity — truly sensitive data. Forrester’s Becker said the mistake in the Google-Ascension partnership was the lack of transparency. There was no press release early on announcing the partnership, laying out what information is being shared, how the information will be used, and what outcome improvements the healthcare organization hopes to achieve.

“There should also be assurance that the partnership falls within HIPAA and that data will not be used for advertising or other commercial activities unrelated to the healthcare ambitions stated,” he said.

Fisher believes the Google-Ascension partnership raises questions about what the legal, moral and ethical aspects of these relationships are. While Ascension and Google may have been legally in the right, Fisher believes it’s important to recognize that privacy expectations are shifting, which calls for better consumer education, as well as more transparency around where and how data is being used.

Although he believes it would be “unduly burdensome” to require a healthcare organization to name every organization it shares data with, Fisher said better education on how HIPAA operates and what it allows when it comes to data sharing, as well as explaining how patient data will be protected when shared with a company like Google, could go a long way in helping patients understand what’s happening with their data.

“If you’re going to be contracting with one of these big-name companies that everyone has generalized concerns about with how they utilize data, you need to be ahead of the game,” Fisher said. “Even if you’re doing everything right from a legal standpoint, there’s still going to be a PR side to it. That’s really the practical reality of doing business. You want to be taking as many measures as you can to avoid the public backlash and having to be on the defensive by having the relationship found out and reported upon or discussed without trying to drive that discussion.”


Contact center agent experience needs massive overhaul

Gone are the days when it was acceptable to have turnover rates greater than 40% among contact center agents.

Leading organizations are revamping the contact center agent experience to improve business metrics such as operational costs, revenue and customer ratings, and a targeted agent program keeps companies at a competitive advantage, according to the Nemertes 2019-20 Intelligent Customer Engagement research study of 518 organizations.

The problems

CX leaders participating in the research pointed to several issues responsible for a failing contact center agent experience:

  • Low pay. In some organizations it’s at minimum wage, despite requirements for bachelor’s degrees and/or experience.
  • Dead-end job. Organizations typically do not have a growth path for agents. They expect them to last 18 months to two years, and there always will be a revolving door of agents coming and going.
  • Lack of customer context. Agents find it difficult to take pride in their work when they don’t have the right tools. Without CRM integrations, AI assistance and insightful agent desktops, it is difficult to delight customers.
  • Cranky customers. Agents also find it difficult to regularly interact with dissatisfied customers. With a better work environment, more interaction channels, better training, more analytics and context, they could change those attitudes.
  • No coaching. Because supervisors are busy interviewing and hiring to keep backfilling the agents who are leaving, they rarely have time to coach the agents they have. What’s more, they don’t have the analytics tools — from contact center vendors such as Avaya, Cisco, Five9, Genesys and RingCentral, or from pure-play tools such as Clarabridge, Medallia and MaritzCX — to provide performance insight.

The enlightenment

Those in the contact center know this has been status quo for decades, but that is starting to change.

One of the big change drivers is the addition of a chief customer officer (CCO). Today, 37% of organizations have a CCO, up from 25% last year. The CCO is an executive-level individual with ultimate responsibility for all customer-facing activities and strategy to maximize customer acquisition, retention and satisfaction.

The CCO has budget, staff and the attention of the entire C-suite. As a result, high agent turnover rates are no longer flying under the radar. Once CCOs bring the issue to CEOs and CFOs, those executives are investing resources in turning the turnover rates around.

Additionally, organizations value contact centers more today, with 61% of research participants saying the company views the contact center as a “value center” versus a “cost center.” Four years ago, that figure was reversed, with two-thirds viewing the contact center as a cost center.


Companies are adding more outbound contact centers, targeting sales or proactive customer engagement — such as customer check-ups, loyalty program invitations and discount offers — and they are supporting new products and services. This helps to explain why, despite the growth in self-service and AI-enabled digital channels, 44% of companies actually increased the number of agents in 2019, compared to 13% who decreased, 40% who were flat and 3% unsure.

The solution

Research shows there are five common changes organizations are now making to improve the contact center agent experience and reduce the turnover rate — now at 21%, down from 38% in 2016. These changes include:

  • Improved compensation plan. Nearly 47% of companies are increasing agent compensation, compared to the 7% decreasing it. The increase ranges from 22% to 28%. Average agent compensation is $49,404, with projected increases up to $60,272, minimally, by the end of 2020.
  • Investment in agent analytics. About 24% of companies are using agent analytics today, with another 20.2% planning to use the tools by 2021. Agent analytics provides data on performance to help with coaching and improvement, in addition to delivering real-time screen pops to help agents on the spot during interactions with customers. Those using analytics see a 52.6% improvement in revenue and a 22.7% decrease in operational costs.
  • Increases in coaching. By delivering data from analytics tools, supervisors have a better picture of areas of success and those that need improvement. By using a product such as Intradiem Contact Center RPA, they can automate the scheduling of training and coaching during idle times.
  • Addition of gamification. Agents are inspired by programs that inject friendly competition, awarding badges for bragging rights, weekly gift cards for top performance and monthly cash bonuses. Such rewards improve agents’ loyalty to the company and reduce turnover.
  • Development of career path. Successful companies are developing a solid career path, with advancement into marketing, product development and supervisory roles in the contact center or in CX apps/analysis.

Developing a solid game plan that provides agents with the compensation, support and career path they deserve will drastically reduce turnover rates. In a drastic example, one consumer goods manufacturing company reduced agent turnover from 88% to 2% with a program that addressed the aforementioned issues. More typically, companies are seeing 5% to 15% reductions in their turnover rates one year after developing such a plan.


NetApp storage systems growth hinges on cloud transition

NetApp appears to be at a crossroads. Similar to the spot it was in when CEO Tom Georgens was fired in 2015, NetApp is searching for a doorway that leads to positive territory in 2020.

The key to unlocking that door: all-flash NetApp storage systems, coupled with hybrid cloud data management. New CEO George Kurian pivoted NetApp storage hardware to support multi-cloud adoption, and the move fueled a string of strong growth quarters, even as external networked storage sales declined industrywide.

But after closing the gap with market leaders Dell EMC and Hewlett Packard Enterprise, NetApp has given back most of the gains in 2019. The revenue slump continued last quarter.

NetApp Wednesday reported net revenue of $1.37 billion for the quarter that ended Oct. 25, a shade below Wall Street’s consensus of $1.38 billion. NetApp said $20 million of quarterly net revenue from software licenses from a year ago did not recur this quarter, contributing to the miss.

For its full fiscal year, NetApp projects revenue of $5.5 billion, or 8% lower than its initial forecast.

A surge in flash and cloud sales last quarter helped to soften the blow. NetApp’s all-flash revenue jumped 29% last quarter, driven mostly by sales of its All Flash FAS (AFF) systems and NetApp HCI with SolidFire all-flash storage. That’s a turnaround from the previous quarter, when all-flash sales fell 24%.

NetApp said its cloud products and services generated $300 million, including sales of NetApp HCI, SolidFire all-flash and StorageGrid object storage. NetApp said annual recurring revenue from cloud data services was $72 million, up 167%.

NetApp CFO Ron Pasek said enterprises increasingly look to NetApp storage systems to underpin hybrid cloud infrastructure.

“We complete the hybrid multi-cloud picture for customers and people are willing to pay for that,” Pasek said in an interview prior to the earnings call.

NetApp customers voice opinions

NetApp has work to do to remake itself as a cloud software company in a crowded field, said Steve McDowell, an analyst at Moor Insights & Strategy, based in Austin, Texas. He said NetApp isn’t selling much storage to new customers and is losing share, despite good technology and innovations such as NetApp Data Fabric.

“NetApp is making a bit of a bet that its cloud revenues will grow enough to offset declining array sales. That market is still very young. There’s enough momentum in the market that they aren’t going out of business, but will it carry them forward as they transition to the new cloud model? That’s the question,” McDowell said.

Customers said NetApp has made strides in recent years, after missing the mark early on all-flash and hyper-converged infrastructure. The emphasis on cloud data mobility makes NetApp “more of a technology company than a storage company,” said Collin Mariner, vice president of data center operations at digital home repair company HomeAdvisor.

“You don’t see a lot of storage companies that are trying to help you get to the cloud. And that’s where Data Fabric comes in,” Mariner said.

The latest extension to Data Fabric is NetApp Keystone consumption pricing, which gives users a variety of deployment options to purchase NetApp storage. PayPal uses hybrid NetApp storage systems and Data Fabric to consolidate its data center footprint, and the company plans to explore NetApp Keystone cloudlike consumption pricing for buying arrays, said Slade Weaver, a senior manager in charge of PayPal’s Core Data Platform.

“NetApp has turned the corner, after missing the boat for a long time,” Weaver said. He said NetApp does still need to develop stronger connections on how its legacy storage arrays transition to the cloud.

NetApp has turned the corner, after missing the boat for a long time.
Slade Weaver, senior manager of Core Data Platform, PayPal

Other customers said the decision to market NetApp storage systems as cloud arrays is emblematic of a broader trend among legacy vendors. Eric Sedore, associate CIO at Syracuse University, said the cloud is forcing storage and other on-premises infrastructure vendors to innovate features more rapidly as they shift focus. The university uses NetApp AFF and FAS arrays and is considering using Cloud Volumes Ontap in the public cloud or a co-location site for disaster recovery.

“The timeline for adding cutting-edge features has always been slow, but now it’s more a question of ‘Why isn’t IT leading [the way] on meeting our needs? How could we still have problems, if the cloud is supposed to fix everything?'” Sedore said.

“In the past, we would have bought the most reliable [product],” he added. “Now we’re mixing that with ‘What features should we embrace?’ It’s not only having reliability, but [whether] the storage does all the sophisticated things we need to keep up with demand.”

Aside from earnings, NetApp could be feeling the aftereffects of losing several longtime key executives to retirement, including co-founder Dave Hitz and former president and vice chairman Tom Mendoza.

NetApp also avoided a potentially embarrassing legal outcome in September when a U.S. District Court for Northern California dismissed a suit against the vendor brought by seven former NetApp executives. The suit’s plaintiffs included the two previous CEOs, Georgens and Dan Warmenhoven, and former CFO Steve Gomo. They sued NetApp over allegations that NetApp unfairly terminated their company-paid lifetime retirement health benefits.

Executive news director Dave Raffo contributed to this story.


Assessing the value of personal data for class action lawsuits

When it comes to personal data exposed in a breach, assessing the value of that data for class action lawsuits is more of an art than a science.

As interest in protecting and controlling personal data has surged among consumers lately, there have been several research reports that discuss how much a person’s data is worth on the dark web. Threat intelligence provider Flashpoint, for example, published research last month that said access to a U.S. bank account, or “bank log,” with a $10,000 balance was worth about $25. However, the price of a package of personally identifiable information (PII) or what’s known as a “fullz” is much less, according to Flashpoint; fullz for U.S. citizens that contain data such as victims’ names, Social Security numbers and birth dates range between $4 and $10.

But that’s the value of personal data to the black market. What’s the value of personal data when it comes to class action lawsuits that seek to compensate individuals who have had their data exposed or stolen? How is the value determined? If an organization has suffered a data breach, how would it figure out how much money they might be liable for?

SearchSecurity spoke with experts in legal, infosec and privacy communities to find out more about the obstacles and approaches for assessing personal data value.

The legal perspective

John Yanchunis leads the class action department of Morgan & Morgan, a law firm based in Orlando, Fla., that has handled the plaintiff end for a number of major class action data breach lawsuits, including Equifax, Yahoo and Capital One.

The 2017 Equifax breach exposed the personal information of over 147 million people, and resulted in the credit reporting company creating a $300 million settlement fund for victims (which doesn’t even account for the hundreds of millions of dollars paid to other affected parties). Yahoo, meanwhile, was hit with numerous data breaches between 2013 and 2016. In the 2013 breach, every single customer account was affected, totaling 3 billion users. Yahoo ultimately settled a class action lawsuit from customers for $117.5 million.

When it comes to determining the value of a password, W-2 form or credit card number, Yanchunis called it “an easy question but a very complex answer.”

“Is all real estate in this country priced the same?” Yanchunis asked. “The answer’s no. It’s based on location and market conditions.”

Yanchunis said dark web markets can provide some insight into the value of personal data, but there are challenges to that approach. “In large part, law enforcement now monitors all the traffic on the dark web,” he said. “Criminals know that, so what are they doing? They’re using different methods of marketing their product. Some sell it to other criminals who are going to use it, some put it on a shelf and wait until the dust settles so to speak, while others monetize it themselves.”

As a result, several methods are used to determine the value of breached personal data for plaintiffs. “You’ll see in litigation we’ve filed, there are experts who’ve monetized it through various ways in which they can evaluate the cost of passwords and other types of data,” Yanchunis said. “But again, to say what it’s worth today or a year ago, it really depends upon a number of those conditions that need to be evaluated in the moment.”

David Berger, partner at Gibbs Law Group LLP, was also involved in the Equifax class action lawsuit and has represented plaintiffs in other data breach cases. Berger said that it was possible to assess the value of personal data, and discussed a number of damage models that have been successfully asserted in litigation to establish value.

One way is to look at the value of a piece of information to the company that was breached, he said.

“In other words, how much a company can monetize basically every kind of PII or PHI, or what they are getting in different industries and what the different revenue streams are,” Berger said. “There’s been relatively more attention paid to that in data breach lawsuits. That can be one measure of damages.”

Another approach looks at the value of an individual’s personal information to that individual. Berger explained that this can be measured in multiple different ways. In litigation, economic modeling and “fairly sophisticated economic techniques” would be employed to figure out the market value of a piece of data.

Another approach to assessing personal data value is determining the cost of what individuals need to do to protect themselves from misuse of their data, such as credit monitoring services. Berger said the “benefit-of-the-bargain” rule can also help; the legal principle dictates that a party that breaches a contract must pay the victim an amount in damages that puts them in the same financial position they would be in if the contract had been fulfilled.

For example, Berger said, say a consumer purchases health insurance and is promised reasonable data security, but if the insurance carrier was breached then “[they] got health insurance that did not include reasonable data security. We can use those same economic modeling techniques to figure out what’s the delta between what they paid for and what they actually received.”
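The economic modeling plaintiffs’ experts actually use is far more sophisticated, but a back-of-the-envelope sketch of the “delta” Berger describes looks something like the snippet below. Every figure in it is invented for illustration; it is not drawn from any of the cases discussed here.

# Hypothetical back-of-the-envelope "benefit-of-the-bargain" damage model:
# class members paid for a service that promised reasonable data security but
# received the service without it. All figures are invented for illustration.
annual_premium = 4_800.00        # assumed amount each class member paid per year
security_share_of_value = 0.03   # assumed share of the premium attributable to data security
years_affected = 2               # assumed period the promised security was not delivered
class_size = 1_000_000           # assumed number of affected customers

per_member_delta = annual_premium * security_share_of_value * years_affected
total_exposure = per_member_delta * class_size

print(f"Per-member delta: ${per_member_delta:,.2f}")   # $288.00 under these assumptions
print(f"Aggregate exposure: ${total_exposure:,.0f}")   # $288,000,000 under these assumptions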

Berger also said the California Consumer Privacy Act (CCPA), which he called “the strongest privacy law in the country,” will also help because it requires companies to be transparent about how they value user data.

“The regulation puts a piece on that and says, ‘OK, here are eight different ways that the company can measure the value of that information.’ And so we will probably soon have a bunch of situations where we can see how companies are measuring the value of data,” Berger said.

The CCPA will go into effect in the state on Jan. 1 and will apply to organizations that do business in the state and either have annual gross revenues of more than $25 million; possess personal information of 50,000 or more consumers, households or devices; or generate more than half their annual revenue from selling personal information of consumers.
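Those three statutory thresholds are simple enough to restate directly. The sketch below is a non-authoritative paraphrase of the paragraph above in code; real applicability analysis involves statutory definitions and legal judgment well beyond this.

# Non-authoritative sketch of the three CCPA applicability thresholds described above.
def ccpa_likely_applies(
    does_business_in_california: bool,
    annual_gross_revenue: float,
    consumers_households_devices_with_pi: int,
    share_of_revenue_from_selling_pi: float,
) -> bool:
    """Rough screen only; consult counsel for an actual applicability determination."""
    if not does_business_in_california:
        return False
    return (
        annual_gross_revenue > 25_000_000
        or consumers_households_devices_with_pi >= 50_000
        or share_of_revenue_from_selling_pi > 0.5
    )

# Example: a company with $10M in revenue but personal information on 80,000
# consumers meets the second threshold.
print(ccpa_likely_applies(True, 10_000_000, 80_000, 0.1))  # True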

Security and privacy perspectives

Some security and privacy professionals are reluctant to place a dollar value on specific types of exposed or breached personal data. While some advocates have pushed the idea of valuing consumers’ personal data as commodities or goods to be purchased by enterprises, others, such as the Electronic Frontier Foundation (EFF), an international digital rights group founded 29 years ago to promote and protect internet civil liberties, are against it.

An EFF spokesperson shared the following comment, part of which was previously published in a July blog post titled, “Knowing the ‘Value’ of Our Data Won’t Fix Our Privacy Problems.”

“We have not discussed valuing data in the context of lawsuits, but our position on the concept of pay-for-privacy schemes is that our information should not be thought of as our property this way, to be bought and sold like a widget. Privacy is a fundamental human right. It has no price tag.”

Harlan Carvey, senior threat hunter at Digital Guardian, an endpoint security and threat intelligence vendor, agreed with Yanchunis that assessing the value of personal data depends on the circumstances of each incident.

“I don’t know that there’s any way to reach a consensus as to the value of someone’s personally identifiable data,” Carvey said via email. “There’s what the individual believes, what a security professional might believe (based on their experience), and what someone attempting to use it might believe.”

However, he said the value of traditionally low-value or high-value data might be different depending on the situation.

“Part of me says that on the one hand, certain classes of personal data should be treated like a misdemeanor, and others like a felony. Passwords can be changed, as can credit card numbers; SSNs cannot. Not easily,” Carvey said. “However, having been a boots-on-the-ground, crawling-through-the-trenches member of the incident response industry for a bit more than 20 years, I cringe when I hear or read about data that was thought to have been accessed during a breach. Even if the accounting is accurate, we never know what data someone already has in their possession. As such, what a breached company may believe is low-value data is, in reality, the last piece of the puzzle someone needed to completely steal my identity.”

Jeff Pollard, vice president and principal analyst at Forrester Research, said concerns about personal data privacy have expanded beyond consumers and security and privacy professionals to the very enterprises that use and monetize such data. There may be certain kinds of personal data that can be extremely valuable to an organization, but the fear of regulatory penalties and class action lawsuits are causing some enterprises to limit the data they collect in the first place.

“Companies may look at the data and say, ‘Sure, it’ll make our service better, but it’s not worth it’ and not collect it all,” Pollard said. “A lot of CISOs feel like they’ll be better off in the long run.”

Editor’s note: This is part one of a two-part series on class action data breach lawsuits. Stay tuned for part two.

Security news director Rob Wright contributed to this report.


What people are saying about the new book ‘Tools and Weapons’ | Microsoft On The Issues

“When your technology changes the world,” he writes, “you bear a responsibility to help address the world that you have helped create.” And governments, he writes, “need to move faster and start to catch up with the pace of technology.” 

In a lengthy interview, Mr. Smith talked about the lessons he had learned from Microsoft’s past battles and what he saw as the future of tech policymaking – arguing for closer cooperation between the tech sector and the government. It’s a theme echoed in the book, “Tools and Weapons: The Promise and the Peril of the Digital Age,” which he wrote with Carol Ann Browne, a member of Microsoft’s communications staff.

The New York Times, Sept. 8, 2019


In 2019, a book about tech’s present and future impact on humankind that was relentlessly upbeat would feel out of whack with reality. But Smith’s Microsoft experience allowed him to take a measured look at major issues and possible solutions, a task he says he relished.

“There are some people that are steeped in technology, but they may not be steeped in the world of politics or policy,” Smith told me in a recent conversation. “There are some people who are steeped in the world of politics and policy, but they may not be steeped in technology. And most people are not actually steeped in either. But these issues impact them. And increasingly they matter to them.”

Fast Company, Sept. 8, 2019


In ‘Tools & Weapons: The Promise and the Peril of the Digital Age,’ the longtime Microsoft executive and his co-author Carol Ann Browne tell the inside story of some of the biggest developments in tech and the world over the past decade – including Microsoft’s reaction to the Snowden revelations, its battle with Russian hackers in the lead-up to the 2016 elections and its role in the ongoing debate over privacy and facial recognition technology.

The book goes behind-the-scenes at the Obama and Trump White Houses; explores the implications of the coming wave of artificial intelligence; and calls on tech giants and governments to step up and prepare for the ethical, legal and societal challenges of powerful new forms of technology yet to come.

GeekWire, Sept. 7, 2019


Tensions between the U.S. and China feature prominently in Smith’s new book, ‘Tools and Weapons: The Promise and the Peril of the Digital Age.’ While Huawei is its own case, Smith worries that broader and tighter strictures could soon follow. The Commerce Department is considering new restrictions on the export of emerging technologies on which Microsoft has placed big bets, including artificial intelligence and quantum computing. “You can’t be a global technology leader if you can’t bring your technology to the globe,” he says.

Bloomberg Businessweek, Sept. 7, 2019


Tell us what you think about the book @MSFTIssues. You can buy the book here or at bookstores around the world.
