Tag Archives: systems

CIOs express hope, concern for proposed interoperability rule

While CIOs applaud the efforts by federal agencies to make healthcare systems more interoperable, they also have significant concerns about patient data security.

The Office of the National Coordinator for Health IT (ONC) and the Centers for Medicare & Medicaid Services (CMS) proposed rules earlier this year that would further define information blocking, or unreasonably preventing a patient's information from being shared, and outline requirements for healthcare organizations to share data, such as using FHIR-based APIs so patients can download their health data onto mobile healthcare apps.
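
FHIR exposes patient records as RESTful resources, so the kind of read a consumer-facing app would perform looks roughly like the sketch below. This is a generic illustration, not code from ONC, CMS or any specific vendor; the server URL, patient ID and access token are hypothetical placeholders.

```python
# Minimal sketch of a FHIR "Patient read" as a consumer app might perform it.
# The base URL, patient ID and bearer token are hypothetical placeholders.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/r4"   # hypothetical FHIR R4 endpoint
PATIENT_ID = "12345"                                  # hypothetical patient ID
ACCESS_TOKEN = "example-oauth-token"                  # a real app would obtain this via OAuth 2.0

response = requests.get(
    f"{FHIR_BASE}/Patient/{PATIENT_ID}",
    headers={
        "Accept": "application/fhir+json",            # standard FHIR JSON content type
        "Authorization": f"Bearer {ACCESS_TOKEN}",
    },
    timeout=10,
)
response.raise_for_status()

patient = response.json()
print(patient.get("name"), patient.get("birthDate"))
```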

The proposed rules are part of an ongoing interoperability effort mandated by the 21st Century Cures Act, a healthcare bill that provides funding to modernize the U.S. healthcare system. Final versions of the proposed information blocking and interoperability rules are on track to be released in November.

“We all now have to realize we’ve got to play in the sandbox fairly and maybe we can cut some of this medical cost through interoperability,” said Martha Sullivan, CIO at Harrison Memorial Hospital in Cynthiana, Ky.

CIOs’ take on proposed interoperability rule

To Sullivan, interoperability brings the focus back to the patient — a focus she thinks has been lost over the years.

She commended ONC’s efforts to make patient access to health information easier, yet she has concerns about data stored in mobile healthcare apps. Harrison’s system is API-capable, but Sullivan said the organization will not recommend APIs to patients for liability reasons.

Physicians and CIOs at EHR vendor Meditech's 2019 Physician and CIO Forum in Foxborough, Mass., shared their thoughts on the proposed interoperability rules from ONC and CMS. Helen Waters, Meditech executive vice president, spoke at the event.

“The security concerns me because patient data is really important, and the privacy of that data is critical,” she said.

Harrison may not be the only organization reluctant to promote APIs to patients. A study published in the Journal of the American Medical Association, which looked at 12 U.S. health systems that had used APIs for at least nine months, found "little effort by healthcare systems or health information technology vendors to market this new capability to patients" and noted that "there are not clear incentives for patients to adopt it."

Jim Green, CIO at Boone County Hospital in Iowa, said ONC’s efforts with the interoperability rule are well-intentioned but overlook a significant pain point: physician adoption. He said more efforts should be made to create “a product that’s usable for the pace of life that a physician has.”

The product also needs to keep pace with technology, something Green described as being a “constant battle.”

Interoperability is often temporary, he said. When a system gets upgraded or a new version of software is released, it can throw the system’s ability to share data with another system out of whack.

“To say at a point in time, ‘We’re interoperable with such-and-such a product,’ it’s a point in time,” he said.

Interoperability remains “critically important” for healthcare, said Jeannette Currie, CIO of Community Hospitals at Beth Israel Deaconess Medical Center in Boston. But so is patient data security. That’s one of her main concerns with ONC’s efforts and the interoperability rule, something physicians and industry experts also expressed during the comment period for the proposed rules.

“When I look at the fact that a patient can come in and say, ‘I need you to interact with my app,’ and when I look at the HIPAA requirements I’m still beholden to, there are some nuances there that make me really nervous as a CIO,” she said.

Navy sails SAP ERP systems to AWS GovCloud

The U.S. Navy has moved several SAP and other ERP systems from on premises to AWS GovCloud, a public cloud service designed to meet the regulatory and compliance requirements of U.S. government agencies.

The project entailed migrating 26 ERPs across 15 landscapes serving around 60,000 users across the globe. The Navy tapped SAP National Security Services Inc. (NS2) for the migration. NS2 was spun out of SAP specifically to sell SAP systems that adhere to the highly regulated conditions that U.S. government agencies operate under.

Approximately half of the systems that moved to AWS GovCloud were SAP ERP systems running on Oracle databases, according to Harish Luthra, president of NS2's secure cloud business. The SAP systems were migrated to the SAP HANA database as part of the move, while non-SAP systems remain on their respective databases.

Architecture simplification and reducing TCO

The Navy wanted to move the ERP systems to take advantage of the new technologies that are more suited for cloud deployments, as well as to simplify the underlying ERP architecture and to reduce the total cost of ownership (TCO), Luthra said.

The migration enabled the Navy to reduce its data footprint from 80 TB to 28 TB.

“Part of it was done through archiving, part was disk compression, so the cost of even the data itself is reducing quite a bit,” Luthra said. “On the AWS GovCloud side, we’re using one of the largest instances — 12 terabytes — and will be moving to a 24 terabyte instance working with AWS.”

The Navy also added applications to consolidate financial systems and improve data management and analytics functionality.

“We added one application called the Universe of Transactions, based on SAP Analytics that allows the Navy to provide a consolidated financial statement between Navy ERP and their other ERPs,” Luthra said. “This is all new and didn’t exist before on-premises and was only possible to add because we now have HANA, which enables a very fast processing of analytics. It’s a giant amount of transactions that we are able to crunch and produce a consolidated ledger.”

Accelerated timeline

The project was done at an accelerated pace that had to be sped up even more when the Navy altered its requirements, according to Joe Gioffre, SAP NS2 project principal consultant. The original go-live date was scheduled for May 2020, almost two years to the day after the project began. However, when the Navy tried to move a command working capital fund onto the on-premises ERP system, it discovered the system could not handle the additional data volume and workload.

This moved the HANA cloud migration go-live date up to August 2019 to meet the start of the new fiscal year on Oct. 1, 2019, so the fund could be included.

“We went into a re-planning effort, drew up a new milestone plan, set up Navy staffing and NS2 staffing to the new plan so that we could hit all of the dates one by one and get to August 2019,” Gioffre said. “That was a colossal effort in re-planning and re-resourcing for both us and the Navy, and then tracking it to make sure we stayed on target with each date in that plan.”

Governance keeps project on track

Tight governance over the project was the key to completing it in the accelerated timeframe.

“We had a very detailed project plan with a lot of moving parts and we tracked everything in that project plan. If something started to fall behind, we identified it early and created a mitigation for it,” Gioffre explained. “If you have a plan that tracks to this level of detail and you fall behind, unless you have the right level of governance, you can’t execute mitigation quickly enough.”

The consolidation of the various ERPs onto one SAP HANA system was a main goal of the initiative, and it now sets up the Navy to take advantage of next-generation technology.

“The next step is planning a move to SAP S/4HANA and gaining process improvements as we go to that system,” he said.

Proving confidence in the public cloud

It's not a particular revelation that hyperscale public cloud services like AWS GovCloud can handle huge government workloads, but it is notable that the Department of Defense is confident in going to the cloud, according to analyst Joshua Greenbaum, principal at Enterprise Applications Consulting, a firm based in Berkeley, Calif.

“The glitches that happened with Amazon recently and [the breach of customer data from Capital One] highlight the fact that we have a long way to go across the board in perfecting the cloud model,” Greenbaum said. “But I think that SAP and its competitors have really proven that stuff does work on AWS, Azure and, to a lesser extent, Google Cloud Platform. They have really settled in as legitimate strategic platforms and are now just getting the bugs out of the system.”

Greenbaum is skeptical that the project was “easy,” but it would be quite an accomplishment if it was done relatively painlessly.

“Every time you tell me it was easy and simple and painless, I think that you’re not telling me the whole story because it’s always going to be hard,” he said. “And these are government systems, so they’re not trivial and simple stuff. But this may show us that if the will is there and the technology is there, you can do it. It’s not as hard as landing on the moon, but you’re still entering orbital space when you are going to these cloud implementations, so it’s always going to be hard.”

Stibo Systems advances multidomain MDM system

Stibo Systems is helping to advance the market for master data management (MDM) with its latest release. The Stibo Systems 9.2 release of its multidomain MDM platform provides users with new features to manage, organize and make sense of data.

Stibo Systems got its start more than four decades ago and is a division of Denmark-based Stibo A/S, an IT and print technology multinational that was founded in 1794 as a printing company. As part of the 9.2 update, the multidomain MDM system gains enhanced machine learning capabilities to help manage data across multiple data domains.

The update, which became generally available Sept. 4, also includes a bundled integration with the Sisense BI-analytics platform for executing data analytics on the multidomain MDM.

Though MDM is not a term heard as often in recent years as big data, MDM is as relevant now as it ever was despite the significant changes of the big data era, said Gene Leganza, vice president and research director at Forrester.

“For one thing, the years of stories of firms doing innovative things with data and analytics have gotten business leaders’ attention and anyone who was unaware of the value hidden in their data assets has gotten the message that they cannot afford to leave that value unmined,” Leganza said. “For these data management laggards — and there are a lot of them — newfound enthusiasm to improve their data capabilities usually means getting started with data governance and MDM.”

Simply collecting data, though, isn't enough. Leganza said all data analysis is a "garbage-in-garbage-out" proposition, and the reliability and trustworthiness of data have never been more important as organizations work harder to evolve into data- and insights-driven cultures. Keeping data clean and usable is where multidomain MDM plays a key role.

Looking at Stibo Systems, Leganza said that in the last few years, the vendor has significantly bolstered its general MDM capabilities and Forrester included them in the Q1 2019 Forrester Wave evaluation of MDM systems, in which Stibo was ranked a “contender.” He noted that the evaluation did not include the features in the new 9.2 release, and adding machine learning to improve data quality and governance is something Forrester had noted customers were asking for.

“This new release strengthens both their product domain dominance as well as their general MDM capabilities, which should serve them well in the marketplace,” Leganza said.

Multidomain MDM

The MDM system is a purpose-built platform for mastering data across the various domains that feed it, whether that is product data, customer information, supplier details or vendor locations, said Doug Kimball, vice president of global solution strategy at Stibo Systems.

Kimball said that with a multidomain MDM, Stibo Systems customers can connect data across different pieces of their domain. For example, a company could map customers to products and know where those products are by location, he said.

A sizeable amount of what goes into multidomain MDM is data governance, enabling data traceability, as well as compliance with regulations. The Stibo Systems platform brings data in from wherever a company has it, be it a database, data lake, ERP system or otherwise, Kimball said.

“We do the de-duplication, the matching of records, the address verification and all the things that make the data good and usable,” he said.
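
None of Stibo Systems' internal logic is shown here; the sketch below is only a generic illustration of the kind of fuzzy record matching that underpins de-duplication, using the Python standard library and made-up customer records.

```python
# Minimal illustration of fuzzy duplicate detection on master data records.
# This is NOT Stibo Systems' algorithm, just a standard-library sketch on invented data.
from difflib import SequenceMatcher

records = [
    {"id": 1, "name": "Acme Manufacturing Inc.", "city": "Dayton"},
    {"id": 2, "name": "ACME Manufacturing",      "city": "Dayton"},
    {"id": 3, "name": "Globex Corporation",      "city": "Springfield"},
]

def normalize(value: str) -> str:
    """Lowercase and strip punctuation-like noise before comparing."""
    return "".join(ch for ch in value.lower() if ch.isalnum() or ch.isspace()).strip()

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# Flag pairs whose names are highly similar within the same city.
THRESHOLD = 0.85
for i, left in enumerate(records):
    for right in records[i + 1:]:
        if left["city"] == right["city"] and similarity(left["name"], right["name"]) >= THRESHOLD:
            print(f"Possible duplicate: record {left['id']} and record {right['id']}")
```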

9.2 enhancements 

Among the changes in the 9.2 release, Kimball noted, is support for Cassandra as a database option, providing an alternative to running solely on Oracle.

For the product master data management component, Stibo Systems now has a partnership with Sisense to deliver embedded analytics. It’s now possible to create data visualizations and actionable insights that are effectively embedded in the user experience, Kimball said.

Also in the new release is an application called Smartsheet that can help to bridge the gap between multidomain MDM and a simple Excel spreadsheet.

Kimball said Stibo Systems is working on a new user experience interface that is intended to make it easier for users to navigate the multidomain MDM. The vendor is also working on MDM on the edge.

“We’re looking at the fact that you’ve got all these devices out there: smart watches, refrigerators, beacons, creating all this additional data that needs to be mastered,” Kimball said. “The data is on the edge, instead of being in traditional data stores.”

NVMe arrays power Plex MRP cloud

Cloud ERP provider Plex Systems requires a storage setup that can host hundreds of petabytes, while meeting high thresholds for performance and availability. The software-as-a-service provider is in its final year of a storage transition in which it added NVMe arrays for performance and two additional data centers for high availability.

Plex has been running a cloud for 19 years, since its 2001 inception. It started as a multi-tenant application run through a browser for customers.

“We’ve always been a cloud to manufacturers,” said Todd Weeks, group vice president of cloud operations and chief security officer for Plex. “We’ve been 100% cloud-based to our customers.”

“It looks like a public cloud to our customers, but we see it as a private cloud,” he continued. “It’s not running in Azure, AWS or Google. It’s our own managed cloud.”

The Plex private cloud runs mainly on Microsoft software, including SQL Server, and Dell EMC storage, including PowerMax all-NVMe arrays.

Scaling out with two new data centers

Weeks said Plex's capacity for customer data grows 15% to 25% per year. He said the company has more than 200 PB of data for about 700 customers and 2,300 manufacturing sites worldwide, and it processes more than 7 billion transactions a day with 99.998% availability.
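
As a rough back-of-the-envelope illustration (the calculation is ours, not Plex's), 99.998% availability leaves only about ten minutes of downtime per year, and 7 billion daily transactions works out to roughly 80,000 per second on average:

```python
# Back-of-the-envelope check on the availability and transaction figures quoted above.
availability = 0.99998                     # 99.998%
minutes_per_year = 365.25 * 24 * 60        # ~525,960 minutes

downtime_minutes = (1 - availability) * minutes_per_year
print(f"Allowed downtime: ~{downtime_minutes:.1f} minutes per year")   # ~10.5 minutes

transactions_per_day = 7_000_000_000       # "more than 7 billion transactions a day"
print(f"~{transactions_per_day / (24 * 3600):,.0f} transactions per second on average")
```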

“With the growth of our company, we wanted a much better scale-out model, which we have with our two additional data centers,” he said. “Then, we said, ‘Besides just scaling out, is there more we can get out of them around availability, reliability and performance?'”

The company, based in Troy, Mich., has storage spread among data centers in Auburn Hills, Mich.; Grand Rapids, Mich.; Denver; and Dallas. The data centers are split into redundant pairs for failover, with primary storage and backup running at all four.

Weeks said Plex has used a variety of storage arrays, including ones from Dell EMC, Hitachi Vantara and NetApp. Plex is in the final year of a three-year process of migrating all its storage to Dell EMC PowerMax 8000 NVMe arrays and VxBlock converged infrastructure that includes VMAX and XtremIO all-flash arrays.

Two data centers have Dell EMC PowerMax, and the other two use Dell EMC VxBlock storage as mirrored pairs. Backup consists of Dell EMC Avamar software and Data Domain disk appliances.

“If we lose one, we fail over to the other,” Weeks said of the redundant data centers.

The performance advantage

Weeks said switching to the new storage infrastructure provided a “dramatic increase in performance,” both for primary and backup data. Backup restores have gone from hours to less than 30 minutes, and read latency has improved by at least a factor of three, he said. Data reduction has also increased significantly, which is crucial with hundreds of petabytes of data under management.

“The big win we noticed was with PowerMax. We were expecting a 3-to-1 compression advantage from Hitachi storage, and we’ve actually seen a 9-to-1 difference,” he said. “That allows us to scale out more efficiently. We’ve bought ourselves a couple of years of extra growth capacity. We always want to stay ahead of our customers’ needs, and our customers are database-heavy. We’re also making sure we’re a couple of years ahead of where we need to be performance-wise.”
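
To illustrate why the higher reduction ratio translates into growth headroom, the sketch below compares effective capacity at 3:1 and 9:1 on the same raw capacity. The raw-capacity figure is a made-up example, not a Plex number.

```python
# Illustration of how a higher data-reduction ratio stretches the same raw capacity.
# The raw capacity below is a hypothetical example, not a Plex figure.
raw_capacity_tb = 1_000          # hypothetical raw flash capacity

for ratio in (3, 9):             # 3:1 expected vs. 9:1 observed
    effective_tb = raw_capacity_tb * ratio
    print(f"{ratio}:1 reduction -> ~{effective_tb:,} TB of effective capacity")

# A 9:1 ratio holds three times as much logical data as 3:1 on identical hardware,
# which is the extra growth headroom Weeks describes.
```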

Early all-flash arrays

Plex’s introduction to EMC storage came through XtremIO all-flash arrays. While performance was the main benefit of those early all-flash systems, Weeks said, the XtremIO REST API impressed his team.

“Being able to call into [the API] made it much more configurable,” he said. “Our storage engineers said, ‘This makes my job easier.’ It’s much easier than having to script and do everything yourself. It makes it much easier to implement and deploy.”

Weeks said Plex is reluctant to move data into public clouds because of the fees incurred for data transfers. But it does store machine information gathered from the Plex industrial IoT (IIoT) SaaS product on Microsoft Azure.

“We gather plant floor machine information and tie it into our ERP,” he said. “But we don’t use public clouds for archiving or storage.”

Plex’s IT roadmap includes moving to containerized applications, mainly to support the Plex IIoT service.

“We’re looking now at how we can repackage our application,” he said. “We’re just beginning to go in the direction of microservices and containers.”

BlueKeep blues: More than 800,000 systems still unpatched

More than 800,000 Windows systems worldwide remain vulnerable to BlueKeep, according to new research.

Risk management vendor BitSight Technologies published a report showing that 805,665 systems online remained vulnerable to BlueKeep as of July 2. That figure represents a decrease of about 17% from BitSight's previous findings from May 31.

BlueKeep, a name coined by U.K.-based security researcher Kevin Beaumont, is a critical vulnerability (CVE-2019-0708) that affects the Remote Desktop Protocol (RDP) in older Windows OSes such as Windows 7, Windows XP and Windows Server 2008. The vulnerability could allow unauthenticated attackers to perform remote code execution on vulnerable systems.

Microsoft first disclosed and patched BlueKeep on May 14. In the days and weeks that followed, alerts from Microsoft, the National Security Agency and the Department of Homeland Security warned Windows users that the flaw was “wormable” and urged them to patch immediately. While no BlueKeep attacks have been detected in the wild, several cybersecurity vendors and researchers have demonstrated, but not released, proof-of-concept exploits for the vulnerability.

Two weeks after Microsoft patched BlueKeep, Robert Graham, owner of Errata Security in Portland, Ore., reported that he found “roughly 950,000” vulnerable systems on the public internet using a customized scanning tool. BitSight used Graham’s tool in its own scanning platform and found 972,829 vulnerable Windows systems as of May 31.

The company's latest research showed that, since its initial scans, 167,000 fewer vulnerable systems were found online. Of that total, BitSight's report said around 92,000 have “since been observed to be patched”; the remaining systems may have turned off RDP or may be changing their IP addresses frequently.
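
The published counts are internally consistent; a quick arithmetic check reproduces both the roughly 167,000-system drop and the 17% figure cited above:

```python
# Quick consistency check on BitSight's published counts.
may_31_count = 972_829    # vulnerable systems observed May 31
july_2_count = 805_665    # vulnerable systems observed July 2

drop = may_31_count - july_2_count
percent_drop = drop / may_31_count * 100

print(f"{drop:,} fewer exposed systems ({percent_drop:.1f}% decrease)")
# -> 167,164 fewer exposed systems (17.2% decrease), matching the ~167,000 and ~17% figures
```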

Dan Dahlberg, head of security research at BitSight and author of the report, said the progress is a positive sign but that more work is obviously needed to address the remaining vulnerable systems. “It’s good that we observed some amount of progress rather than having the number remain relatively consistent over that time period,” he said.

The challenge, Dahlberg said, is that organizations that typically use the older Windows OSes “are less likely to be patching this on a much more urgent basis because they probably don’t have the sophistication and technology in terms of patch management or software controls.”

BitSight performed periodic internet scans for BlueKeep-vulnerable systems, but Dahlberg said it’s difficult to associate the activity with discrete points in time regarding the alerts and warnings. “That doesn’t necessarily mean those announcements didn’t have any influence,” he said. “I think they had a significant amount of influence in terms of motivating at least some companies [to patch].”

BlueKeep patching trends

According to the BitSight report, several countries “demonstrated notable reductions” in the number of systems exposed to BlueKeep. For example, China reduced the number of vulnerable systems by 109,670 (a nearly 24% decrease from BitSight’s previous report), while the U.S. saw its number of vulnerable systems drop by 26,787 or approximately 20.3%.

BitSight also broke down patching trends by industry vertical. According to the report, the industries that saw the biggest reductions in vulnerable systems since May 31 were legal (32.9%), nonprofit/NGO (27.1%) and aerospace/defense (24.1%). The industries that saw the smallest drops in vulnerable systems were consumer goods (5.3%), utilities (9.5%), and technology (9.5%).

In addition, BitSight measured the overall exposure of each industry to BlueKeep going forward. Legal, insurance and finance were the least exposed to the vulnerability, while telecommunications and education were the most exposed, followed by technology, utilities and government/politics.

GPU-buffed servers advance Cisco’s AI agenda

Cisco Systems is the latest hardware vendor to offer gear tuned for AI and machine learning-based workloads.

Competition to support AI and machine learning workloads continues to heat up. Earlier this year, archrivals Dell Technologies Inc., Hewlett Packard Enterprise and IBM rolled out servers designed to optimize the performance of AI and machine learning workloads. Many smaller vendors are chasing this market as well.

“This is going to be a highly competitive field going forward with everyone having their own solution,” said Jean Bozman, vice president and principal analyst at Hurwitz & Associates. “IT organizations will have to figure out, with the help of third-party organizations, how to best take advantage of these new technologies.”

Cisco AI plan taps Nvidia GPUs

The Cisco UCS C480 ML M5 rack server, the company's first tuned to run AI workloads, contains Nvidia Tesla V100 Tensor Core GPUs connected by NVLink to boost performance, and it is designed to train neural networks on large data sets so computers can carry out complex tasks, according to the company. The server works with Cisco Intersight, introduced last year, which allows IT professionals to automate policies and operations across their infrastructure from the cloud.

This Cisco AI server will ship sometime during this year’s fourth quarter. Cisco Services will offer technical support for a range of AI and machine learning capabilities.

Cisco intends to target several different industries with the new system. Financial services companies can use it for fraud detection and algorithmic trading, while healthcare companies can enlist it to deliver insights and diagnostics, improve medical image classification and speed drug discovery and research.

Server hardware makers place bets on AI

The market for AI and machine learning, particularly the former, represents a rich opportunity for systems vendors over the next year or two. Only 4% of CIOs said they have implemented AI projects, according to a Gartner study earlier this year. However, some 46% have blueprints in place to implement such projects, and many of them have kicked off pilot programs.

AI and machine learning offer IT shops more efficient ways to address complex issues, but they will significantly affect the underlying infrastructure and processes. Larger IT shops must invest heavily in training and educating existing employees in how to use the technologies, the Gartner report stated. They also must upgrade existing infrastructure before they deploy production-ready AI and machine learning workloads, and they will need to retool that infrastructure to handle data more efficiently.

“All vendors will have the same story about data being your most valuable asset and how they can handle it efficiently,” Bozman said. “But to get at [the data] you first have to break down the data silos, label the data to get at it efficiently, and add data protection.”

Only after this prep work can IT shops take full advantage of AI-powered hardware-software tools.

“No matter how easy some of these vendors say it is to implement their integrated solutions, IT [shops] have more than a little homework to do to make it all work,” one industry analyst said. “Then you are ready to get the best results from any AI-based data analytics.”

Mist automates WLAN monitoring with new AI features

Mist Systems announced this week that its Marvis virtual network assistant now understands how to respond to hundreds of inquiries related to wireless LAN performance. And, in some cases, it can detect anomalies in those networks before they cause problems for end users.

IT administrators can ask Marvis questions about the performance of wireless networks, and the devices connected to them, using natural language commands, such as, “What's wrong with John's laptop?” The vendor said the technology helps customers identify client-level problems, rather than just network-wide trends.

Marvis could only handle roughly a dozen basic questions at launch in February. But Mist’s machine learning platform has used data from customers that have started using the product to improve Marvis’ natural language processing (NLP) skills for WLAN monitoring. Marvis can now field hundreds of queries, with less specificity required in asking each question.

Mist also announced an anomaly detection feature for Marvis that uses deep learning to determine when a wireless network is starting to behave abnormally, potentially flagging issues before they happen. Using the product’s APIs, IT departments can integrate Marvis with their help desk software to set up automatic alerts.

Mist has a robust platform for network management, and the advancements announced this week represent “solid steps forward for the company and the industry,” said Brandon Butler, analyst at IDC.

Cisco and Aruba Networks, a subsidiary of Hewlett Packard Enterprise, have also been investing in new technologies for automated WLAN monitoring and management, Butler said.

“Mist has taken a unique approach in the market with its focus on NLP capabilities to provide users with an intuitive way of interfacing with the management platform,” Butler said. “It is one of many companies … that are building up their anomaly detection and auto-remediation capabilities using machine learning capabilities.”

Applying AI to radio resource management

The original promise of radio resource management (RRM), which has been around for 15 years, was that the service would detect noise and interference in wireless networks and adjust access points and channels accordingly, said Jeff Aaron, vice president of marketing at Mist, based in Cupertino, Calif.

“The problem is it’s never really worked that way,” Aaron said. “RRM has never been real-time; it’s usually done at night, because it doesn’t really have the level of data you need to make the decision.”

Now, Mist has revamped its RRM service using AI, so it can monitor the coverage, capacity, throughput and performance of Wi-Fi networks on a per-user basis. The service makes automatic changes and quantifies what impact — positive or negative — those changes have on end users.

Mist has RRM in its flagship product for WLAN monitoring and management, Wi-Fi Assurance.

Service-level expectations for WAN performance

Mist will now let customers establish and enforce service-level expectations (SLEs) for WAN performance. The expectations will help Mist customers track the impact of latency, jitter and packet loss on end users.
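
Mist has not published the thresholds behind its SLEs, so the sketch below only illustrates the general idea of checking measured WAN metrics against service-level expectations; every number in it is hypothetical.

```python
# Illustrative-only evaluation of WAN metrics against service-level expectations.
# Both the thresholds and the measurements are hypothetical, not Mist defaults.
sle_thresholds = {"latency_ms": 150, "jitter_ms": 30, "packet_loss_pct": 1.0}
measured =       {"latency_ms": 180, "jitter_ms": 12, "packet_loss_pct": 0.4}

violations = {
    metric: (value, sle_thresholds[metric])
    for metric, value in measured.items()
    if value > sle_thresholds[metric]
}

if violations:
    for metric, (value, limit) in violations.items():
        print(f"SLE miss on {metric}: measured {value}, expectation {limit}")
else:
    print("All WAN metrics within service-level expectations")
```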

The release of SLEs for the WAN comes as Mist pursues partnerships with Juniper and VMware to reduce friction between the performance and user experience of the WLAN and the WAN.

Mist also lets customers set service levels for Wi-Fi performance based on metrics that include capacity, coverage, throughput, latency, access point uptime and roaming.

Dell EMC PowerVault ME4 launched for entry-level SAN

Dell EMC this week added a new line of entry-level storage systems, extending its PowerVault line to handle SAN and direct-attached storage.

The Dell EMC PowerVault ME4 line consists of three flash-based models: the 2U ME4012 and ME4024 systems and the dense 5U ME4084.

The PowerVault block arrays can serve as direct-attached storage behind Dell EMC PowerEdge servers, or they can extend SAN storage to enterprise remote branch offices. The latest PowerVault scales to 336 SAS drives and 4 PB of raw storage with ME expansion shelves.
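
As a sanity check on that scaling figure, 336 drives at an assumed 12 TB per nearline SAS drive (the per-drive size is our assumption, not a spec from the announcement) works out to roughly 4 PB of raw capacity:

```python
# Sanity check on the maximum raw capacity, assuming 12 TB nearline SAS drives.
# The per-drive capacity is an assumption for illustration, not a quoted spec.
max_drives = 336
drive_tb = 12                      # assumed nearline SAS drive size circa 2018

raw_tb = max_drives * drive_tb
print(f"{max_drives} drives x {drive_tb} TB = {raw_tb:,} TB (~{raw_tb / 1000:.1f} PB raw)")
# -> 4,032 TB, i.e. roughly 4 PB of raw capacity
```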

The new PowerVault block systems can provide unified file storage when paired with Dell EMC PowerVault NX Series Windows-based NAS devices.

PowerVault ME4 models start at $13,000, and Dell EMC’s auto-tiering, disaster recovery, RAID support, replication, snapshots, thin provisioning and volume copy software are standard features. Dell EMC claims an HTML5 graphical user interface enables setup within 15 minutes.

PowerVault for large and small customers

Dell’s $60 billion-plus acquisition of EMC in 2016 created wide industry speculation that the combined Dell EMC would need to winnow its overlapping midrange storage portfolio.

Last week, Dell's vice chairman of products and operations, Jeff Clarke, said the midrange Unity and SC Series platforms would converge in 2019. But the vendor will still have a variety of storage array platforms. Dell EMC PowerMax — formerly VMAX — is the vendor's flagship all-flash SAN. Dell EMC also sells XtremIO all-flash and Isilon clustered NAS systems.

EMC was the external storage market share leader before the Dell acquisition. Post-merger Dell generated more than double the revenue of any other external storage vendor in the second quarter of 2018, according to IDC’s Worldwide Quarterly Enterprise Storage Systems Tracker numbers released last week.

IDC credited Dell with $1.9 billion in storage revenue in the quarter — more than double the $830 million for No. 2 NetApp. Dell had 29.2% of the market and grew 18.4% year over year for the quarter, compared with the overall industry growth of 14.4%, according to IDC.

Dell EMC's extended PowerVault family includes the dense 5U ME4084.

Dell initially launched PowerVault for archiving and backup, but repositioned it as “cheap and deep” block storage behind the Compellent-based SC Series SANs.

Sean Kinney, a senior director of product marketing for Dell EMC midrange storage, said PowerVault ME doubles back-end performance with 12 Gbps SAS and is capable of handling 320,000 IOPS.

“We’ve talked over the past few months about how we’re going to simplify our [midrange] portfolio and align it under a couple of key platforms. We have the PowerMax at the high end. This is the next phase in that journey,” Kinney said.

The new PowerVault arrays take self-encrypting nearline SAS disks or 3.5-inch SAS-connected SSDs, and they can be combined behind a single ME4 RAID controller. The configuration gives customers the option to configure PowerVault as all-flash or hybrid storage. The base ME4012 and ME4024 2U units come with dual controllers, with 8 GB per controller, and four ports for 10 Gbps iSCSI, 12 Gbps SAS and 16 Gbps Fibre Channel connectivity.

Customers can add a 5U ME484 expansion enclosure behind any ME4 base unit to scale Dell EMC PowerVault to 336 nearline disks or SSDs. Dell EMC claimed it has sold more than 400,000 PowerVault units across the product's generations.

Enterprises use PowerVault arrays “by the hundreds” at remote branch sites, while smaller organizations make up a big share of the installed base, said Bob Fine, a director of marketing for Dell EMC midrange storage.

“If you only have one or two IT generalists, PowerVault could be your entire data center,” Fine said.

How bias in AI happens — and what IT pros can do about it

Artificial intelligence systems are getting better and smarter, but are they ready to make impartial predictions, recommendations or decisions for us? Not quite, Gartner research vice president Darin Stewart said at the 2018 Gartner Catalyst event in San Diego.

Just like in our society, bias in AI is ubiquitous, Stewart said. These AI biases tend to arise from the priorities that the developer and the designer set when developing the algorithm and training the model.

Direct bias in AI arises when the model makes predictions, recommendations and decisions based on sensitive or prohibited attributes — aspects like race, gender, sexual orientation and religion. Fortunately, with the right tools and processes in place, direct bias can be “pretty easy to detect and prevent,” Stewart said.

According to Stewart, preventing bias requires situational testing on the inputs, turning off each of the sensitive attributes as you’re training the model and then measuring the impact on the output. The problem is that one of machine learning’s fundamental characteristics is to compensate for missing data. Therefore, nonsensitive attributes that are strongly correlated with the sensitive attributes are going to be weighted more strongly to compensate. This introduces — or at least reinforces — indirect bias in AI systems.
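
A minimal sketch of that situational test is below: it trains a model with and without a sensitive attribute and measures how much the predictions shift. The data, attribute and model choice are synthetic and illustrative, not taken from Stewart's talk.

```python
# Minimal sketch of situational testing: train with and without a sensitive attribute
# and compare predictions. Data and column choices are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Synthetic features: one "sensitive" attribute plus two ordinary ones.
sensitive = rng.integers(0, 2, size=n)
other = rng.normal(size=(n, 2))
X_full = np.column_stack([sensitive, other])
y = (other[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=n) > 0).astype(int)

model_with = LogisticRegression().fit(X_full, y)
model_without = LogisticRegression().fit(other, y)   # sensitive attribute "turned off"

# Measure how much predictions shift when the sensitive attribute is removed.
flipped = np.mean(model_with.predict(X_full) != model_without.predict(other))
print(f"Share of predictions that change without the sensitive attribute: {flipped:.1%}")
```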

AI bias in criminal sentencing

A distressing real-life example of this indirect bias reinforcement comes from criminal justice, where an AI sentencing tool called Compas is currently used in several U.S. states, Stewart said. The system takes a defendant's profile and generates a risk score estimating how likely the defendant is to reoffend and pose a risk to the community. Judges then take these risk scores into account when sentencing.

A study looked at several thousand verdicts associated with the AI system and found that African-American defendants were 77% more likely than white defendants to be incorrectly classified as high risk. Conversely, white defendants were 40% more likely to be misclassified as low risk, only to go on to reoffend.

Even though race is not part of the underlying data set, Compas' predictions are highly correlated with it because more weight is given to related nonsensitive attributes like geography and education level.

“You’re kind of in a Catch 22,” Stewart said. “If you omit all of the sensitive attributes, yes, you’re eliminating direct bias, but you’re reintroducing and reinforcing indirect bias. And if you have separate classifiers for each of the sensitive attributes, then you’re reintroducing direct bias.”

One of the best ways IT pros can combat this, Stewart said, is to determine at the outset what the threshold of acceptable differentiation should be and then measure each attribute against it. If an attribute exceeds the threshold, it is excluded from the model; if it falls under the limit, it is included.
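
A minimal sketch of that screening step is below: each candidate attribute is measured against a pre-set cap on its association with a sensitive attribute and dropped if it exceeds the cap. The threshold, feature names and data are invented for illustration.

```python
# Minimal sketch of screening features against a pre-set "acceptable differentiation"
# threshold. The threshold, features and data are illustrative, not a production recipe.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
sensitive = rng.integers(0, 2, size=n)          # e.g. a protected attribute

candidate_features = {
    "geography_score": sensitive + rng.normal(scale=0.3, size=n),   # strongly correlated
    "education_years": rng.normal(loc=14, scale=2, size=n),         # largely independent
}

THRESHOLD = 0.5   # illustrative cap on |correlation| with the sensitive attribute

selected = []
for name, values in candidate_features.items():
    corr = abs(np.corrcoef(sensitive, values)[0, 1])
    status = "excluded" if corr > THRESHOLD else "included"
    print(f"{name}: |corr| = {corr:.2f} -> {status}")
    if corr <= THRESHOLD:
        selected.append(name)

print("Features kept for training:", selected)
```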

“You should use those thresholds, those measures of fairness, as constraints on the training process itself,” Stewart said.

If you are creating an AI system that is going to “materially impact someone’s life,” you also need to have a human in the loop who understands why decisions are being made, he added.

Context is key

Stewart also warned IT practitioners to be wary when training an AI system on historical records. AI systems are optimized to match previous decisions, and previous biases along with them. He pointed to the racist practice of “redlining” in Portland, Ore., which was legal in the city from 1856 until 1990 and prevented people of color from purchasing homes in certain neighborhoods for decades. AI systems used in real estate could potentially reinstate this practice, Stewart said.

“Even though the laws change and those bias practices are no longer allowed, there’s 144 years of precedent data and a lot of financial activity-based management solutions are trained on those historical records,” Stewart said.

To avoid perpetuating that type of bias in AI, Stewart said it’s critical that IT pros pay close attention to the context surrounding their training data.

“This goes beyond basic data hygiene,” Stewart said. “You’re not just looking for corrupted and duplicate values, you’re looking for patterns. You’re looking for context.”

If IT pros are using unstructured data, text analytics is their best friend, Stewart said. It can help them uncover patterns they wouldn’t find otherwise. Ideally, IT pros will also have a master list of “don’t-go-there” items they check against when searching for bias.

“Develop a list of suspect results so that if something unusual started popping out of the model, it would be a red flag that needs further investigation,” Stewart said.

Intentionally inserting bias in AI

Is there ever a case where IT pros would want to inject bias into an AI system? With all the talk about the dangers of perpetuating AI bias, it may seem odd to even consider the possibility. But if one is injecting that bias to correct a past inequity, Stewart’s advice was to go for it.

“That is perfectly acceptable if it is a legitimate and ethical target,” he said. “There are legitimate cases where a big disparity between two groups is the correct outcome, but if you see something that isn’t right or that isn’t reflected in the natural process, you can inject bias into the algorithm and optimize it to maximize [a certain] outcome.”

Inserting bias in AI systems could, for instance, be used to correct gender disparities in certain industries, he said. The only proviso he would put on the practice of purposefully inserting bias into an AI algorithm is to document it and be transparent about what you’re doing.

“That way, people know what’s going on inside the algorithm and if suddenly things shift to the other extreme, you know how to dial it back,” Stewart said.