
Colorado builds API integration platform to share data, speed services

Integrating data across corporate departments can be challenging. There invariably are technical and cultural hurdles that must be cleared.

The state of Colorado’s ambitious plan to integrate data across dozens of agencies by using APIs had to contend with another issue: It was being launched under intense media scrutiny.

“We typically think of IT as a back-office function,” said Jon Gottsegen, chief data officer for the state of Colorado. Not so in Colorado. A deeply troubled benefits eligibility system — more than a decade in development and blamed for making improper payments — had put IT in the limelight, and not in a good way, he said. The newspaper term “above the fold” became part of his vocabulary, Gottsegen grimly joked.

Work on the state’s new API integration platform began in 2017 with a major transformation of the infamous benefits system. Partnering with Deloitte, IT rewrote the system’s code, migrated services to Salesforce and AWS, and used APIs to drive integration into various databases, Gottsegen said. This helped reduce the amount of time to determine eligibility for benefits from days to minutes — a major boon for state employees.

Today, the API integration platform — while still a work in progress — has dramatically sped up a number of state processes and is paving the way for better data sharing down the road, Gottsegen said.

Speaking at the recent MuleSoft Connect conference in San Francisco, Gottsegen shared the objectives of Colorado’s API integration strategy, the major challenges his team has encountered and the lessons learned.

People, of course, should come first in projects of this scope: Delivering better services to the people of Colorado was aim No. 1 of his team’s API integration platform, Gottsegen said. Security, data governance and corporate culture also demand attention.

Becoming the ‘Amazon of state services’

The task before Gottsegen and his group was to create a process for rolling out APIs that work seamlessly across dozens of different agencies. “Ideally, we want to be the Amazon of state services,” he said of IT’s grand mission.

Developers had to learn how to connect systems to databases that were regulated in different ways. Gottsegen’s team spent a lot of time putting together a comprehensive platform, which was important for integration, he said. It was also important to deliver the APIs in a way that they could be easily consumed by the various state agencies. One goal was to ensure that new APIs were reusable.

Part of the work also involved looking at how services relate to each other. For example, if someone is getting Medicaid, there is a good chance they are also eligible for housing services. The API platform had to support the data integration that helps automate these kinds of cross-agency processes, he said.
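
Colorado has not published its interfaces, but a minimal sketch can illustrate the kind of cross-agency call such a platform is meant to automate. The endpoints, field names and Python requests usage below are hypothetical, not the state's actual APIs.

```python
import requests

# Hypothetical gateway endpoints; the article does not disclose Colorado's real APIs.
MEDICAID_API = "https://api.example.state.co.us/medicaid/eligibility"
HOUSING_API = "https://api.example.state.co.us/housing/referrals"

def refer_if_eligible(person_id: str, token: str) -> dict:
    """If a person is enrolled in Medicaid, open a housing-services referral."""
    headers = {"Authorization": f"Bearer {token}"}

    # Ask the Medicaid agency's API for the person's eligibility status.
    resp = requests.get(f"{MEDICAID_API}/{person_id}", headers=headers, timeout=10)
    resp.raise_for_status()
    if not resp.json().get("enrolled"):
        return {"referred": False, "reason": "not enrolled in Medicaid"}

    # Cross-agency step: pass the minimum data needed to the housing agency's API.
    referral = requests.post(
        HOUSING_API,
        json={"person_id": person_id, "source_program": "medicaid"},
        headers=headers,
        timeout=10,
    )
    referral.raise_for_status()
    return {"referred": True, "referral_id": referral.json().get("id")}
```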

Getting the API program off the ground was not just about solving the technical problems. When communicating with technology personnel across agencies, Gottsegen said it was important to convey that the API integration platform is about better serving the residents of Colorado.

Learning from contractors

IT did not go it alone. Gottsegen said the state worked with a variety of contractors to speed up its API development process. This included working with MuleSoft to roll out a more expansive API management tier. IT also hired some contractors with integration expertise to kick-start the project. But he added that it was important to ensure the knowledge involved in building the APIs is retained after a contract ends.

“We want our teams to sit next to those contractors to ensure the knowledge of those contractors gets internalized. There have been many cases where state workers did not know how to maintain something after the contractor has left,” he said.

Good metrics, communication critical to API integration success

Before Gottsegen’s team launched a formal API integration program, no one was tracking how long it took agencies to set up a working data integration process. Anecdotal examples of problems would emerge, including stories of agencies that spent over a year negotiating how to set up a data exchange.

The team now has formal metrics to track time to implementation, but the lack of past metrics precludes a precise measurement of how much the new API integration platform has sped up data exchange.

In any case, expediting the data exchange process is not just about having a more all-encompassing integration tier, Gottsegen stressed. Better communication between departments is also needed.

As it rolls out the API integration platform, IT is working with the agencies to identify any compliance issues and find a framework to address them.

Centralizing security

Each agency oversees its own data collection and determines where it can be used, Gottsegen said. There are also various privacy regulations to conform to, including HIPAA and IRS 1075.

“One of the reasons we pursued MuleSoft was so we could demonstrate auditable and consistent security and governance of the data,” he said.

Navigating the distinctions between privacy and security is a big challenge, he said. Each agency is responsible for ensuring restrictions on how its data is used; it is not a task assigned to a centralized government group because the agency is the expert on the privacy regulations that apply to its data. At the same time, Gottsegen’s group can build better security into the API integration mechanisms used to exchange data between agencies.

To provide API integration security, Gottsegen created a DevOps toolchain run by a statewide center for enablement. This included a set of vetted tools and practices that agencies could adopt to speed the development of new integrations (dev) and the process of pushing them into production (ops) safely.
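
The article does not detail the center for enablement's toolchain, but one common vetted practice is an automated gate that inspects an API specification before it can be promoted to production. The sketch below is a hypothetical example of such a check, not Colorado's actual tooling; the policy rules and spec layout are assumptions.

```python
import sys
import yaml  # assumes PyYAML is installed

REQUIRED_SECURITY_SCHEMES = {"oauth2", "openIdConnect"}

def check_spec(path: str) -> list:
    """Return a list of policy violations found in an OpenAPI spec."""
    with open(path) as f:
        spec = yaml.safe_load(f)

    violations = []

    # Hypothetical rule: every API promoted to production must declare an approved auth scheme.
    schemes = (spec.get("components", {}).get("securitySchemes", {}) or {}).values()
    if not any(s.get("type") in REQUIRED_SECURITY_SCHEMES for s in schemes):
        violations.append("no approved security scheme declared")

    # Hypothetical rule: every operation must be tagged with an owning agency for auditability.
    for route, methods in (spec.get("paths", {}) or {}).items():
        for method, op in methods.items():
            if isinstance(op, dict) and not op.get("tags"):
                violations.append(f"{method.upper()} {route}: missing owning-agency tag")

    return violations

if __name__ == "__main__":
    problems = check_spec(sys.argv[1])
    for p in problems:
        print("POLICY VIOLATION:", p)
    sys.exit(1 if problems else 0)  # a non-zero exit blocks the deployment stage
```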

Gottsegen said the group is developing practices to build capabilities that can be adopted across the state, but progress is uneven. He said the group has seen mixed results in getting buy-in from agencies.

Improving data quality across agencies

Gottsegen’s team has also launched a joint agency interoperability project for the integration of over 45 different data systems across the state. The aim is to build a sturdy data governance process across groups. The first question being addressed is data quality, in particular to ensure a consistent digital ID of citizens. “To be honest, I’m not sure we have a quality measure across the state,” Gottsegen said.

Gottsegen believes that data quality is not about being good or bad, but about fitness for use. It is not easy to articulate which data sets are appropriate for which uses across agencies.

“Data quality should be a partnership between agencies and IT,” he said. His team often gets requests to integrate data across agencies. The challenge is how to provide the tools to do that. The agencies need to be able to describe the idiosyncrasies of how they collect data in order to come up with a standard. Down the road, Gottsegen hopes machine learning will help improve this process.

Building trust with state IT leaders

A lot of state initiatives are driven from the top down. But, if workers don’t like a directive, they can often wait things out until a new government is elected. Gottsegen found that building trust among IT leaders across state agencies was key in growing the API program. “Trust is important — not just in technology changes, but in data sharing as well,” he said.

And face-to-face connections matter. In launching its API integration platform, he said, it was important for IT leaders across organizations to learn each other’s names and to meet in person, even when phone calls or video conferences might be more convenient.

As for the future, Gottsegen has a vision that all data sharing will eventually happen through API integrations. But getting there is a long process. “That might be 10 years out — if it happens. We keep that goal in mind while working with our collaborators to build things out.”


Merger and acquisition activity is altering the BI landscape

From Qlik’s acquisition of Podium Data in July 2018 through Salesforce’s purchase of Tableau Software in June 2019, the last year in BI has been characterized by a wave of consolidation.


Capped by Salesforce’s $15.7 billion acquisition of Tableau Software on June 10, 2019, and Google’s $2.6 billion purchase of Looker just four days earlier, the BI market over the last year has been marked by a wave of merger and acquisition activity.

Qlik kicked off the surge with its acquisition of Podium Data in July 2018, and, subsequently, made two more purchases.

“To survive, you’re going to have to reach some kind of scale,” said Rick Sherman, founder and managing partner of Athena IT Solutions, in a SearchBusinessAnalytics story in July 2019. “Small vendors are going to be bought or merge with more focused niche companies to build a more complete product.”

It was a little more than a decade ago that a similar wave of merger and acquisition activity reshaped the BI landscape, highlighted by IBM buying Cognos and SAP acquiring Business Objects.

After the spring flurry of deals that ended with the premium Salesforce paid for Tableau, the pace of merger and acquisition activity has slowed since the start of the summer. More deals could be coming soon, though, as vendors with a specialized focus seek partners with complementary capabilities in an attempt to keep pace with competitors that have already filled out their analytics stacks.


Data ethics issues create minefields for analytics teams

GRANTS PASS, Ore. — AI technologies and other advanced analytics tools make it easier for data analysts to uncover potentially valuable information on customers, patients and other people. But, too often, consultant Donald Farmer said, organizations don’t ask themselves a basic ethical question before launching an analytics project: Should we?

In the age of GDPR and like-minded privacy laws, though, ignoring data ethics isn’t a good business practice for companies, Farmer warned in a roundtable discussion he led at the 2019 Pacific Northwest BI & Analytics Summit. IT and analytics teams need to be guided by a framework of ethics rules and motivated by management to put those rules into practice, he said.

Otherwise, a company runs the risk of crossing the line in mining and using personal data — and, typically, not as the result of a nefarious plan to do so, according to Farmer, principal of analytics consultancy TreeHive Strategy in Woodinville, Wash. “It’s not that most people are devious — they’re just led blindly into things,” he said, adding that analytics applications often have “unforeseen consequences.”

For example, he noted that smart TVs connected to home networks can monitor whether people watch the ads in shows they’ve recorded and then go to an advertiser’s website. But acting on that information for marketing purposes might strike some prospective customers as creepy, he said.

Shawn Rogers, senior director of analytic strategy and communications-related functions at vendor Tibco Software Inc., pointed to a trial program that retailer Nordstrom launched in 2012 to track the movements of shoppers in its stores via the Wi-Fi signals from their cell phones. Customers complained about the practice after Nordstrom disclosed what it was doing, and the company stopped the tracking in 2013.

“I think transparency, permission and context are important in this area,” Rogers said during the session on data ethics at the summit, an annual event that brings together a small group of consultants and vendor executives to discuss BI, analytics and data management trends.

AI algorithms add new ethical questions

Being transparent about the use of analytics data is further complicated now by the growing adoption of AI tools and machine learning algorithms, Farmer and other participants said. Increasingly, companies are augmenting — or replacing — human involvement in the analytics process with “algorithmic engagement,” as Farmer put it. But automated algorithms are often a black box to users.

Mike Ferguson, managing director of U.K.-based consulting firm Intelligent Business Strategies Ltd., said the legal department at a financial services company he works with killed a project aimed at automating the loan approval process because the data scientists who developed the deep learning models to do the analytics couldn’t fully explain how the models worked.


And that isn’t an isolated incident in Ferguson’s experience. “There’s a loggerheads battle going on now in organizations between the legal and data science teams,” he said, adding that the specter of hefty fines for GDPR violations is spurring corporate lawyers to vet analytics applications more closely. As a result, data scientists are focusing more on explainable AI to try to justify the use of algorithms, he said.

The increased vetting is driven more by legal concerns than data ethics issues per se, Ferguson said in an interview after the session. But he thinks that the two are intertwined and that the ability of analytics teams to get unfettered access to data sets is increasingly in question for both legal and ethical reasons.

“It’s pretty clear that legal is throwing their weight around on data governance,” he said. “We’ve gone from a bottom-up approach of everybody grabbing data and doing something with it to more of a top-down approach.”

Jill Dyché, an independent consultant who’s based in Los Angeles, said she expects explainable AI to become “less of an option and more of a mandate” in organizations over the next 12 months.

Code of ethics not enough on data analytics

Staying on the right side of the data ethics line takes more than publishing a corporate code of ethics for employees to follow, Farmer said. He cited Enron’s 64-page ethics code, which didn’t stop the energy company from engaging in the infamous accounting fraud scheme that led to bankruptcy and the sale of its assets. Similarly, he sees such codes having little effect in preventing ethical missteps on analytics.

“Just having a code of ethics does absolutely nothing,” Farmer said. “It might even get in the way of good ethical practices, because people just point to it [and say], ‘We’ve got that covered.'”

Instead, he recommended that IT and analytics managers take a rules-based approach to data ethics that can be applied to all three phases of analytics projects: the upfront research process, design and development of analytics applications, and deployment and use of the applications.


Naveego launches tool for analyzing data quality and health

Naveego has launched Accelerator, a tool that analyzes data accuracy and checks the health of multiple data sources.

Naveego Accelerator checks data health by auto-profiling data sources and comparing them across systems. It calculates the percentage of data with consistency errors that would affect a business’s operations and profitability.

The tool then delivers results and data health metrics to analysts within minutes, according to the vendor. Users can also have Accelerator set up data quality checks to investigate issues further.
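
Naveego has not published how Accelerator computes its metrics; the following is only a conceptual sketch of a cross-system comparison that joins records from two systems on a shared key and reports the percentage that disagree. The pandas-based approach and field names are assumptions.

```python
import pandas as pd

def consistency_error_rate(left: pd.DataFrame, right: pd.DataFrame,
                           key: str, fields: list) -> float:
    """Percentage of shared records whose values differ between two systems."""
    merged = left.merge(right, on=key, how="inner", suffixes=("_a", "_b"))
    if merged.empty:
        return 0.0
    mismatch = pd.Series(False, index=merged.index)
    for f in fields:
        mismatch = mismatch | (merged[f + "_a"] != merged[f + "_b"])
    return 100.0 * mismatch.mean()

# Example: compare customer records held in a CRM and a billing system.
crm = pd.DataFrame({"customer_id": [1, 2, 3],
                    "email": ["a@x.com", "b@x.com", "c@x.com"]})
billing = pd.DataFrame({"customer_id": [1, 2, 3],
                        "email": ["a@x.com", "b@y.com", "c@x.com"]})
print(consistency_error_rate(crm, billing, "customer_id", ["email"]))  # ~33.3
```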

Data cleansing has long been an important part of data management for businesses. The process fixes or removes data that is wrong, incomplete, formatted incorrectly or duplicated. Data-heavy industries, such as banking, transportation or retail, can use data cleansing to examine data for issues by using rules, algorithms and lookup tables.

Naveego’s flagship product is the Complete Data Accuracy Platform, which aims to prevent issues stemming from inaccurate data. It is a hybrid, multi-cloud platform that manages and detects data accuracy issues.

Naveego has also expanded its Partner Success Program, partnering with Frontblade Systems, H2 Integrated Solutions, Mondelio and Narwal. The Partner Success Program provides a support package for partners that includes sales personnel, technical training and expertise, and marketing and promotional support.

As an emerging vendor in the data quality software market, Naveego must compete with market giants such as Informatica and IBM.

Informatica offers a portfolio of products designed for data quality assurance, including Axon Data Governance, Informatica Data Quality, Cloud Data Quality, Big Data Quality, Enterprise Data Catalog and Data as a Service. Informatica Data Quality ensures data is clean and ready to use, and it supports Microsoft Azure and Amazon Web Services.

IBM offers a handful of data quality products, as well, including InfoSphere Information Server for Data Quality, InfoSphere QualityStage, BigQuality and InfoSphere Information Analyzer. These products work to cleanse data, monitor data quality and provide data profiling and analysis to evaluate data for consistency and quality.


Adobe Experience Platform adds features for data scientists

After almost a year in beta, Adobe has introduced Query Service and Data Science Workspace to the Adobe Experience Platform to enable brands to deliver tailored digital experiences to their customers, with real-time data analytics and understanding of customer behavior.

Powered by Adobe Sensei, the vendor’s AI and machine learning technology, Query Service and Data Science Workspace intend to automate tedious, manual processes and enable real-time data personalization for large organizations.

The Adobe Experience Platform — previously the Adobe Cloud Platform — is an open platform for customer experience management that breaks down silos and synthesizes customer data into one unified customer profile.

According to Adobe, the volume of data organizations must manage has exploded. IDC predicted the Global DataSphere will grow from 33 zettabytes in 2018 to 175 zettabytes by 2025. And while more data can mean more insight, the sheer volume makes it difficult for businesses and analysts to sort, digest and analyze all of it to find answers. Query Service intends to simplify this process, according to the vendor.

Query Service enables analysts and data scientists to perform queries across all data sets in the platform instead of manually combing through siloed data sets to find answers for data-related questions. Query Service supports cross-channel and cross-platform queries, including behavioral, point-of-sale and customer relationship management data. Query Service enables users to do the following:

  • run queries manually with interactive jobs or automatically with batch jobs;
  • subgroup records based on time and generate session numbers and page numbers;
  • use tools that support complex joins, nested queries, window functions and time-partitioned queries;
  • break down data to evaluate key customer events; and
  • view and understand how customers flow across all channels.
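
To picture the kind of cross-channel query this enables, here is a minimal sketch that runs SQL through Query Service's PostgreSQL-compatible interface. The connection details, table and column names below are placeholders, not Adobe's actual schema.

```python
import psycopg2  # Query Service is typically reached over a PostgreSQL-compatible connection

QUERY = """
    SELECT channel,
           COUNT(*) AS checkout_events
    FROM   experience_events              -- hypothetical cross-channel event table
    WHERE  event_type = 'commerce.checkout'
      AND  event_date >= '2019-07-01'     -- placeholder date filter
    GROUP  BY channel
    ORDER  BY checkout_events DESC
"""

conn = psycopg2.connect(host="example.platform-query.adobe.io",  # placeholder connection details
                        port=5432, dbname="all",
                        user="user@AdobeOrg", password="access-token")
with conn, conn.cursor() as cur:
    cur.execute(QUERY)
    for channel, events in cur.fetchall():
        print(channel, events)
```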

While Query Service simplifies the data identification process, Data Science Workspace helps to digest data and enables data scientists to draw insights and take action. Using Adobe Sensei’s AI technology, Data Science Workspace automates repetitive tasks and helps teams understand customer data and predict customer behavior to provide real-time intelligence.

Also within Data Science Workspace, users can take advantage of tools to develop, train and tune machine learning models to solve business challenges, such as calculating customer predisposition to buy certain products. Data scientists can also develop custom models to pull particular insights and predictions to personalize customer experiences across all touchpoints.

Additional capabilities of Data Science Workspace enable users to perform the following tasks:

  • explore all data stored in Adobe Experience Platform, as well as machine learning and deep learning libraries like Spark ML and TensorFlow;
  • use prebuilt or custom machine learning recipes for common business needs;
  • experiment with recipes to create and train unlimited tracked instances;
  • publish intelligent service recipes to Adobe I/O without IT involvement; and
  • continuously evaluate intelligent service accuracy and retrain recipes as needed.

Adobe data analytics features Query Service and Data Science Workspace were first introduced as part of the Adobe Experience Platform in beta in September 2018. Adobe intends these tools to improve how data scientists handle data on the Adobe Experience Platform and to create meaningful models that developers can build on.


Get to know data storage containers and their terminology

Data storage containers have become a popular way to create and package applications for better portability and simplicity. Seen by some analysts as the technology to unseat virtual machines, containers have steadily gained more attention as of late, from customers and vendors alike.

Why choose containers and containerization over the alternatives? Containers work on bare-metal systems, cloud instances and VMs, and across Linux and select Windows and Mac OSes. Containers typically use fewer resources than VMs and can bind together application libraries and dependencies into one convenient, deployable unit.

Below, you’ll find key terms about containers, from technical details to specific products on the market. If you’re looking to invest in containerization, you’ll need to know these terms and concepts.

Getting technical

Containerization. With its roots in partitioning, containerization is an efficient virtualization strategy that isolates applications, enabling multiple containers to run on one machine while sharing the same OS. Containers run independent processes in a shared user space and are capable of running on different environments, which makes them a flexible alternative to virtual machines.

The benefits of containerization include reduced hardware overhead and portability, while concerns include the security of data stored on containers. Because all of the containers run under one OS, a vulnerability in the shared kernel puts every container on the host at risk.

Container management software. As the name indicates, container management software is used to simplify, organize and manage containers. Container management software automates container creation, destruction, deployment and scaling and is particularly helpful in situations with large numbers of containers on one OS. However, the orchestration aspect of management software is complex and setup can be difficult.

Products in this area include Kubernetes, an open source container orchestration software; Apache Mesos, an open source project that manages compute clusters; and Docker Swarm, a container cluster management tool.

Persistent storage. In order to be persistent, a storage device must retain data after being shut off. While persistence is essentially a given when it comes to modern storage, the rise of containerization has brought persistent storage back to the forefront.

Containers did not always support persistent storage, which meant that data created with a containerized app would disappear when the container was destroyed. Luckily, storage vendors have made enough advances in container technology to solve this issue and retain data created on containers.
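
As a concrete illustration of persistent container storage, the sketch below uses Docker's Python SDK to mount a named volume, so a file written by one container is still readable by a new container after the first one is removed. The image, volume and path names are arbitrary examples.

```python
import docker

client = docker.from_env()

# A named volume lives outside any single container's writable layer.
client.volumes.create(name="app-data")

# Write a file into the volume from a short-lived container...
client.containers.run(
    "alpine:3.19",
    command=["sh", "-c", "echo 'order #1001' > /data/orders.txt"],
    volumes={"app-data": {"bind": "/data", "mode": "rw"}},
    remove=True,
)

# ...then read it back from a brand-new container: the data persisted.
output = client.containers.run(
    "alpine:3.19",
    command=["cat", "/data/orders.txt"],
    volumes={"app-data": {"bind": "/data", "mode": "rw"}},
    remove=True,
)
print(output.decode())  # -> order #1001
```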

Stateful app. A stateful app saves client data from the activities of one session for use in the next session. Most applications and OSes are stateful, but because stateful apps didn’t scale well in early cloud architectures, developers began to build more stateless apps.

With a stateless app, each session is carried out as if it was the first time, and responses aren’t dependent upon data from a previous session. Stateless apps are better suited to cloud computing, in that they can be more easily redeployed in the event of a failure and scaled out to accommodate changes.

However, containerization allows files to be pulled into the container during startup and persisted elsewhere when containers stop and start. This negates the issue of stateful apps becoming unstable when introduced to a stateless cloud environment.

Container vendors and products

While there is one vendor undoubtedly ahead of the pack when it comes to modern data storage containers, the field has opened up to include some big names. Below, we cover just a few of the vendors and products in the container space.

Docker. Probably the name most synonymous with data storage containers, Docker is even credited with bringing about the container renaissance in the IT space. Docker’s platform is open source, which enables users to register and share containers over various hosts in both private and public environments. In recent years, Docker has made containers accessible and now offers various editions of its containerization technology.

When you refer to Docker, you likely mean either the company itself, Docker Inc., or the Docker Engine. Initially developed for Linux systems, the Docker Engine was later extended to operate natively on both Windows and Apple OSes. The Docker Engine supports tasks and workflows involved in building, shipping and running container-based applications.

Container Linux. Originally referred to as CoreOS Linux, Container Linux by CoreOS is an open source OS that deploys and manages the applications within containers. Container Linux is based on the Linux kernel and is designed for massive scale and minimal overhead. Although Container Linux is open source, CoreOS sells support for the OS. Acquired by Red Hat in 2018, CoreOS develops open source tools and components.

Azure Container Instances (ACI). With ACI, developers can deploy data storage containers on the Microsoft Azure cloud. Organizations can spin up a new container via the Azure portal or command-line interface, and Microsoft automatically provisions and scales the underlying compute resources. ACI also supports standard Docker images and Linux and Windows containers.

Microsoft Windows containers. Windows containers are abstracted and portable operating environments supported by the Microsoft Windows Server 2016 OS. They can be managed with Docker and PowerShell and support established Windows technologies. Along with Windows Containers, Windows Server 2016 also supports Hyper-V containers.

VMware vSphere Integrated Containers (VIC). While VIC can refer to individual container instances, it is also a platform that deploys and manages containers within VMs from within VMware’s vSphere VM management software. Previewed under the name Project Bonneville, VMware’s play on containers comes with the virtual container host, which represents tools and hardware resources that create and control container services.


Discover how Microsoft 365 can help health providers adapt in an era of patient data protection and sharing

For years, patient data management meant one thing—secure the data. Now, healthcare leaders must protect and openly share the data with patients and with other healthcare organizations to support quality of care, patient safety, and cost reduction. As data flows more freely, following the patient, there’s less risk of redundant testing that increases cost and waste. Legacy infrastructure and cybersecurity concerns stand on the critical path to greater interoperability and patient record portability. Learn how Microsoft 365 can help.

Impact of regulatory changes and market forces

Regulatory changes are a big driver for this shift. Through regulations like the 21st Century Cures Act in the United States, healthcare organizations are required to improve their capabilities to protect and share patient data. The General Data Protection Regulation (GDPR) in the European Union expands the rights of data subjects over their data. Failing to share patient data in an effective, timely, and secure manner can result in significant penalties for providers and for healthcare payors.

Market forces are another driver of this shift as consumers’ expectations of omni-channel service and access spill over to healthcare. This augurs well for making the patient more central to data flows.

There are unintended consequences, however. The increasing need to openly share data creates new opportunities for hackers to explore, and new risks for health organizations to manage.

It’s more important than ever to have a data governance and proactive cybersecurity strategy that enables free data flow with an optimal security posture. In fact, government regulators will penalize healthcare organizations for non-compliance—and so will the marketplace.

How Microsoft 365 can prepare your organization for the journey ahead

Modernizing legacy systems and processes is a daunting, expensive task. Navigating a digitized but siloed information system is costly, impedes clinician workflow, and complicates patient safety goals.

To this end, Microsoft Teams enables the integration of electronic health record information and other health data, allowing care teams to communicate and collaborate about patient care in real-time. Leading interoperability partners continue to build the ability to integrate electronic health records into Teams through a FHIR interface. With Teams, clinical workers can securely access patient information, chat with other team members, and even have modern meeting experiences, all without having to switch between apps.

Incomplete data and documentation are among the biggest sources of provider and patient dissatisfaction. Clinicians value the ability to communicate with each other securely and swiftly to deliver the best informed care at point of care.

Teams now offers new secure messaging capabilities, including priority notifications and message delegation, as well as a smart camera with image annotation and secure sharing, so images stay in Teams and aren’t stored to the clinician’s device image gallery.

Image of phone screens showing priority notifications and message delegation.

What about cybersecurity and patient data? As legacy infrastructure gives way to more seamless data flow, it’s important to protect against a favorite tactic of cyber criminals—phishing.

Phishing emails—weaponized emails that appear to come from a reputable source or person—are increasingly difficult to detect. As regulatory pressure mounts within healthcare organizations to not “block” access to data, the risk of falling for such phishing attacks is expected to increase. To help mitigate this trend, Office 365 Advanced Threat Protection (ATP) has a cloud-based email filtering service with sophisticated anti-phishing capabilities.

For example, Office 365 ATP provides real-time detonation capabilities to find and block unknown threats, including malicious links and attachments. Links in email are continuously evaluated for user safety. Similarly, any attachments in email are tested for malware and unsafe attachments are removed.

Image of a message appearing on a tablet screen showing a website that has been classified as malicious.

For data to flow freely, it’s important to apply the right governance and protection to sensitive data. And that is premised on appropriate data classification. Microsoft 365 helps organizations find and classify sensitive data across a variety of locations, including devices, apps, and cloud services with Microsoft Information Protection. Administrators need to know that sensitive data is accessed by authorized personnel only. Microsoft 365, through Azure Active Directory (Azure AD), enables capabilities like Multi-Factor Authentication (MFA) and conditional access policies to minimize the risk of unauthorized access to sensitive patient information.

For example, if a user or device sign-in is tagged as high-risk, Azure AD can automatically enforce conditional access policies that can limit or block access or require the user to re-authenticate via MFA. Benefitting from the integrated signals of the Microsoft Intelligent Security Graph, Microsoft 365 solutions look holistically at the user sign-in behavior over time to assess risk and investigate anomalies where needed.
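
Azure AD evaluates these policies itself; the snippet below is only a conceptual sketch of the decision flow described above (risk signal in; allow, step-up or block out), not Microsoft's implementation or API.

```python
from dataclasses import dataclass

@dataclass
class SignIn:
    user: str
    risk_level: str        # "low", "medium" or "high", as scored by upstream signals
    mfa_completed: bool

def conditional_access_decision(signin: SignIn) -> str:
    """Toy policy: block high-risk sign-ins, require MFA for medium risk."""
    if signin.risk_level == "high":
        return "block"                 # deny access outright
    if signin.risk_level == "medium" and not signin.mfa_completed:
        return "require_mfa"           # step-up authentication before granting access
    return "allow"

print(conditional_access_decision(SignIn("clinician@contoso.org", "medium", False)))
# -> require_mfa
```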

When faced with the prospect of internal leaks, Supervision in Microsoft 365 can help organizations monitor employees’ communications channels to manage compliance and reduce reputational risk from policy violations. As patient data is shared, tracking its flow is essential. Audit logs and alerts in Microsoft 365 include several auditing and reporting features that customers can use to track certain activities, such as changes made to documents and other items.

Finally, as you conform with data governance regulatory obligations and audits, Microsoft 365 can assist you in responding to regulators. Advanced eDiscovery and Data Subject Requests (DSRs) capabilities offer the agility and efficiency you need when going through an audit, helping you find relevant patient data or respond to patient information requests.

Using the retention policies of Advanced Data Governance, you can retain core business records in unalterable, compliant formats. With records management capabilities, your core business records can be properly declared and stored with full audit visibility to meet regulatory obligations.

Learn more

Healthcare leaders must adapt quickly to market and regulatory expectations regarding data flows. Clinical and operations leaders depend on data flowing freely to make data-driven business and clinical decisions, to understand patterns in patient care and to constantly improve patient safety, quality of care, and cost management.

Microsoft 365 helps improve workflows through the integration power of Teams, moving the right data to the right place at the right time. Microsoft 365 also helps your security and compliance posture through advanced capabilities that help you manage and protect identity, data, and devices.

Microsoft 365 is the right cloud platform for you in this new era of patient data protection—and data sharing. Check out the Microsoft 365 for health page to learn more about how Microsoft 365 and Teams can empower your healthcare professionals in a modern workplace.

Author: Microsoft News Center

Social determinants of health data provide better care

Social determinants of health data can help healthcare organizations deliver better patient care, but the challenge of knowing exactly how to use the data persists.

The healthcare community has long recognized the importance of a patient’s social and economic data, said Josh Schoeller, senior vice president and general manager of LexisNexis Health Care at LexisNexis Risk Solutions. The current shift to value-based care models, which are ruled by quality rather than quantity of care, has put a spotlight on this kind of data, according to Schoeller.

But social determinants of health also pose a challenge to healthcare organizations. Figuring out how to use the data in meaningful ways can be daunting, as healthcare organizations are already overwhelmed by loads of data.

A new framework, released last month by the not-for-profit eHealth Initiative Foundation, could help. The framework was developed by stakeholders, including LexisNexis Health Care, to give healthcare organizations guidance on how to use social determinants of health data ethically and securely.

Here’s a closer look at the framework.

Use cases for social determinants of health data

The push to include social determinants of health data into the care process is “imperative,” according to eHealth Initiative’s framework. Doing so can uncover potential risk factors, as well as gaps in care.

The eHealth Initiative’s framework outlines five guiding principles for using social determinants of health data. 

  1. Coordinating care

Determine whether a patient has access to transportation or is food insecure, according to the document. The data can also help a healthcare organization coordinate with community health workers and other organizations to craft individualized care plans.

  2. Using analytics to uncover health and wellness risks

Use social determinants of health data to predict a patient’s future health outcomes. Analyzing social and economic data can help the provider know if an individual is at increased risk of a negative health outcome, such as hospital readmission. The risk score can be used to coordinate a plan of action (a minimal scoring sketch follows this list).

  3. Mapping community resources and identifying gaps

Use social determinants of health data to determine what local community resources exist to serve the patient populations, as well as what resources are lacking.

  4. Assessing service and impact

Monitor care plans or other actions taken using social determinants of health data and how it correlates to health outcomes. Tracking results can help an organization adjust interventions, if necessary.

  5. Customizing health services and interventions

Inform patients about how social determinants of health data are being used. Healthcare organizations can educate patients on available resources and agree on next steps to take.
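
As a toy illustration of the second principle above, the sketch below combines a few social and economic indicators into a readmission risk score. The features, weights and threshold are invented for illustration only; a real model would be trained and validated on clinical data.

```python
def readmission_risk_score(patient: dict) -> float:
    """Toy weighted score from social determinants; higher means higher risk."""
    weights = {
        "lacks_transportation": 0.30,   # invented weights, not clinically validated
        "food_insecure": 0.25,
        "lives_alone": 0.15,
        "unstable_housing": 0.30,
    }
    return sum(w for k, w in weights.items() if patient.get(k))

patient = {"lacks_transportation": True, "food_insecure": True, "lives_alone": False}
score = readmission_risk_score(patient)
if score >= 0.5:                        # arbitrary threshold for this example
    print(f"High risk ({score:.2f}): flag for care coordination outreach")
else:
    print(f"Lower risk ({score:.2f})")
```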

Getting started: A how-to for healthcare organizations

The eHealth Initiative is not alone in its attempt to move the social determinants of health data needle.

Niki Buchanan, general manager of population health at Philips Healthcare, has some advice of her own.

  1. Lean on the community health assessment

Buchanan said most healthcare organizations conduct a community health assessment internally, which provides data such as demographics and transportation needs, and identifies at-risk patients. Having that data available and knowing whether patients are willing or able to take advantage of community resources outside of the doctor’s office is critical, she said.


  2. Connect the community resource dots

Buchanan said a healthcare organization should be aware of what community resources are available to them, whether it’s a community driving service or a local church outreach program. The organization should also assess at what level it is willing to partner with outside resources to care for patients.

“Are you willing to partner with the Ubers of the world, the Lyfts of the world, to pick up patients proactively and make sure they make it to their appointment on time and get them home?” she said. “Are you able to work within the local chamber of commerce to make sure that any time there’s a food market or a fresh produce kind of event within the community, can you make sure the patients you serve have access?”

  3. Start simple

Buchanan said healthcare organizations should approach social determinants of health data with the patient in mind. She recommended healthcare organizations start small with focused groups of patients, such as diabetics or those with other chronic conditions, but that they also ensure the investment is a worthwhile one.

“Look for things that meet not only your own internal ROI in caring for your patients, but that also add value and patient engagement opportunities to those you’re trying to serve in a more proactive way,” she said.


Enterprises that use data will thrive; those that don’t, won’t

There’s a growing chasm between enterprises that use data, and those that don’t.

Wayne Eckerson, founder and principal consultant of Eckerson Group, calls it the data divide, and according to Eckerson, the companies that will thrive in the future are the ones that are already embracing business intelligence no matter the industry. They’re taking human bias out of the equation and replacing it with automated decision-making based on data and analytics.

Those that are data laggards, meanwhile, are already in a troublesome spot, and those that have not embraced analytics as part of their business model at all are simply outdated.

Eckerson has more than 25 years of experience in the BI industry and is the author of two books — Secrets of Analytical Leaders: Insights from Information Insiders and Performance Dashboards: Measuring, Monitoring, and Managing Your Business.  

In the first part of a two-part Q&A, Eckerson discusses the divide between enterprises that use data and those that don’t, as well as the importance of DataOps and data strategies and how they play into the data divide. In the second part, he talks about self-service analytics, the driving force behind the recent merger and acquisition deals, and what intrigues him about the future of BI.

How stark is the data divide, the gap between enterprises that use data and those that don’t?

Wayne Eckerson: It’s pretty stark. You’ve got data laggards on one side of that divide, and that’s most of the companies out there today, and then you have the data elite, the companies [that] were born on data, they live on data, they test everything they do, they automate decisions using data and analytics — those are the companies [that] are going to take the future. Those are the companies like Google and Amazon, but also companies like Netflix and its spinoffs like Stitch Fix. They’re heavily using algorithms in their business. Humans are littered with cognitive biases that distort our perception of what’s going on out there and make it hard for us to make objective, rational, smart decisions. This data divide is a really interesting thing I’m starting to see happening that’s separating out the companies [that] are going to be competitive in the future. I think companies are really racing, spending money on data technologies, data management, data analytics, AI.

How does a DataOps strategy play into the data divide?

Headshot of Wayne Eckerson, founder and principal consultant of Eckerson Group.

Eckerson: That’s really going to be the key to the future for a lot of these data laggards who are continually spending huge amounts of resources putting out data fires — trying to fix data defects, broken jobs, these bottlenecks in development that often come from issues like uncoordinated infrastructure for data, for security. There are so many things that prevent BI teams from moving quickly and building things effectively for the business, and a lot of it is because we’re still handcrafting applications rather than industrializing them with very disciplined routines and practices. DataOps is what these companies need — first and foremost it’s looking at all the areas that are holding the flow of data back, prioritizing those and attacking those points.

What can a sound DataOps strategy do to help laggards catch up?

Eckerson: It’s improving data quality, not just at the first go-around when you build something but continuous testing to make sure that nothing is broken and users are using clean, validated data. And after that, once you’ve fixed the quality of data and the business becomes more confident that you can deliver things that make sense to them, then you can use DataOps to accelerate cycle times and build more things faster. This whole DataOps thing is a set of development practices and testing practices and deployment and operational practices all rolled into a mindset of continuous improvement that the team as a whole has to buy into and work on. There’s not a lot of companies doing it yet, but it has a lot of promise.
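
One concrete form of the continuous testing Eckerson describes is a small suite of data quality checks that runs after every pipeline load. The sketch below is a generic, hypothetical example; the table and column names are made up, and it is not a tool he endorses.

```python
import pandas as pd

def run_quality_checks(orders: pd.DataFrame) -> list:
    """Return a list of failed checks; an empty list means the data passed."""
    failures = []
    if orders["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if orders["customer_id"].isna().any():
        failures.append("orders missing customer_id")
    if (orders["amount"] < 0).any():
        failures.append("negative order amounts")
    return failures

# In a DataOps pipeline this would run after every load, failing the job on any error.
sample = pd.DataFrame({"order_id": [1, 2, 2],
                       "customer_id": ["a", None, "c"],
                       "amount": [10.0, 5.0, -3.0]})
for failure in run_quality_checks(sample):
    print("DATA QUALITY FAILURE:", failure)
```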

Data strategy differs for each company given its individual needs, but as BI evolves and becomes more widespread, more intuitive, more necessary no matter the size of the organization and no matter the industry, what will be some of the chief tenets of data strategy going forward?

Eckerson: Today, companies are racing to implement data strategies because they realize they’re … data laggard[s]. In order to not be disrupted in this whole data transformation era, they need a strategy. They need a roadmap and a blueprint for how to build a more robust infrastructure for leveraging data, for internal use, for use with customers and suppliers, and also to embed data and analytics into the products that they build and deliver. The data strategy is a desire to catch up and avoid being disrupted, and also as a way to modernize because there’s been a big leap in the technologies that have been deployed in this area — the web, the cloud, big data, big data in the cloud, and now AI and the ability to move from reactive reporting to proactive predictions and to be able to make recommendations to users and customers on the spot. This is a huge transformation that companies have to go through, and so many of them are starting at zero.

So it’s all about the architecture?

Eckerson: A fundamental part of the data strategy is the data architecture, and that’s what a lot of companies focus on. In fact, for some companies the data strategy is synonymous with the data architecture, but that’s a little shortsighted because there are lots of other elements to a data strategy that are equally important. Those include the organization — the people and how they work together to deliver data capabilities and analytic capabilities — and the culture, because you can build an elegant architecture, you can buy and deploy the most sophisticated tools. But if you don’t have a culture of analytics, if people don’t have a mindset of using data to make decisions, to weigh options to optimize processes, then it’s all for naught. It’s the people, it’s the processes, it’s the organization, it’s the culture, and then, yes, it’s the technology and the architecture too.

Editors’ note: This interview has been edited for clarity and conciseness.


Alluxio updates data orchestration platform, launches 2.0

Alluxio has launched Alluxio 2.0, a platform designed for data engineers who manage and deploy analytical and AI workloads in the cloud.

According to Alluxio, the 2.0 version was built particularly with hybrid and multi-cloud environments in mind, with the aim of providing data orchestration to bring data locality, accessibility and elasticity to compute.

Alluxio 2.0 Community Edition and Enterprise Edition provide a handful of new capabilities, including data orchestration for multi-cloud, compute-optimized data access for cloud analytics, AWS support and architectural foundations using open source.

Data orchestration for multi-cloud

There are three main components to the data orchestration capabilities of Alluxio 2.0: policy-driven data management, administration of data access policies and cross-cloud storage data movement using data service.

Policy-driven data management enables data engineers to automate data movement across different storage systems based on predefined policies. Users can also automate tiering of data across any environment or any number of storage systems. Alluxio claims this will reduce storage costs because the data platform teams will only manage the most important data in the expensive storage systems, while moving less important data to cheaper alternatives.
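
Alluxio's policy syntax isn't shown in the article; the sketch below is only a local-filesystem stand-in for the idea of policy-driven tiering, demoting files that haven't been read within a threshold from a hot store to a cheaper one. The paths and the 30-day rule are assumptions, not Alluxio's actual mechanism.

```python
import os
import shutil
import time

HOT_DIR = "/mnt/hot"        # e.g., an SSD-backed store (paths are illustrative)
COLD_DIR = "/mnt/cold"      # e.g., a cheaper object or archive store
MAX_AGE_DAYS = 30           # policy: demote data not read in the last 30 days

def apply_tiering_policy() -> None:
    """Demote files whose last access time exceeds the policy threshold."""
    cutoff = time.time() - MAX_AGE_DAYS * 24 * 3600
    for name in os.listdir(HOT_DIR):
        path = os.path.join(HOT_DIR, name)
        if os.path.isfile(path) and os.stat(path).st_atime < cutoff:
            shutil.move(path, os.path.join(COLD_DIR, name))
            print(f"demoted {name} to cold storage")

if __name__ == "__main__":
    apply_tiering_policy()   # in practice this would run on a schedule
```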

The administration of data access policies enables users to configure policies at any directory or folder level to streamline data access and workload performance. This includes defining behaviors for individual data sets for core functions, such as writing data or syncing it with Alluxio storage systems.

With cross-cloud storage data movement using data service, Alluxio claims users get highly efficient data movement across cloud stores, such as AWS S3 and Google Cloud services.

Compute-optimized data access for cloud analytics

The compute-optimized data access capabilities include two components: compute-focused cluster partitioning and integration with external data sources over REST.

Compute-focused cluster partitioning enables users to partition a single Alluxio cluster based on any dimension. This keeps data sets within each framework or workload from being contaminated by the others. Alluxio claims that this reduces data transfer costs and constrains data to stay within a specific region or zone.

Integration with external data sources over REST enables users to import data from web-based sources, which can then be aggregated in Alluxio to perform analytics. Users can also direct web locations with files to Alluxio to be pulled in as needed.
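
As a sketch of what pulling from a web-based source can look like, the snippet below downloads a file over REST and lands it on a mounted path where analytics jobs could then aggregate it. The URL and mount point are placeholders, and this is not Alluxio's own API.

```python
import requests

SOURCE_URL = "https://example.org/exports/daily_metrics.csv"   # placeholder REST endpoint
LANDING_PATH = "/mnt/alluxio/ingest/daily_metrics.csv"         # e.g., a POSIX-mounted path

def ingest_from_rest() -> None:
    """Fetch a file from a web source and land it where analytics jobs can read it."""
    resp = requests.get(SOURCE_URL, timeout=30)
    resp.raise_for_status()
    with open(LANDING_PATH, "wb") as f:
        f.write(resp.content)
    print(f"wrote {len(resp.content)} bytes to {LANDING_PATH}")

if __name__ == "__main__":
    ingest_from_rest()
```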

AWS support

The new suite provides Amazon Elastic MapReduce (EMR) service integration. According to Alluxio, Amazon EMR is frequently used during the process of moving to cloud services to deploy analytical and AI workloads. Alluxio is now available as a data layer within EMR for Spark, Presto and Hive frameworks.

Architectural foundations using open source

According to Alluxio, core foundational elements have been rebuilt using open source technologies. RocksDB is now used for tiering metadata of files and objects for data that Alluxio manages to enable hyperscale. Alluxio uses gRPC as the core transport protocol for communication with clusters, as well as between the client and master.

In addition to the main components, other new features include the following:

  • Alluxio Data Service: A distributed clustered service.
  • Adaptive replication: Configures a range for the number of copies of data stored in Alluxio that are automatically managed.
  • Embedded journal: A fault tolerance and high availability mode for file and object metadata that uses the RAFT consensus algorithm and is separate from other external storage systems.
  • Alluxio POSIX API: A Portable OS Interface-compatible API that enables frameworks such as Tensorflow, Caffe and other Python-based models to directly access data from any storage system through Alluxio using traditional access.

Alluxio 2.0 Community Edition and Enterprise Edition are both generally available now.
