Tag Archives: organizations

Consider these Office 365 alternatives to public folders

As more organizations consider a move from Exchange Server, public folders continue to vex many administrators for a variety of reasons.

Microsoft supports public folders in its latest Exchange Server 2019 as well as Exchange Online, but it is pushing companies to adopt some of its newer options, such as Office 365 Groups and Microsoft Teams. An organization pursuing alternatives to public folders will find there is no direct replacement for this Exchange feature, largely because of the nature of the cloud.

Microsoft set its intentions early on under Satya Nadella’s leadership with its “mobile first, cloud first” initiative back in 2014. Microsoft aggressively expanded its cloud suite with new services and features. This fast pace meant that migrations to cloud services, such as Office 365, offered a different experience depending on their timing: an organization that moved to Office 365 early might see a different feature set than one that waited several months. This was the case for migrating public folders from on-premises Exchange Server to Exchange Online, which evolved over time and also coincided with the introduction of Microsoft Teams, Skype for Business and Office 365 Groups.

The following breakdown of how organizations use public folders can help Exchange administrators with their planning when moving to the new cloud model on Office 365.

Organizations that use public folders for email only

Public folders are a great place to store email that multiple people within an organization need to access. For example, an accounting department can use public folders to let department members use Outlook to access the accounting public folders and corresponding email content.

Office 365 offers similar functionality to public folders through its shared mailbox feature in Exchange Online. A shared mailbox stores email in folders that multiple users can access.

A shared mailbox has a few advantages over a public folder, the primary one being accessibility through the Outlook mobile app or Outlook on the web. This allows users to connect from their smartphones or a standard browser to review email going to the shared mailbox. This differs from public folder access, which requires opening the full Outlook desktop client.
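For administrators who want to experiment with this approach, the sketch below shows one minimal way to create a shared mailbox and grant a user access with Exchange Online PowerShell. The mailbox name, alias and user address are placeholders, not values from the article, and your organization's conventions will differ.

    # Connect to Exchange Online (requires the ExchangeOnlineManagement module)
    Connect-ExchangeOnline

    # Create a shared mailbox for a department; the names here are examples only
    New-Mailbox -Shared -Name "Accounting" -DisplayName "Accounting Department" -Alias accounting

    # Give a user full access to the mailbox contents
    Add-MailboxPermission -Identity "Accounting" -User jane@contoso.com -AccessRights FullAccess -InheritanceType All

    # Allow the same user to send mail as the shared mailbox
    Add-RecipientPermission -Identity "Accounting" -Trustee jane@contoso.com -AccessRights SendAs -Confirm:$false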

Organizations that use public folders for email and calendars

For organizations that rely on both email and calendars in their public folders, Microsoft has another cloud alternative that comes with a few extra perks.

Office 365 Groups not only lets users collaborate on email and calendars, but also stores files in a shared OneDrive for Business page, tasks in Planner and notes in OneNote. Office 365 Groups is another option for email and calendars made available on any device. Office 365 Groups owners manage their own permissions and membership to lift some of the burden of security administration from the IT department.
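As a rough sketch of what that self-service model looks like from Exchange Online PowerShell, the commands below create a private Office 365 Group and assign members and an owner. The group name, alias and addresses are illustrative placeholders, not part of Microsoft's migration guidance.

    # Create a private Office 365 Group (display name and alias are examples)
    New-UnifiedGroup -DisplayName "Accounting Team" -Alias accountingteam -AccessType Private

    # Add members, then promote one of them to owner; owners manage membership from here on
    Add-UnifiedGroupLinks -Identity accountingteam -LinkType Members -Links jane@contoso.com,raj@contoso.com
    Add-UnifiedGroupLinks -Identity accountingteam -LinkType Owners -Links jane@contoso.com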

Microsoft provides migration scripts to assist with the move of content from public folders to Office 365 Groups.

Organizations that use public folders for data archiving

Some organizations that prefer to stay with a known quantity and keep the same user experience also have the choice to keep using public folders in Exchange Online.

The reasons for this preference will vary, but the most likely scenario is a company that wants to keep email for archival purposes only. The migration from Exchange on-premises public folders requires administrators to use Microsoft’s published migration scripts.

Organizations that use public folders as a project communication and data-sharing repository

The Exchange public folders feature is excellent for sharing email, contacts and calendar events. For teams working on projects, the platform shines as a way to centralize information that’s relevant to the specific project or department. But it’s not as expansive as other collaboration tools on Office 365.

Take a closer look at some of the other modern collaboration tools available in Office 365 in addition to Microsoft Teams and Office 365 Groups, such as Kaizala. These offerings extend the organization’s messaging abilities to include real-time chat, presence status and video conferencing.

What are the Azure Stack HCI deployment, management options?

There are several management approaches and deployment options for organizations interested in using the Azure Stack HCI product.

Azure Stack HCI is a hyper-converged infrastructure product, similar to other offerings in which each node holds processors, memory, storage and networking components. Third-party vendors sell the nodes that can scale should the organization need more resources. A purchase of Azure Stack HCI includes the hardware, Windows Server 2019 operating system, management tools, and service and support from the hardware vendor. At time of publication, Microsoft’s Azure Stack HCI catalog lists more than 150 offerings from 19 vendors.

Azure Stack HCI, not to be confused with Azure Stack, gives IT pros full administrator rights to manage the system.

Tailor the Azure Stack HCI options for different needs

The basic components of an Azure Stack HCI node might be the same, but an organization can customize them for different needs, such as better performance or lowest price. For example, a company that wants to deploy a node in a remote office/branch office might select Lenovo’s ThinkAgile MX Certified Node, or its SR650 model. The SR650 scales to two nodes that can be configured with a variety of processors offering up to 28 cores, up to 1.5 TB of memory, hard drive combinations providing up to 12 TB (or SSDs offering more than 3.8 TB), and networking with 10/25 GbE. Each node comes in a 2U physical form factor.

If the organization needs the node for more demanding workloads, one option is the Fujitsu Primeflex. Azure Stack HCI node models such as the all-SSD Fujitsu Primergy RX2540 M5 scale to 16 nodes. Each node can range from 16 to 56 processor cores, up to 3 TB of SSD storage and 25 GbE networking.

Management tools for Azure Stack HCI systems

Microsoft positions the Windows Admin Center (WAC) as the ideal GUI management tool for Azure Stack HCI, but other familiar utilities will work on the platform.

The Windows Admin Center is a relatively new browser-based tool for consolidated management of local and remote servers. The Windows Admin Center provides a wide array of management capabilities, such as managing Hyper-V VMs and virtual switches, along with failover and hyper-converged cluster management. While it is tailored for Windows Server 2019 — the server OS used for Azure Stack HCI — it fully supports Windows Server 2012/2012 R2 and Windows Server 2016, and offers some functionality for Windows Server 2008 R2.

Azure Stack HCI users can also use more established management tools such as System Center. The System Center suite components handle infrastructure provisioning, monitoring, automation, backup and IT service management. System Center Virtual Machine Manager provisions and manages the resources to create and deploy VMs, and handles private clouds. System Center Operations Manager monitors services, devices and operations throughout the infrastructure.

Other tools are also available, including PowerShell (both Windows PowerShell and the open source PowerShell Core), as well as third-party products such as 5nine Manager for Windows Server 2019 Hyper-V management, monitoring and capacity planning.
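To give a sense of what day-to-day PowerShell management of a hyper-converged cluster can look like, here is a minimal sketch using the built-in FailoverClusters, Hyper-V and Storage modules. The cluster name is a placeholder, and the exact cmdlets worth running will depend on the deployment.

    # Placeholder cluster name; run from a workstation with the failover clustering RSAT tools
    $cluster = "hci-cluster01"

    # Check node membership and state
    Get-ClusterNode -Cluster $cluster | Select-Object Name, State

    # List the virtual machines running on each node
    Get-ClusterNode -Cluster $cluster | ForEach-Object { Get-VM -ComputerName $_.Name }

    # Review Storage Spaces Direct virtual disk health over a CIM session
    $cim = New-CimSession -ComputerName $cluster
    Get-VirtualDisk -CimSession $cim | Select-Object FriendlyName, HealthStatus, OperationalStatus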

It’s important to check over each management tool to evaluate its compatibility with the Azure Stack HCI platform, as well as other components of the enterprise infrastructure.

What’s new with the Exchange hybrid configuration wizard?

Exchange continues to serve as the on-ramp into Office 365 for many organizations. One big reason is the hybrid capabilities that connect on-premises Exchange and Exchange Online.

If you use Exchange Server, it’s not difficult to join it to Exchange Online for a seamless transition into the cloud. Microsoft refined the Exchange hybrid configuration wizard to remove a lot of the technical hurdles to shift one of the more important IT workloads into Exchange Online. If you haven’t seen the Exchange hybrid experience recently, you may be surprised about some of the improvements over the last few years.

Exchange hybrid setups have come a long way

I started configuring Exchange hybrid deployments the first week Microsoft made Office 365 publicly available in June 2011 with the newest version of Exchange at the time, Exchange 2010. Setting up an Exchange hybrid deployment was a laborious task. Microsoft provided a 75-page document with the Exchange hybrid configuration steps, which would take about three workdays to complete. Then I could start the troubleshooting process to fix the innumerable typos I made during the setup.

In December 2011, Microsoft released Exchange 2010 Service Pack 2, which included the Exchange hybrid configuration wizard. The wizard reduced that 75-page document to a few screens of information that cut down the work from three days to about 15 minutes. The Exchange hybrid configuration wizard did not solve all the problems of an Exchange hybrid deployment, but it made things a lot easier.

What the Exchange hybrid configuration wizard does

The Exchange hybrid configuration wizard is just a PowerShell script that runs all the necessary configuration tasks. The original hybrid configuration wizard completed seven key tasks:

  1. verified prerequisites for a hybrid deployment;
  2. configured Exchange federation trust;
  3. configured relationships between on-premises Exchange and Exchange Online;
  4. configured email address policies;
  5. configured free/busy calendar sharing;
  6. configured secure mail flow between the on-premises and Exchange Online organizations; and
  7. enabled support for Exchange Online archiving.
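After the wizard finishes, an administrator can spot-check several of these outcomes from the on-premises Exchange Management Shell. The following is only a minimal sketch, not an exhaustive validation, and the relationship name and domain are placeholders.

    # Review the hybrid configuration object the wizard created
    Get-HybridConfiguration

    # Confirm the federation trust and the organization relationship with Exchange Online
    Get-FederationTrust | Format-List Name, TokenIssuerUri
    Get-OrganizationRelationship | Format-List Name, DomainNames, FreeBusyAccessEnabled

    # Exercise the relationship end to end (the relationship name the wizard creates may differ)
    Test-OrganizationRelationship -Identity "On-Premises to O365 - Contoso" -UserIdentity admin@contoso.com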

How the Exchange hybrid configuration wizard evolved

Since the initial release of the Exchange hybrid configuration wizard, Microsoft expanded its capabilities in multiple ways with several major improvements over the last few years.

Exchange hybrid configuration wizard decoupled from service pack updates: This may seem like a minor change, but it’s a significant development. Having the Exchange hybrid configuration wizard as part of the standard Exchange update cycle meant that any updates to the wizard had to wait until the next service pack update.

Now the Exchange hybrid configuration wizard is an independent component from Exchange Server. When you run the wizard, it checks for a new release and updates itself to the most current configuration. This means you get fixes or additional features without waiting through that quarterly update cycle.

Minimal hybrid configuration: Not every migration has the same requirements. Sometimes a quicker migration with fewer moving parts is needed, and Microsoft offered an update in 2016 for a minimal hybrid configuration feature for those scenarios.

The minimal hybrid configuration helps organizations that cannot use the staged migration option, but want an easy switchover without worrying about configuring extras, such as free/busy calendar federation.

The minimal hybrid configuration leaves out the following functionality from a full hybrid configuration:

  • cross-premises free/busy calendar availability;
  • Transport Layer Security secured mail flow between on-premises Exchange and Exchange Online;
  • cross-premises eDiscovery;
  • automatic Outlook on the web (OWA) and ActiveSync redirection for migrated users; and
  • automatic retention for archived mailboxes.

If these features aren’t important to your organization and speed is of the essence, the minimal hybrid configuration is a good option.

Recent update goes further with setup work

Microsoft designed the Exchange hybrid configuration wizard to migrate mailboxes without interrupting the end user’s ability to work. The wizard gives users a full global address book, free/busy calendar availability and some of the mailbox delegation features used with an on-premises Exchange deployment.

A major new addition to the hybrid configuration wizard is its ability to transfer some of the on-premises Exchange configurations to the Exchange Online tenant. The Hybrid Organization Configuration Transfer feature pulls configuration settings from your Exchange organization and does a one-time setup of the same settings in your Exchange Online tenant.

Microsoft expanded the abilities of Hybrid Organization Configuration Transfer in November 2018 so it configures the following settings: Active Sync Mailbox Policy, Mobile Device Mailbox Policy, OWA Mailbox Policy, Retention Policy, Retention Policy Tag, Active Sync Device Access Rule, Active Sync Organization Settings, Address List, DLP Policy, Malware Filter Policy, Organization Config and Policy Tip Configuration.

The Exchange hybrid configuration wizard only handles these settings once. If you make changes in your on-premises Exchange organization after you run the Exchange hybrid configuration wizard, those changes will not be replicated in the cloud automatically.
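One way to keep an eye on that drift, offered here only as a rough sketch rather than an official procedure, is to periodically compare a transferred setting between the two environments and re-apply any differences by hand. The example below assumes an on-premises remoting session imported with a cmdlet prefix alongside an Exchange Online session, and compares retention policy names.

    # $onPremSession is assumed to be a remoting session to the on-premises Exchange server
    Import-PSSession $onPremSession -Prefix OnPrem -CommandName Get-RetentionPolicy

    $onPremPolicies = Get-OnPremRetentionPolicy | Select-Object -ExpandProperty Name
    $cloudPolicies  = Get-RetentionPolicy       | Select-Object -ExpandProperty Name

    # Retention policies that exist on premises but not in Exchange Online
    Compare-Object $onPremPolicies $cloudPolicies |
        Where-Object SideIndicator -eq '<=' |
        Select-Object -ExpandProperty InputObject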

ONC urged to slow down for the sake of patient data security

Seven healthcare leadership organizations have called for federal agencies to slow down their work on proposed interoperability and information blocking rules, which are expected to be finalized by the end of 2019. Their major concern is patient data security.

In a letter to the House Committee on Energy and Commerce, healthcare organizations including the American Medical Association (AMA), the College of Healthcare Information Management Executives (CHIME) and the American Health Information Management Association (AHIMA) outlined their concerns with security of healthcare data apps and a lack of security guidelines enabling third-party access to patient data.

They also worry there will be confusion about exceptions to information blocking and are concerned about implementation timelines for regulation requirements.

In February, the Office of the National Coordinator for Health IT (ONC) and the Centers for Medicare and Medicaid Services (CMS) proposed rules that would require healthcare organizations to use FHIR-enabled APIs to share data with healthcare apps. They also seek to define exceptions to information blocking, or unreasonably preventing patient data from being shared. The goal of the proposed rules is to foster greater data sharing and easier patient access to healthcare data.

“The use of APIs and third-party applications has the potential to improve patient and provider access to needed health information,” the letter said. “It also brings us into uncharted territory as patients leave the protections of HIPAA behind.”

The organizations stated that they support the work to improve information sharing through the use of APIs, but they noted it is “imperative that policies be put in place to prevent inappropriate disclosures to third-parties and resultant harm to patients.”

Letter underscores patient data security concern

It’s not the first time ONC has heard concerns about patient data security.

During a U.S. Senate Committee on Health, Education, Labor and Pensions meeting in May, committee chairman Sen. Lamar Alexander cautioned ONC to take interoperability slow and address issues such as privacy concerns when downloading patient data to healthcare apps.

The letter echoes that caution, suggesting that certified APIs should be required to have more security features and provide patients with privacy notices and transparency statements about whether data will be disclosed or sold.  Additionally, the letter notes a lack of security guidelines for providers as they bring third-party apps into their systems, and urges ONC to require API vendors to mitigate threats and security issues that could impact the provider connected to the API.

While healthcare apps and patient data security is the biggest sticking point, healthcare leaders also outlined other areas of concern such as “reasonable timelines” for implementing the final rules, and making exceptions to information blocking clearer. The healthcare leaders asked that ONC provide more examples of actions that would satisfy the exception requirements before the final rules are implemented.

‘Getting it right’

The healthcare leaders requested that ONC continue with the rulemaking process instead of finalizing the rules as they are now, and take more time to work through the issues outlined in the letter.

Lauren Riplinger, vice president of policy and government affairs at AHIMA, said the letter is a formal message to Congress to stress the importance of slowing down and “getting it right.”

She wants the community to “make sure we’re defining things properly, that the implementation periods make sense, and that it’s reflective of the environment and landscape in which we’re currently at as we work toward implementation of these final rules — whenever it gets finalized.”

They say Mars, and this letter says Hawaii. Eventually, everyone will say the moon. That’s where we’re headed.
John Halamka, executive director of the health technology exploration center, Beth Israel Lahey Health

In response to the letter, ONC prepared a statement that said the organization is “mindful of the need to balance concerns of incumbent stakeholders with the rights of patients to have transparency and actionable choice in their healthcare.”

John Halamka, executive director of the health technology exploration center at Beth Israel Lahey Health in Boston, said when it comes to rulemaking, it’s better for ONC to ask for Mars and settle for the moon, which he said was the intended goal to begin with.

Because it’s part of the rulemaking process, federal agencies no doubt anticipated pushback from the healthcare community, Halamka said. Ultimately, he believes ONC is headed in the right direction, and the letter asking for the time necessary to work through the details is understandable. Fine tuning of the proposed rules, or sub-regulatory guidance, is crucial, he said. “They say Mars, and this letter says Hawaii,” Halamka said. “Eventually, everyone will say the moon. That’s where we’re headed.”

CIO talks lessons learned from Meditech Expanse upgrade

Healthcare organizations may no longer be shopping for EHRs the way they once were, but that doesn’t make implementation any easier.

It took three years of planning and budgeting before Beth Israel Deaconess Medical Center went live with electronic health record vendor Meditech’s latest product at three community hospitals.

Jeannette Currie, CIO of community hospitals at Beth Israel Deaconess Medical Center in Boston, led the initiative to upgrade to the latest version: Meditech Expanse, a web-based EHR designed for mobility. The effort took a year longer than expected.

At the recent Meditech Physician and CIO Forum in Boston, Currie detailed challenges she faced before and during the implementation at the Beth Israel Deaconess Medical Center (BIDMC) community hospitals — and some of the lessons she learned along the way. Her biggest goal was to create a unified IT culture across the three community hospitals, which had, up until that point, operated independent IT shops.

Maurice Abney, CIO at LifePoint Health in Brentwood, Tenn., who attended the forum, said his biggest takeaway was how significantly Currie’s budget changed during planning for the EHR implementation, and that it’s better to plan to spend more rather than less.

This was a confirmation that you need to budget it now so you won’t have to ask for it later.
Maurice Abney, CIO, LifePoint Health

“This was a confirmation that you need to budget it now so you won’t have to ask for it later,” Abney said.

Challenges with EHR implementation

In 2015, BIDMC decided to upgrade the Meditech EHR at three community hospitals and had an estimated go-live date of Oct. 1, 2017. BIDMC’s goal was to reduce the number of outpatient EHRs from multiple vendors used in its community hospitals by migrating the sites to a single EHR from a single vendor. The community hospitals also all used different versions of the Meditech EHR.

BIDMC, now part of Beth Israel Lahey Health following a merger earlier this year, is a healthcare system composed of academic medical centers, teaching hospitals, community hospitals and specialty hospitals that employs more than 4,000 physicians and 35,000 employees. It is now one of the largest health systems in Boston.

As she planned the EHR implementation project, Currie said delays occurred due to added project scope and additional software requirements that were missing from the original plans. Plus, while BIDMC initially planned to upgrade the community hospitals to the Meditech 6.1 platform, an earlier version of the Meditech EHR, the health system changed its mind and decided on Meditech Expanse, the latest EHR version.

Even with budgeting and planning, the go-live date was pushed back a year, and the project’s estimated budget nearly doubled from an estimated $14.7 million to an actual budget of $27.3 million.

Strategies for addressing challenges

As Currie prepared to unify the three hospitals onto one EHR, she encountered four major challenges: resistance to change and getting the hospitals past the idea that the new EHR implementation was a simple update to their existing Meditech EHRs, breaking down the hospitals’ history of separateness, consolidating IT staff and creating a clear pathway for decision-making involving all three entities.

Jeannette Currie, CIO of community hospitals at Beth Israel Deaconess Medical Center, speaks at the Meditech Physician and CIO Forum about leading a Meditech Expanse implementation at three community hospitals.

This wasn’t the community hospitals’ first Meditech EHR implementation, but upgrading to Meditech Expanse was complicated by the EHR’s added features and functions, according to Currie. The product introduced new workflows and an entirely new platform. Currie said getting the hospitals past that “upgrade mentality” was challenging.

To address the problem, Currie decided to brand the implementation CommunityONE. Her hope in using the word “community” was to steer the upgrade away from EHR tweaks toward a push to unify the IT culture between the three hospitals, something she said was crucial to the project’s success.

She set a mission statement for the project, which outlined what she was aiming to do and why. The mission statement, “to develop, implement and manage a single patient-focused BIDMC Community Hospital EHR using principles of best practice to support clinical excellence, fiscal accountability and a productive experience,” was repeated and promoted throughout the project.

Identifying the benefits of the Meditech Expanse product was also important, Currie said. The gains included a single patient clinical record accessible across the three hospitals, operational efficiency by having the same EHR available for clinicians working at all three hospitals, working with Meditech to house the hospitals’ data, and the creation of a single IT department for the three hospitals.

Consolidating IT staff was a major hurdle because of varying staffing levels, experience and pay scales, Currie said. She worked to fix pay discrepancies and to clearly define IT responsibilities, something the organization is still challenged with. Currie said employees were chosen from across the three sites to form the community hospitals IT department.  

Currie established guiding principles to lead the major organizational change. They included clear project governance structured to promote the project mission. She wanted to make sure to give an equal voice to each hospital, outline participation expectations and be transparent about decisions.

“We needed all the hospitals to participate in the process to create that future. That adds to the cultural aspect because then people feel ownership about what they’re creating and what their end product will be,” she said.

Decision making was the project’s biggest challenge and one of the biggest drivers behind the extended go-live date, Currie said. Each organization came to the table with “passion” for the way their hospital had operated, and they had to work through how they were going to make decisions as a unified IT culture. 

“We had to learn how to reach consensus,” she said.

Currie said she outlined a clear method for decision making, and built the culture through continuous face time and getting to know each other.

“It was a pain in the butt to drive from Plymouth or some of these other areas in Boston traffic to get together,” she said. “But we really found that that in-person time was what promoted respect … people on these teams became friends and that allowed them to work together and become willing to share this system and respect each other’s perspectives.” 

Lessons learned

On Oct. 1, 2018, Meditech Expanse went live at all three hospitals.

Currie said the launch’s success was due to a strong command structure including local command centers set up at each of the sites that were linked to help identify common issues. The IT team also had frequent huddles, identified emerging issues and had boots on the ground to provide support.

At the center of the success was communication, and keeping a consistent message between the three hospitals, she said.

Schlumberger, Chevron and Microsoft announce collaboration to accelerate digital transformation – Stories

Global organizations will work together to accelerate development of cloud-native solutions and deliver actionable data insights for the industry

MONACO, September 17, 2019 — Tuesday at the SIS Global Forum 2019, Schlumberger, Chevron and Microsoft announced the industry’s first three-party collaboration to accelerate creation of innovative petrotechnical and digital technologies.

Data is quickly emerging as one of the most valuable assets to any company, yet extracting insights from it is often difficult as information gets trapped in internal silos. As part of the collaboration, the three companies will work together to build Azure-native applications in the DELFI* cognitive E&P environment, initially for Chevron, which will enable companies to process, visualize, interpret and ultimately obtain meaningful insights from multiple data sources.

DELFI* is a secure, scalable and open cloud-based environment providing seamless E&P software technology across exploration, development, production and midstream. Chevron and Schlumberger will combine their expertise and resources to accelerate the deployment of DELFI solutions in Azure, with support and guidance from Microsoft. The parties will ensure the software developments meet the latest standards in terms of security, performance, release management, and are compatible with the Open Subsurface Data Universe (OSDU) Data Platform. Building on this open foundation will amplify the capabilities of Chevron’s petrotechnical experts.

The collaboration will be completed in three phases starting with the deployment of the Petrotechnical Suite in the DELFI environment, followed by the development of cloud-native applications on Azure, and the co-innovation of a suite of cognitive computing native capabilities across the E&P value chain tailored to Chevron’s objectives.

Olivier Le Peuch, chief executive officer, Schlumberger, said, “Combining the expertise of these three global enterprises creates vastly improved and digitally enabled petrotechnical workflows. Never before has our industry seen a collaboration of this kind, and of this scale. Working together will accelerate faster innovation with better results, marking the beginning of a new era in our industry that will enable us to elevate performance across our industry’s value chain.”

“There is an enormous opportunity to bring the latest cloud and AI technology to the energy sector and accelerate the industry’s digital transformation,” said Satya Nadella, CEO of Microsoft. “Our partnership with Schlumberger and Chevron delivers on this promise, applying the power of Azure to unlock new AI-driven insights that will help address some of the industry’s—the world’s—most important energy challenges, including sustainability.”

Joseph C. Geagea, executive vice president, technology, projects and services, Chevron, said, “We believe this industry-first advancement will dramatically accelerate the speed with which we can analyze data to generate new exploration opportunities and bring prospects to development more quickly and with more certainty. It will pull vast quantities of information into a single source amplifying our use of artificial intelligence and high-performance computing built on an open data ecosystem.”

About Schlumberger

Schlumberger is the world’s leading provider of technology for reservoir characterization, drilling, production, and processing to the oil and gas industry. With product sales and services in more than 120 countries and employing approximately 100,000 people who represent over 140 nationalities, Schlumberger supplies the industry’s most comprehensive range of products and services, from exploration through production, and integrated pore-to-pipeline solutions that optimize hydrocarbon recovery to deliver reservoir performance.

Schlumberger Limited has executive offices in Paris, Houston, London, and The Hague, and reported revenues of $32.82 billion in 2018. For more information, visit.

About Chevron

Chevron Corporation is one of the world’s leading integrated energy companies. Through its subsidiaries that conduct business worldwide, the company is involved in virtually every facet of the energy industry. Chevron explores for, produces and transports crude oil and natural gas; refines, markets and distributes transportation fuels and lubricants; manufactures and sells petrochemicals and additives; generates power; and develops and deploys technologies that enhance business value in every aspect of the company’s operations. Chevron is based in San Ramon, Calif. More information about Chevron is available at www.chevron.com.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

###

*Mark of Schlumberger

For further information, contact:

Moira Duff
Corporate Communication Manager, Western Hemisphere
Schlumberger
Tel: +1 281 285 4376
[email protected]

Sean Comey
Sr. Advisor, External Affairs
Chevron
Tel: +1 925 842 5509
[email protected]

Microsoft Media Relations
WE Communications for Microsoft
(425) 638-7777
[email protected]

Transition to value-based care requires planning, communication

Transitioning to value-based care can be a tough road for healthcare organizations, but creating a plan and focusing on communication with stakeholders can help drive the change.

Value-based care is a model that rewards the quality rather than the quantity of care given to patients. The model is a significant shift from how healthcare organizations have functioned, placing value on the results of care delivery rather than the number of tests and procedures performed. As such, it demands that healthcare CIOs be thoughtful and deliberate about how they approach the change, experts said during a recent webinar hosted by Definitive Healthcare.

Andrew Cousin, senior director of strategy at Mayo Clinic Laboratories, and Aaron Miri, CIO at the University of Texas at Austin Dell Medical School and UT Health Austin, talked about their strategies for transitioning to value-based care and focusing on patient outcomes.

Cousin said preparedness is crucial, as organizations can jump into a value-based care model, which relies heavily on analytics, without the institutional readiness needed to succeed.  

“Having that process in place and over-communicating with those who are going to be impacted by changes to workflow are some of the parts that are absolutely necessary to succeed in this space,” he said.

Mayo Clinic Labs’ steps to value-based care

Cousin said his primary focus as a director of strategy has been on delivering better care at a lower cost through the lens of laboratory medicine at Mayo Clinic Laboratories, which provides laboratory testing services to clinicians.

Andrew Cousin, senior director of strategy, Mayo Clinic Laboratories

That lens includes thinking in terms of a mathematical equation: price per test multiplied by the number of tests ordered equals total spend for that activity. Today, much of a laboratory’s relationship with healthcare insurers is measured by the price per test ordered. Yet data shows that 20% to 30% of laboratory testing is ordered incorrectly, which inflates the number of tests ordered as well as the cost to the organization, and little is being done to address the issue, according to Cousin.
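To make that equation concrete, here is a small illustration with made-up numbers; they are not Mayo Clinic figures, just a sketch of how eliminating mis-ordered tests flows through the price-times-volume math.

    # Hypothetical figures for illustration only
    $pricePerTest    = 25          # average price per test, in dollars
    $testsOrdered    = 400000      # tests ordered per year
    $totalSpend      = $pricePerTest * $testsOrdered   # price x volume = $10,000,000 total spend
    $incorrectRate   = 0.25        # midpoint of the 20% to 30% incorrect-ordering range cited above
    $potentialSaving = $totalSpend * $incorrectRate    # spend tied to incorrect orders = $2,500,000
    "{0:C0} total spend, {1:C0} potentially avoidable" -f $totalSpend, $potentialSaving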

That was one of the reasons Mayo Clinic Laboratories decided to focus its value-based care efforts on reducing incorrect test ordering.

To mitigate the errors, Cousin said the lab created 2,000 evidence-based ordering rules, which will be integrated into a clinician’s workflow. There are more than 8,000 orderable tests, and the rules provide clinicians guidance at the start of the ordering process, Cousin said. The laboratory has also developed new datasets that “benchmark and quantify” the organization’s efforts.  

To date, Cousin said the lab has implemented about 250 of the 2,000 rules across the health system, and has identified about $5 million in potential savings.

Cousin said the lab crafted a five-point plan to begin the transition. The plan was based on its experience in adopting a value-based care model in other areas of the lab. The first three steps center on what Cousin called institutional readiness, or ensuring staff and clinicians have the training needed to execute the new model.

The plan’s first step is to assess the “competencies and gaps” of care delivery within the organization, benchmarking where the organization is today and where gaps in care could be closed, he said.

The second step is to communicate with stakeholders to explain what’s going to happen and why, what criteria they’ll be measured on and how, and how the disruption to their workflow will result in improving practice and financial reimbursement.

The third step is to provide education and guidance. “That’s us laying out the plans, training the team for the changes that are going to come about through the infusion of new algorithms and rules into their workflow, into the technology and into the way we’re going to measure that activity,” he said.

Cousin said it’s critical to accomplish the first three steps before moving on to the fourth step: launching a value-based care analytics program. For Mayo Clinic Laboratories, analytics are used to measure changes in laboratory test ordering and assess changes in the elimination of wasteful and unnecessary testing.

The fifth and final step focuses on alternative payments and collaboration with healthcare insurers, which Cousin described as one of the biggest challenges in value-based care. The new model requires a new kind of language that the payers may not yet speak.

Mayo Clinic Laboratories has attempted to address this challenge by taking its data and making it as understandable to payers as possible, essentially translating clinical data into claims data.     

Cousin gave the example of showing payers how much money was saved by intervening in over-ordering of tests. Presenting data as cost savings can be more valuable than documenting how many laboratory test orders were eliminated, he said.

How a healthcare CIO approaches value-based care

UT Health Austin’s Miri approaches value-based care from both the academic and the clinical side. UT Health Austin functions as the clinical side of Dell Medical School.

Aaron Miri, CIO at the University of Texas at Austin Dell Medical School and UT Health Austin

The transition to value-based care in the clinical setting started with a couple of elements. Miri said, first and foremost, healthcare CIOs will need buy-in at the top. They also will need to start simple. At UT Health Austin, simple meant introducing a new patient-reported outcomes program, which aims to collect data from patients about their personal health views.

UT Health Austin has partnered with Austin-based Ascension Healthcare to collect patient reported outcomes as well as social determinants of health, or a patient’s lifestyle data. Both patient reported outcomes and social determinants of health “make up the pillars of value-based care,” Miri said.  

The effort is already showing results, such as a 21% improvement in the hip disability and osteoarthritis outcome score and a 29% improvement in the knee injury and osteoarthritis outcome score. Miri said the organization is seeing improvement because the organization is being more proactive about patient outcomes both before and after discharge.  

For the program to work, Miri and his team need to make the right data available for seamless care coordination. That means making sure proper data use agreements are established between all UT campuses, as well as with other health systems in Austin.

Value-based care data enables UT Health Austin to “produce those outcomes in a ready way and demonstrate that back to the payers and the patients that they’re actually getting better,” he said.

In the academic setting at Dell Medical School, Miri said the next generations of providers are being prepared for a value-based care world.

“We offer a dual master’s track academically … to teach and integrate value-based care principles into the medical school curriculum,” Miri said. “So we are graduating students — future physicians, future surgeons, future clinicians — with value-based at the core of their basic medical school preparatory work.”

Data integration problems a hurdle companies must overcome

As organizations try to analyze the vast amounts of information they’ve collected, they need to overcome data integration problems before they can extract meaningful insights.

Decades of data exist for enterprises that have stood the test of time, and it’s often housed in different locales and spread across disparate systems.

The scope of the business intelligence that enterprises glean from it in that haphazard form is limited. Attempting to standardize the data, meanwhile, can be overwhelming.

Enter vendors that specialize in solving data integration issues, whose service is helping other companies curate the vast amounts of information they possess and put it in a place — and in a format — where it can be accessed and used to produce meaningful BI.

Cloud data integration provider Talend, along with others such as Informatica and MuleSoft (recently acquired by Salesforce), is one such vendor.

In the second part of a two-part Q&A, Talend CEO Mike Tuchen discusses the different data integration problems large enterprises face compared with their small and midsize brethren, as well as Talend’s strategy in helping companies address their sudden abundance of data.

In part one, Tuchen talks about the massive challenges that have developed over the last 10 to 15 years as organizations have begun to digitize and pool their data.

Are there different data integration problems a small- to medium-sized business might face compared to a large organization in terms of extracting data from a vast pool of information it has collected over the years?

Mike Tuchen, CEO of Talend

Mike Tuchen: For a small or medium-sized company, for the most part they know where their systems are. There’s a much more human understandable set of sources where you’re going to get your data from, so for the most part cataloging for them isn’t required upfront. It’s something you can choose to do later and optionally. They can say, ‘I’m going to pull data from Salesforce and NetSuite, and HubSpot, and Zendesk for support.’ They can pull data from all those systems, make sure they have a consistent definition of who’s a customer and what they’re doing, and then can start analyzing what the most effective campaigns are, who the most likely customers to convert are, who the most likely customers to retain or upsell are, or whatever they’re trying to do with the core analytics. Since you have a small number of systems — a small number of sources — you can go directly there and it turns more into a ‘let’s drive the integration process, let’s drive the cleaning process’ and the initial cleaning process is a simpler problem.

So in essence, even though they may not have the financial wherewithal to invest in a team of data scientists, is the process of solving data integration issues actually easier for them?

Tuchen: For sure. Size creates complexity. It creates an opportunity as well, but the bigger you get the more sources. Think about at one end of the spectrum you’ve got a large multinational company that has a whole bunch of different divisions spread out across the world, some of them brought in through acquisitions. Think about the plethora of different sources you have. We’re working with a customer that has a dozen different ERP systems that they’re now trying to bring data together from, and that’s just in one type of data — transactional data around financial transactions. Think about that kind of complexity versus a small company.

What is the core service Talend provides?

Tuchen: Talend is a data integration company, and our core approach is to help companies collect, govern, transform and share their data. What we’re seeing is that data, more and more, is becoming a critical strategic asset. We’re seeing, worldwide, that as companies are more and more digitized they’re seeing that data managed correctly is a competitive advantage, and at the heart of every single industry is a strategic data battle that if you solve that well there’s an advantage and you’ll be out executing your competitors. With that recognition, the importance of the problem that we’re solving is going up in our customers’ minds, and that creates an opportunity for us.

How does what Talend does help customers overcome data integration problems?

Tuchen: We have a cloud-based offering called Talend Data Fabric that includes a number of different components, including a lot of the different capabilities we talked about. There’s a data catalog that solves that discovery process and the data definition issue, making sure that we have a consistent definition, lineage of where does data start and where does it end, what happens to it along the way so you can understand impact analysis, and so on. That’s one part of our offering. And we have an [application programming interface] offering that allows you to share that with customers or partners or suppliers.

As you look at where data integration and mining are headed, what is Talend’s roadmap for the next one to three years?

Tuchen: Right now we’re doubling and tripling down on the cloud. Our cloud business is exploding. It’s growing well over 100% a year. What we’re seeing is the entire IT landscape is moving to the cloud. In particular in the data analytics, data warehouses, just over the last couple of years we’ve reached the tipping point. Now we’re at the point where cloud data warehouses are significantly better than anything you can get on premises — they’re higher performance, more flexible, more scalable, you can plug in machine learning, you can plug in real-time flows to them, there’s no upfront commitment, they’re always up to date. It’s now at the point where the benefits are so dramatic that every company in the world has either moved or is planning to move and do most of their analytical processing in the cloud. That creates an enormous opportunity for us, and one that we’re maniacally focused on. We’re putting an enormous amount of effort into maintaining and extending our leadership in cloud-based data integration and governance.

Editor’s note: This interview has been edited for clarity and conciseness.

SolarWinds Discovery offers low-cost way to manage IT assets

IT asset management software vendor SolarWinds launched a software service aimed at IT organizations that seek a low-cost way to keep tabs on IT assets and improve their service delivery.

The software, called SolarWinds Discovery, is a SaaS tool designed to help IT teams locate, map and manage their software and hardware assets. The software combines both agent and agentless technology to provide a view into critical assets, providing insights for IT pros who manage and monitor those assets.

“IT service delivery requires managing the lifecycle of the technology that enables customers to meet their needs,” said Gartner analyst Roger Williams. “Many organizations, however, do not have visibility into everything on their network due to poor controls and an inability to keep up with the pace of change in their environment.”

The agentless SolarWinds Discovery Scanner locates and collects information on IP connected devices, like servers, routers, switches, firewalls, storage arrays, VMware hosts, VMs and printers, according to the company.

The SolarWinds Discovery agent can collect more than 200 data points from Windows and Apple computers and servers, as well as iOS and Android mobile devices. The software integrates with Microsoft System Center Configuration Manager, VMware vCenter and Chrome OS.

The new service integrates with SolarWinds Service Desk, enabling enterprises to focus on risks affecting IT services, as well as to comply with software licensing contracts. The tool can also import data from key configuration management sources, enabling organizations to regularly update all their asset data and make it available within SolarWinds Service Desk, the company said.

The product was launched on August 21 and is available only under SolarWinds Service Desk. Cost is per month, per agent and billed annually. A free one-month trial is available on the company’s website.

Williams said IT asset management is part of Gartner’s Software Asset Management, IT Asset Management and IT Financial Management category that saw $1.24 billion in revenue in 2018, a 23.4% increase over 2017.

He said vendors in this market are challenged to offer a product with features unique from what is already available, considering there are over 100 competitors. He said SolarWinds has the largest presence of any vendor in the related network performance monitoring and diagnostics market and is used as a discovery source by many organizations.

Competitors include ManageEngine, BMC Software and IBM, to cite a few examples. ManageEngine ServiceDesk combines asset management and help desk functionalities in one platform. BMC Helix offers digital and cognitive automation technologies intended to provide efficient service management across any environment. IBM Maximo offers a tool to analyze IoT data from people, sensors and devices to gain asset visibility.

Social determinants of health data provide better care

Social determinants of health data can help healthcare organizations deliver better patient care, but the challenge of knowing exactly how to use the data persists.

The healthcare community has long recognized the importance of a patient’s social and economic data, said Josh Schoeller, senior vice president and general manager of LexisNexis Health Care at LexisNexis Risk Solutions. The current shift to value-based care models, which are ruled by quality rather than quantity of care, has put a spotlight on this kind of data, according to Schoeller.

But social determinants of health also pose a challenge to healthcare organizations. Figuring out how to use the data in meaningful ways can be daunting, as healthcare organizations are already overwhelmed by loads of data.

A new framework, released last month by the not-for-profit eHealth Initiative Foundation, could help. The framework was developed by stakeholders, including LexisNexis Health Care, to give healthcare organizations guidance on how to use social determinants of health data ethically and securely.

Here’s a closer look at the framework.

Use cases for social determinants of health data

The push to incorporate social determinants of health data into the care process is “imperative,” according to eHealth Initiative’s framework. Doing so can uncover potential risk factors, as well as gaps in care.

The eHealth Initiative’s framework outlines five guiding principles for using social determinants of health data. 

  1. Coordinating care

Determine if a patient has access to transportation or is food insecure, according to the document. The data can also help a healthcare organization coordinate with community health workers and other organizations to craft individualized care plans.

  2. Using analytics to uncover health and wellness risks

Use social determinants of health data to predict a patient’s future health outcomes. Analyzing social and economic data can help the provider know if an individual is at an increased risk of having a negative health outcome, such as hospital re-admittance. The risk score can be used to coordinate a plan of action.

  3. Mapping community resources and identifying gaps

Use social determinants of health data to determine what local community resources exist to serve the patient populations, as well as what resources are lacking.

  4. Assessing service and impact

Monitor care plans or other actions taken using social determinants of health data and how it correlates to health outcomes. Tracking results can help an organization adjust interventions, if necessary.

  5. Customizing health services and interventions

Inform patients about how social determinants of health data are being used. Healthcare organizations can educate patients on available resources and agree on next steps to take.

Getting started: A how-to for healthcare organizations

The eHealth Initiative is not alone in its attempt to move the social determinants of health data needle.

Niki Buchanan, general manager of population health at Philips Healthcare, has some advice of her own.

  1. Lean on the community health assessment

Buchanan said most healthcare organizations conduct a community health assessment internally, which provides data such as demographics and transportation needs, and identifies at-risk patients. Having that data available and knowing whether patients are willing or able to take advantage of community resources outside of the doctor’s office is critical, she said.

Look for things that meet not only your own internal ROI in caring for your patients, but that also add value and patient engagement opportunities to those you’re trying to serve in a more proactive way.
Niki Buchanan, general manager of population health management, Philips Healthcare

  2. Connect the community resource dots

Buchanan said a healthcare organization should be aware of what community resources are available to them, whether it’s a community driving service or a local church outreach program. The organization should also assess at what level it is willing to partner with outside resources to care for patients.

“Are you willing to partner with the Ubers of the world, the Lyfts of the world, to pick up patients proactively and make sure they make it to their appointment on time and get them home,” she said. “Are you able to work within the local chamber of commerce to make sure that any time there’s a food market or a fresh produce kind of event within the community, can you make sure the patients you serve have access?”

  3. Start simple

Buchanan said healthcare organizations should approach social determinants of health data with the patient in mind. She recommended healthcare organizations start small with focused groups of patients, such as diabetics or those with other chronic conditions, but that they also ensure the investment is a worthwhile one.

“Look for things that meet not only your own internal ROI in caring for your patients, but that also add value and patient engagement opportunities to those you’re trying to serve in a more proactive way,” she said.
