
Transition to value-based care requires planning, communication

Transitioning to value-based care can be a tough road for healthcare organizations, but creating a plan and focusing on communication with stakeholders can help drive the change.

Value-based care is a model that rewards the quality rather than the quantity of care given to patients. The model is a significant shift from how healthcare organizations have functioned, placing value on the results of care delivery rather than the number of tests and procedures performed. As such, it demands that healthcare CIOs be thoughtful and deliberate about how they approach the change, experts said during a recent webinar hosted by Definitive Healthcare.

Andrew Cousin, senior director of strategy at Mayo Clinic Laboratories, and Aaron Miri, CIO at the University of Texas at Austin Dell Medical School and UT Health Austin, talked about their strategies for transitioning to value-based care and focusing on patient outcomes.

Cousin said preparedness is crucial, because organizations can otherwise jump into a value-based care model, which relies heavily on analytics, without the institutional readiness needed to succeed.

“Having that process in place and over-communicating with those who are going to be impacted by changes to workflow are some of the parts that are absolutely necessary to succeed in this space,” he said.

Mayo Clinic Labs’ steps to value-based care

Cousin said his primary focus as a director of strategy has been on delivering better care at a lower cost through the lens of laboratory medicine at Mayo Clinic Laboratories, which provides laboratory testing services to clinicians.


That lens includes thinking in terms of a mathematical equation: price per test multiplied by the number of tests ordered equals total spend for that activity. Today, much of a laboratory’s relationship with healthcare insurers is measured by the price per test ordered. Yet data shows that 20% to 30% of laboratory testing is ordered incorrectly, which inflates the number of tests ordered as well as the cost to the organization, and little is being done to address the issue, according to Cousin.
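To illustrate with hypothetical numbers: at $40 per test, eliminating 10,000 incorrectly ordered tests a year would remove $400,000 from total spend without touching the negotiated price per test.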

That was one of the reasons Mayo Clinic Laboratories decided to focus its value-based care efforts on reducing incorrect test ordering.

To mitigate the errors, Cousin said the lab created 2,000 evidence-based ordering rules, which will be integrated into a clinician’s workflow. There are more than 8,000 orderable tests, and the rules provide clinicians guidance at the start of the ordering process, Cousin said. The laboratory has also developed new datasets that “benchmark and quantify” the organization’s efforts.  

To date, Cousin said, the lab has implemented about 250 of the 2,000 rules across the health system and has identified about $5 million in potential savings.

Cousin said the lab crafted a five-point plan to begin the transition. The plan was based on its experience in adopting a value-based care model in other areas of the lab. The first three steps center on what Cousin called institutional readiness, or ensuring staff and clinicians have the training needed to execute the new model.

The plan’s first step is to assess the “competencies and gaps” of care delivery within the organization, benchmarking where the organization is today and where gaps in care could be closed, he said.

The second step is to communicate with stakeholders to explain what’s going to happen and why, what criteria they’ll be measured on and how, and how the disruption to their workflow will result in improving practice and financial reimbursement.

The third step is to provide education and guidance. “That’s us laying out the plans, training the team for the changes that are going to come about through the infusion of new algorithms and rules into their workflow, into the technology and into the way we’re going to measure that activity,” he said.

Cousin said it’s critical to accomplish the first three steps before moving on to the fourth step: launching a value-based care analytics program. For Mayo Clinic Laboratories, analytics are used to measure changes in laboratory test ordering and to track the elimination of wasteful and unnecessary testing.

The fifth and final step focuses on alternative payments and collaboration with healthcare insurers, which Cousin described as one of the biggest challenges in value-based care. The new model requires a new kind of language that the payers may not yet speak.

Mayo Clinic Laboratories has attempted to address this challenge by taking its data and making it as understandable to payers as possible, essentially translating clinical data into claims data.     

Cousin gave the example of showing payers how much money was saved by intervening in the over-ordering of tests. Presenting the data as cost savings can be more compelling than documenting how many laboratory test orders were eliminated, he said.

How a healthcare CIO approaches value-based care

UT Health Austin’s Miri approaches value-based care from both the academic and the clinical side. UT Health Austin functions as the clinical side of Dell Medical School.


The transition to value-based care in the clinical setting started with a couple of elements. Miri said, first and foremost, healthcare CIOs will need buy-in at the top. They also will need to start simple. At UT Health Austin, simple meant introducing a new patient-reported outcomes program, which aims to collect data from patients about their personal health views.

UT Health Austin has partnered with Austin-based Ascension Healthcare to collect patient-reported outcomes as well as social determinants of health, or a patient’s lifestyle data. Both patient-reported outcomes and social determinants of health “make up the pillars of value-based care,” Miri said.

The effort is already showing results, such as a 21% improvement in the hip disability and osteoarthritis outcome score and a 29% improvement in the knee injury and osteoarthritis outcome score. Miri said the organization is seeing improvement because the organization is being more proactive about patient outcomes both before and after discharge.  

For the program to work, Miri and his team need to make the right data available for seamless care coordination. That means making sure proper data use agreements are established between all UT campuses, as well as with other health systems in Austin.

Value-based care data enables UT Health Austin to “produce those outcomes in a ready way and demonstrate that back to the payers and the patients that they’re actually getting better,” he said.

In the academic setting at Dell Medical School, Miri said the next generations of providers are being prepared for a value-based care world.

“We offer a dual master’s track academically … to teach and integrate value-based care principles into the medical school curriculum,” Miri said. “So we are graduating students — future physicians, future surgeons, future clinicians — with value-based at the core of their basic medical school preparatory work.”


Create and configure a shielded VM in Hyper-V

Creating a shielded VM to protect your data is a relatively straightforward process that consists of a few simple steps and PowerShell commands.

A shielded VM depends on a dedicated server separate from the Hyper-V host that runs the Host Guardian Service (HGS). The HGS server must not be domain-joined because it is going to take on the role of a special-purpose domain controller. To install HGS, open an administrative PowerShell window and run this command:

Install-WindowsFeature -Name HostGuardianServiceRole -Restart

Once the server reboots, create the required domain. In this example, the domain name is PoseyHGS.net; substitute a strong safe-mode administrator password of your own for the placeholder shown below. Create the domain by entering these commands:

$AdminPassword = ConvertTo-SecureString -AsPlainText '<SafeModePassword>' -Force

Install-HgsServer -HgsDomainName 'PoseyHGS.net' -SafeModeAdministratorPassword $AdminPassword -Restart

Figure A. This is how to install the Host Guardian Service server.

The next step in the process of creating and configuring a shielded VM is to create two certificates: an encryption certificate and a signing certificate. In production, you must use certificates from a trusted certificate authority. In a lab environment, you can use self-signed certificates, such as those used in the example below. To create these certificates, use the following commands, substituting your own certificate password for the placeholder:

$CertificatePassword = ConvertTo-SecureString -AsPlainText '<CertificatePassword>' -Force
$SigningCert = New-SelfSignedCertificate -DnsName "signing.poseyhgs.net"
Export-PfxCertificate -Cert $SigningCert -Password $CertificatePassword -FilePath 'C:\Certs\SigningCert.pfx'
$EncryptionCert = New-SelfSignedCertificate -DnsName "encryption.poseyhgs.net"
Export-PfxCertificate -Cert $EncryptionCert -Password $CertificatePassword -FilePath 'C:\Certs\EncryptionCert.pfx'

Figure B. This is how to create the required certificates.

Now, it’s time to initialize the HGS server. To perform the initialization process, use the following command:

Initialize-HgsServer -HgsServiceName 'hgs' -SigningCertificatePath 'C:\Certs\SigningCert.pfx' -SigningCertificatePassword $CertificatePassword -EncryptionCertificatePath 'C:\Certs\EncryptionCert.pfx' -EncryptionCertificatePassword $CertificatePassword -TrustTpm

Figure C. This is what the initialization process looks like.

The last thing you need to do when provisioning the HGS server is to set up conditional domain name service (DNS) forwarding and a one-way trust between the fabric domain and the HGS domain. To do so, use the following commands, replacing the bracketed placeholders with the HGS server's IP address, your fabric domain name and the HGS domain administrator's password:

Add-DnsServerConditionalForwarderZone -Name "PoseyHGS.net" -ReplicationScope "Forest" -MasterServers <HGS server IP address>

netdom trust <fabric domain> /domain:PoseyHGS.net /userD:PoseyHGS.net\Administrator /passwordD:<password> /add

In the process of creating and configuring a shielded VM, the next step is to add the guarded Hyper-V host to the Active Directory (AD) domain that you just created. You must create a global AD security group called GuardedHosts. You must also set up conditional DNS forwarding on the host so the host can find the domain controller.
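
As a minimal sketch of those Active Directory steps, you could run something like the following on the fabric domain controller; this assumes the ActiveDirectory PowerShell module is available, and 'HyperVHost01' is a placeholder for your Hyper-V host's computer name:

# Create the global security group that will hold guarded Hyper-V hosts
New-ADGroup -Name 'GuardedHosts' -GroupScope Global -GroupCategory Security

# Add the Hyper-V host's computer account to the new group
Add-ADGroupMember -Identity 'GuardedHosts' -Members (Get-ADComputer -Identity 'HyperVHost01')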

Once all of that is complete, retrieve the security identifier (SID) for the GuardedHosts group, and then add that SID to the HGS attestation host group. From the domain controller, enter the following command to retrieve the group’s SID:

Get-ADGroup “GuardedHosts” | Select-Object SID

Once you know the SID, run this command on the HGS server, substituting the SID you just retrieved for the placeholder:

Add-HgsAttestationHostGroup -Name "GuardedHosts" -Identifier "<SID>"

Now, it’s time to create a code integrity policy on the Hyper-V server. To do so, enter the following commands:

New-CIPolicy -Level FilePublisher -Fallback Hash -FilePath 'C:\Policy\HWLCodeIntegrity.xml'

ConvertFrom-CIPolicy -XmlFilePath 'C:\Policy\HWLCodeIntegrity.xml' -BinaryFilePath 'C:\Policy\HWLCodeIntegrity.p7b'

Now, you must copy the P7B file you just created to the HGS server. From there, run these commands:

Add-HgsAttestationCIPolicy -Path 'C:\HWLCodeIntegrity.p7b' -Name 'StdGuardHost'

Get-HgsServer

At this point, the server should display an attestation URL and a key protection URL. Be sure to make note of both of these URLs. Now, go back to the Hyper-V host and enter this command, plugging in the two URLs that Get-HgsServer reported; with the service name and domain used in this example, they take the form shown below:

Set-HgsClientConfiguration -KeyProtectionServerUrl "http://hgs.poseyhgs.net/KeyProtection" -AttestationServerUrl "http://hgs.poseyhgs.net/Attestation"

To wrap things up on the Hyper-V server, retrieve an XML metadata file from the HGS server and import it; the download URL below is built from the key protection URL noted earlier. You must also define the host's HGS guardian. Here are the commands to do so:

Invoke-WebRequest "http://hgs.poseyhgs.net/KeyProtection/service/metadata/2014-07/metadata.xml" -OutFile 'C:\Certs\metadata.xml'

Import-HgsGuardian -Path 'C:\Certs\metadata.xml' -Name 'PoseyHGS' -AllowUntrustedRoot

Figure D. Shield a Hyper-V VM by selecting a single checkbox.

Once you import the host guardian into the Hyper-V server, you can use PowerShell to configure a shielded VM. However, you can also enable shielding directly through the Hyper-V Manager by selecting the Enable Shielding checkbox on the VM’s Settings screen, as shown in Figure D above.
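
For the PowerShell route, the following is a minimal sketch, assuming the 'PoseyHGS' guardian imported above and a placeholder VM named 'ShieldedVM01':

# Build a key protector from the imported guardian, then shield the VM
$Guardian = Get-HgsGuardian -Name 'PoseyHGS'
$KeyProtector = New-HgsKeyProtector -Owner $Guardian -AllowUntrustedRoot
Set-VMKeyProtector -VMName 'ShieldedVM01' -KeyProtector $KeyProtector.RawData
Enable-VMTPM -VMName 'ShieldedVM01'
Set-VMSecurityPolicy -VMName 'ShieldedVM01' -Shielded $true

The -AllowUntrustedRoot switch matches the self-signed certificates used in this lab setup; omit it when the HGS certificates come from a trusted certificate authority.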

Mirai creators and operators plead guilty to federal charges

The three men accused of creating and operating the Mirai botnet have pleaded guilty to federal charges.

The Department of Justice announced Wednesday it had unsealed the guilty pleas of Paras Jha, age 21, of Fanwood, N.J.; Josiah White, 20, of Washington, Pa.; and Dalton Norman, 21, of Metairie, La., on charges of “conspiracy to violate the Computer Fraud and Abuse Act in operating the Mirai botnet.”

According to the DoJ, the three Mirai creators built the botnet during the summer and fall of 2016 before unleashing the first wave of Mirai attacks; at its peak, the botnet was generating DDoS attacks from hundreds of thousands of vulnerable IoT devices.

“The defendants used the botnet to conduct a number of powerful distributed denial-of-service, or ‘DDoS’ attacks, which occur when multiple computers, acting in unison, flood the Internet connection of a targeted computer or computers,” the DoJ wrote in a statement. “The defendants’ involvement with the original Mirai variant ended in the fall of 2016, when Jha posted the source code for Mirai on a criminal forum. Since then, other criminal actors have used Mirai variants in a variety of other attacks.”

Jha and Norman were separately charged with, and pleaded guilty to, infecting more than 100,000 devices with “malicious software” between Dec. 2016 and Feb. 2017, though those charges did not specifically attribute the attacks to Mirai. The DoJ announcement accused the pair of building a botnet “used primarily in advertising fraud, including ‘click fraud’ … for the purpose of artificially generating revenue,” and it is unclear whether that botnet was separate from Mirai.

“Our world has become increasingly digital, and increasingly complex,” U.S. Attorney Bryan D. Schroder said in the DoJ statement. “Cybercriminals are not concerned with borders between states or nations, but should be on notice that they will be held accountable in Alaska when they victimize Alaskans in order to perpetrate criminal schemes. The U.S. Attorney’s Office, along with our partners at the FBI and Department of Justice’s Computer Crime and Intellectual Property Section, are committed to finding these criminals, interrupting their networks, and holding them accountable.”

Jha alone also pleaded guilty to a series of attacks against the Rutgers University network — where Jha was a student — between Nov. 2014 and Sept. 2016.

Mirai creator attribution

Early reports following the Mirai botnet attacks, including the Dyn DDoS incident, attempted to attribute the attacks to nation-state actors and foreign adversaries. However, in January 2017, cybersecurity journalist and investigator Brian Krebs identified Jha and White as the likely Mirai creators. It is unclear what part his investigation played in the DoJ charges. Krebs was one of the first known victims of the Mirai DDoS attacks.

Lesley Carhart, security incident response team lead at Motorola Solutions, said on Twitter that this case against the Mirai creators should be a moment to realize “attribution is complex.”

AU combines talent analytics with HR management

The use of talent analytics may be creating a need for HR staff with specialized training. One source for these skills is programs that offer master’s degrees in analytics. Another may be a new program at American University that combines analytics with HR management.

American University, or AU, is making talent analytics, which is also called people analytics, a core part of the HR management training in a new master’s degree program, said Robert Stokes, the director of the Master of Science in human resource analytics and management at AU.

Stokes said he believes AU’s master’s degree program is unique, “because metrics and analytics run through all the courses.” He said metrics are a part of that training in talent management, compliance and risk reduction, to name a few HR focus areas.

Programs that offer a master’s degree in analytics are relatively new. The first school to offer this degree was North Carolina State University in 2007. Now, more than two dozen schools offer similar programs. There are colleges that offer talent analytics training, but usually as a course in an HR program.

These master’s programs produce graduates who can meet a broad range of business analytics needs, including talent analytics.

“We definitely have interest from companies in hiring our students for their HR departments,” said Joel Sokol, the director of the Master of Science in analytics program at the Georgia Institute of Technology.  “It’s not the highest-demand business function that our students go into, of course, but it’s certainly on the list,” he said in an email.

Sokol also pointed out that one of the program’s advisory board members is a vice president of HR at AT&T.

Analytics runs through all of HR

The demand for analytics-trained graduates is high. North Carolina State, for instance, said 93% of its master’s students were employed at graduation and earned an average base salary of just over $95,000.

Interest in master’s degree analytics training follows the rise of business analytics. The interest in employing people with quantitative talent analytics skills is part of this trend.

What HR organizations are trying to do is discover “how to drive value from people data,” said David Mallon, the head of research for Bersin by Deloitte, headquartered in New York.

“It wouldn’t shock anybody” if a person from supply chain, IT or marketing “brought a lot of data to the table; it’s just how they get things done,” Mallon said. “But in most organizations, it would be somewhat shocking if the HR person brought data to the conversation,” he said.

Mallon said he is seeing clear traction by HR departments to deliver better analysis, a trend backed up by Bersin's just-released research on people analytics maturity. But he said only about 20% are doing new and different things with analytics. “They have data scientists, they have analytics teams, [and] they’re using new kinds of technologies to capture data, to model data,” he said.

The march to people analytics

“Conservatively, our data shows that at least 44% of HR organizations have an HR [or] people analytics team of some kind,” Mallon said. The percentage of departments with at least someone responsible for it — even part time — may be as high as 67%, he said.

The AU program’s first class this fall has about 10 students, and Stokes said he expects it to grow as word about the program spreads. Most HR programs that provide analytics training do so under separate courses that may not be integrated with the broader HR training, he said.

The intent is to use analytics and metrics to measure and make better decisions, Stokes said. An organization, for instance, should be able to quantify how much fiscal value is delivered by a training program. This type of people analytics may still be new to many HR organizations, which may rely on surveys to assess the effectiveness of a training program.

Organizations that are more mature aren’t just using surveys to try to determine employee engagement, Mallon said. They may be analyzing what’s going on in internal and external social media.

“They’re mining — they’re watching the interactions of employees in collaboration platforms and on your intranet,” Mallon said. “They’re bringing in performance data from existing business systems like ERPs and CRMs,” he said.

The best-performing organizations are using automation and machine learning to handle the routine reporting to free up time for higher-value research, Mallon said. But they are also using these tools “to spot trends that they didn’t even know were there,” he said.

Cloud computing technology for 2018: Transform or die

Cloud computing technology is creating business opportunities so radically new and different that they can be built only if we junk much of what we know, how we operate and even how we think — everywhere in the enterprise, not just within IT. In other words, transform or die.

That was the emphatic, no-nonsense message delivered by Ashish Mohindroo, vice president of Oracle Cloud, and Bill Taylor, co-founder and founding editor of Fast Company magazine. They spoke at the Boston stop of the 2017-2018 Oracle Cloud Day roadshow in November.

Legacy data centers won’t help, said Mohindroo. Neither will recreating on-premises complexity in the cloud. It’s time to think in new ways, as is typified by Uber and Lyft redefining transportation and Airbnb transforming the hospitality industry.


During a time of disruption, don’t let what you know limit what you can imagine, warned Taylor, giving a combination of scared-straight and do-it-now-or-else advice to an audience of about 400 IT professionals.

Generational shift

IT is currently in the midst of a once-every-20-years tectonic shift, according to Mohindroo. The most recent, the 1990s shift from client/server computing to the internet, is now being supplanted by the transition to cloud computing. The upheaval is far-reaching and impossible to avoid.

“No industry is immune,” Mohindroo said, citing key cloud computing technology drivers that include artificial intelligence, machine learning, blockchain, autonomous software, the internet of things and advances in human interface design.

A potentially debilitating problem that businesses face today is that existing legacy IT infrastructures and strategies were not built to leverage new technologies, support new business models, offer adequate control and do it all quickly. Traditional data centers, Mohindroo said, were constructed in a siloed manner, built for maximum capacity and peak loads, but not designed to be elastic, integrated or flexible.

Complicating matters is that each siloed service doesn’t talk to others and may have been built to differing standards. Integrating them can be difficult when incompatible standards, including authentication, database design or communications protocols, get in the way.

Though Mohindroo’s presentation eventually led into a sales pitch for Oracle’s cloud computing technology platforms, the underlying message was vendor-neutral and clear. For businesses to survive, they must undergo a cloud transformation consisting of essential foundational services: data as a service (DaaS), software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS). Those services, he said, need to be based on open technologies and standards, including SQL and NoSQL databases.

Six journey paths

Oracle defines six distinct pathways into the cloud. Each offers differing appeal depending on the age of the company, its compute workload and compliance mandates, among other factors. The six options include the following:

  • Optimize an existing on-premises data center with plans to migrate later.
  • Install a complete cloud infrastructure on premises behind the corporate firewall. The advantages of this are behind-the-firewall security and a pay-as-you-go model for usage.
  • Move existing workloads into a cloud infrastructure with minimal optimization, often referred to as lift and shift. Mohindroo said the key challenge with this popular scenario is dealing with less-than-optimal I/O bottlenecks.
  • Create all new, cloud-resident applications, developed using PaaS and IaaS technology, to fully replace outmoded legacy applications. DaaS replaces the legacy on-premises database. Advantages of this model include the availability of a wide variety of open source languages and services for application development, data management, analytics and integration, along with support for virtual machines, containerization for portability and Kubernetes for orchestration.

    “The whole concept behind this is to make it easy for you to run your business,” Mohindroo said.

    One way to utilize this option is through Oracle’s advanced AI and machine learning cloud technology. For example, Oracle offers an autonomous database that Mohindroo claims is self-running — managed, patched and tuned in real time without human intervention.

  • Replace the core legacy application base with subscription-based, third-party SaaS counterparts. Similar to option four, this model offers application development tools for customization, along with the same AI and machine learning technology.
  • Choose a born-in-the-cloud model, which would be the logical choice for new companies that have no legacy IT operation or applications, Mohindroo said.

Change the way you think

Mohindroo’s presentation was crafted to deliver a purely cloud computing technology message.

Taylor’s talk, which largely avoided tech speak, still targeted IT managers, application developers and operations personnel, saying their collective efforts can benefit from understanding the human side of the user experience. To do that, he said, requires becoming fully immersed in every nuance of what it means to be a customer.

Taylor suggested that IT employees expand their view beyond the technology.


“Are you determined to make sure that what you know doesn’t limit what you can imagine going forward?” he said. “Are you … learning as fast as the world is changing?”

Taylor’s message can be taken two ways: Gain insight into the people who use the cloud applications you build or learn about each new cloud computing technology and programming language or risk being left behind.

Taylor cited San Antonio-based USAA, the financial services company that serves military families, as an example of a leader in technology-driven disruption that immerses every employee — even highly skilled application developers — in understanding the customer experience. USAA gives new employees a packet called a virtual overseas deployment. The idea is to spend a day role-playing as a member of the Army Reserve or National Guard suddenly called up to active duty.

“You’ve got four weeks to get your financial affairs together,” Taylor said.

The exercise forces the role-player to go through credit card statements, bank statements, life insurance and car payments — all to help USAA employees understand what their customers need.

“They’re not early adopters of technology because they love technology per se; it’s because they’re so committed to their identity in the sense of impacting customers in their marketplace,” Taylor said. 

Emaar digitally transforms The Dubai Mall for a bold, new retail experience of the future

Emaar Properties is a real estate developer renowned for creating iconic monuments, attractions and premium lifestyle communities in Dubai, United Arab Emirates. One of Emaar’s most famous attractions is The Dubai Mall, the world’s largest shopping and entertainment destination with 1,200 retailers, 80 million annual visitors and 12.1 million square feet of shops, restaurants, movies and hotel amenities.

But The Dubai Mall is becoming something even more: A retail experience of the future, with a new digital nervous system that connects people, anticipates their needs and curates memorable experiences.

As more people gravitate toward online shopping, Emaar is using modern technologies to transform brick-and-mortar retail at the mall into engaging experiences that go beyond product fulfillment.

With Microsoft Azure, Dynamics and Power BI, Emaar can collect and analyze more than 10,000 points of telemetry at the mall to better understand its customers, predict their needs and desires, and proactively deliver tailored experiences through Emaar’s entire ecosystem. The advanced analytics and data insights are also helping Emaar’s retailers reach more customers and serve them better.

The end result is more engagement, community and social connections at The Dubai Mall – and a big step toward Emaar’s vision for a bold, dynamic future in Dubai.