Forty years after it was built, the mainframe responsible for Medicare Part A claims is moving to the cloud, at least in part because of anticipated value-based care needs.
The United States Digital Service (USDS) was asked by the Centers for Medicare & Medicaid Services (CMS) to bring its oldest and arguably most complicated system into the 21st century. Sweeping changes to a healthcare data center are never for the faint of heart, but this project is especially challenging, said Shannon Sartin, executive director of the USDS. “This is our most technically hairy project and it processes claims worth 4% of the [U.S.] GDP,” which points to the massive data volume involved, Sartin said. “This has an impact on the way we pay for the care of the world, and we need to bring it to the point [where it can handle] value-based care.”
At a time when both providers and payers are struggling with the harsh realities of moving complex and regulated legacy systems to the cloud, the approach the USDS is taking with the Medicare healthcare data center could prove a useful guide.
Founded in 2014, the USDS is a self-described “startup at the White House” and is made up of 150 to 200 private sector technologists who serve terms ranging from 18 months to four years. The group has tackled tech transformations at healthcare.gov, vets.gov and the IRS, among many others. Earlier this year, the USDS rolled out Blue Button 2.0, an API designed to make Medicare claims interoperable. To date, Blue Button 2.0 is in production in 10 applications, and over 1,100 developers are experimenting with it.
Project involves 10 million lines of code
Now the team has turned its attention to practically the mother of all healthcare data center challenges: a computer that runs 8 million lines of COBOL and close to 2 million lines of customized assembly code. In other words, it’s complicated, said Scott Haselton, a digital service expert at the USDS and the lead engineer on this project.
“Medicare pays its claims out of a series of four separate systems,” he explained. “Medicare asked us to take a look at one of the oldest, and it has a lot of issues around general maintenance due to its age, rigor and fragility.” And even if age wasn’t a factor, a mainframe isn’t the best choice for the move to value-based care, Haselton said. “Taking into account we’ve got budding value-based payments coming out of CMS, if you do a lot of [modernization], it decreases the risk you won’t pull everything off.”
So, the move to the cloud has begun, but slowly. “Our approach is to take very slow, iterative steps,” Haselton said. “CMS has tried modernization approaches in the past that were a ‘big bang,’ where you turn off one system and turn on another and hopefully everything works. Our approach this time is to take small steps where we have small deltas between changes and we can feel good about the management and security and sanity.”
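One common way to keep those deltas small, consistent with the iterative approach Haselton describes, is a parallel run: the legacy path stays authoritative while the new path runs in its shadow and any divergence is recorded. A minimal sketch in Java; the class names, figures and logic below are hypothetical illustrations, not CMS code:

```java
import java.util.ArrayList;
import java.util.List;

public class ParallelRunSketch {
    // Stand-in for the legacy mainframe calculation (illustrative only).
    static double legacyCalc(double claimAmount) {
        return claimAmount * 0.80;
    }

    // Stand-in for the translated cloud module (illustrative only).
    static double cloudCalc(double claimAmount) {
        return claimAmount * 0.80;
    }

    // Claims whose results diverged between the two paths.
    static final List<Double> divergences = new ArrayList<>();

    // Serve results from the legacy path, but run the new path in its
    // shadow and record any mismatch so it can be fixed before cutover.
    static double shadowedCompute(double claimAmount) {
        double legacy = legacyCalc(claimAmount);
        double shadow = cloudCalc(claimAmount);
        if (Math.abs(legacy - shadow) > 1e-9) {
            divergences.add(claimAmount);
        }
        return legacy; // legacy stays authoritative during the trial
    }
}
```

Only once the shadow path has matched the legacy path over real traffic would a team cut over, which is what makes the delta between any two steps small.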
To get started, Haselton and his team needed a small, low-risk piece of code that wouldn’t bring the entire healthcare data center down if things went badly. They chose to use a module that handles certain Medicare payments to inpatient rehab centers because the code was little more than a mathematical equation that didn’t require calls to a database or anything else complicated. “This is a well-defined, very stable and very simplistic piece of code,” he said, which made it easier to debug.
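A module of the kind Haselton describes, pure arithmetic with no database or external calls, might look something like this once translated to Java. The labor split, rates and names below are purely illustrative assumptions; CMS's actual payment formula is not described in this article:

```java
public class RehabPaymentSketch {
    // A self-contained payment calculation: base rate adjusted by a
    // case-mix weight and a regional wage index. All inputs and the
    // 70/30 labor split are hypothetical, not CMS's real logic.
    public static double computePayment(double baseRate,
                                        double caseMixWeight,
                                        double wageIndex) {
        double laborShare = 0.70;                        // illustrative split
        double labor = baseRate * laborShare * wageIndex; // wage-adjusted part
        double nonLabor = baseRate * (1.0 - laborShare);  // fixed part
        return (labor + nonLabor) * caseMixWeight;
    }
}
```

Because a function like this is deterministic and side-effect free, its output can be compared bit for bit against the mainframe's, which is what made the module easy to debug.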
Sandbox environment lets developers verify
So far the strategy is working. The USDS team set up a sandbox, a software developer “safe zone” for experimentation, and stood up a cloud environment next to the mainframe in the healthcare data center. A commercial tool translated the COBOL into the more mainstream Java programming language and then refactored the module to function like a modern application. After that transition, the mainframe was able to make the call to the API in the cloud, so the next step is to move this out of the sandbox and into production, Haselton said.
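Exposing a translated module as a cloud API that the mainframe can call could look roughly like the sketch below, using the JDK's built-in HTTP server. The endpoint name, port and placeholder calculation are all assumptions for illustration, not the actual interface:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class CloudModuleApi {
    // Placeholder for the translated module's calculation (illustrative).
    static double compute(double amount) {
        return amount * 1.02;
    }

    // Expose the module on a hypothetical /payment endpoint so the
    // mainframe side can reach it with an ordinary HTTP call.
    static HttpServer startServer(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/payment", exchange -> {
            String query = exchange.getRequestURI().getQuery(); // e.g. amount=100
            double amount = Double.parseDouble(query.split("=")[1]);
            byte[] body = Double.toString(compute(amount))
                                .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

With `startServer(8080)` running, a request to `/payment?amount=100` would return the computed result, so the mainframe only needs a thin HTTP client where the old in-process call used to be.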
But the pace will remain deliberate. “We’re going to look for more modules that are self-contained and work with those until we have a proof of concept,” he said. “Once we’ve done that a couple of times, we’re going to look for bigger and uglier pieces of code.”
Haselton didn’t have an estimate for when the transition will be complete. “The big thing is really trying to establish a process around how to make changes and create a playbook. The goal is five to 10 years down the line, CMS will be managing this process with five to 10 years of history on the changes they can refer to.”