
Health Tech Podcast: How AI is making humans the ‘fundamental thing in the internet of things’


Twenty years ago, in 1998, you would have been hard-pressed to find a single hospital room with a personal computer in it.

Patient files were kept in filing cabinets. Prescriptions were written by hand. The Human Genome Project was still only halfway through its years-long, multibillion-dollar effort to sequence a complete human genome. In short, data about our health was scarce.

Today, electronic medical records and advanced genetic sequencing have completely changed the landscape — and brought challenges and opportunities that are almost impossible for humans to tackle on their own. That’s where artificial intelligence steps in.

AI is finding fertile ground for growth in hospitals and medical labs around the world, promising to give human health a boost as it addresses everything from preventing heart attacks to revolutionizing how we diagnose diseases. That interplay is the topic of our most recent Health Tech Podcast, and if the experts are to be believed, it’s just the beginning.

“I see this possibility of precision health, where people are the most fundamental thing in the Internet of Things,” said Peter Lee, a corporate vice president at Microsoft who leads the company’s NExT program. “When we’re looking 10 years out, that sort of precision in diagnosis and treatment, I think, can be incredibly powerful.”

Peter Lee, a corporate vice president at Microsoft, leads the company’s efforts in applying AI technology to healthcare. (GeekWire Photo / Clare McGrane)

Lee works with industry partners to implement cutting-edge technologies, including AI, and his team started looking into health applications of AI about a year ago.

“Honestly, that feels a little bit like being thrown into the middle of the Pacific Ocean and asked to find land, because healthcare is just such a huge, huge space,” Lee said. “But as time has gone on, over the past year, we’ve really gotten completely sucked into it and we’re pretty excited.”

Ankur Teredesai, a longtime University of Washington data scientist and co-founder and CTO of health AI startup KenSci, started studying artificial intelligence twenty years ago, back in 1998. At that point, the technology was still in its infancy, much as health data was at the time.

“Just this abstract concept of intelligence, which could be derived from a computer program, was fascinating to me,” Teredesai said. He went on to found the Center for Data Science at UW Tacoma in 2010 and saw a huge opportunity in health.

“There was a ready availability of data emerging” in health, he said, “but there were hardly any data scientists that were looking at solving big problems in this space.”

KenSci now works on AI that predicts which patients will get sick and helps hospitals intervene early.

The confluence of huge amounts of data, advancing AI technology and the many challenges facing healthcare in the U.S. puts the industry in a unique position, ripe for new ways of solving old problems.

One issue Microsoft NExT is taking on is personalized medicine: treatments and other health interventions tailored to an individual based on their genes and other health data.

Lee said precision medicine is interesting to Microsoft for two reasons.

“One is: Precision medicine still depends — a lot — on fundamental research and especially research in AI and machine learning,” he said, “and two, the computing workloads are really very data dependent and typically involve very large volumes of data.”

Microsoft just announced a new, multi-hundred-million-dollar precision medicine project with Adaptive Biotechnologies, a Seattle company that specializes in sequencing the genes of immune cells. Microsoft will help Adaptive build a machine learning program that can scan those genes and use the data to figure out what diseases someone might have, potentially months or years before they show symptoms.

KenSci co-founders Ankur Teredesai (left) and Samir Manjure. (KenSci Photo)

Microsoft is also working on a patient-facing AI chatbot that will help people navigate their healthcare and insurance benefits — and although those two projects seem totally different on their face, Lee says the technology behind them is actually very similar.


“In a way, both the health bot technology and what we’re doing with Adaptive — from an AI perspective — have common roots in what we do today in machine learning for language processing,” he said.

The health bot is rooted in language because it needs to have a natural language interface that can hold a conversation with a user. It turns out the biotech project with Adaptive is also a language problem.

“You have these antigens that are indicative of some disease state in your body. Those antigens are like words that are telling some story about what’s going on in your body. The T-cell receptors that are part of your immune system are like a translation of those words into a new language,” Lee explained. Adaptive’s technology lets researchers read and understand the genetics of a T-cell.

“From an AI perspective, what we’re trying to do is use machine learning to do the language translation from the T-cell receptors back to the antigens so that we can understand what your body is saying,” Lee said.
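
To make the analogy concrete, here is a minimal, hypothetical sketch of that “translation” idea. It is not Adaptive’s or Microsoft’s actual system; the receptor sequences, antigen labels and k-mer featurization below are all invented for illustration. The sketch treats each T-cell receptor sequence as a “sentence” of amino-acid fragments and learns a mapping back to the antigen it responds to.

```python
# Toy sketch of the "immune system as language" idea described above.
# NOT Adaptive/Microsoft's actual pipeline; all data here is synthetic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled pairs: (T-cell receptor sequence, antigen label)
tcr_sequences = ["CASSLGQAYEQYF", "CASSLGQGAYEQYF", "CASSPDRGRYNEQFF",
                 "CASSPDRGYNEQFF", "CASRRTGELFF", "CASRRSGELFF"]
antigens      = ["flu", "flu", "cmv", "cmv", "ebv", "ebv"]

# Featurize each sequence as overlapping 3-mers (a crude "vocabulary"),
# then fit a simple classifier: the "translation" step in miniature.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(tcr_sequences, antigens)

# "Read" an unseen receptor and predict which antigen story it tells.
print(model.predict(["CASSLGQAYEQFF"]))  # likely "flu" in this toy setup
```

A real system would work over millions of sequenced receptors rather than six toy strings, but the structure of the problem, mapping one sequence “language” onto another, is the same.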

Teredesai and KenSci are working on a different kind of artificial intelligence problem: predicting future events from a patient’s past history.

KenSci’s technology uses patient data from electronic medical records to do a few things, chief among them predicting which patients will get chronic diseases.

“A chronic condition patient — a patient who is a diabetic, or has an episode of heart failure — often starts off as a patient that is normal,” Teredesai said.

“They are able to take care of themselves, they are [at] very low risk of mortality. And it is the small details in their daily lives that — if they manage properly — they can have a very, very successful life that is pain-free, disease-free and leads to a desirable outcome where mortality can be managed, to a great extent,” he said.
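
A toy version of that kind of risk model might look like the sketch below. To be clear, this is not KenSci’s product: the features, synthetic patients and outcome labels are all invented, and a real system would draw on far richer electronic medical record histories.

```python
# Minimal sketch of a chronic-disease risk model; features and data invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical EMR-derived features per patient:
X = np.column_stack([
    rng.normal(55, 12, n),    # age
    rng.normal(28, 5, n),     # BMI
    rng.normal(130, 15, n),   # systolic blood pressure
    rng.normal(5.8, 0.8, n),  # HbA1c
    rng.poisson(0.5, n),      # ER visits in the last year
])
# Synthetic outcome: risk rises with HbA1c and blood pressure
# (a stand-in for real, observed chronic-disease events).
logits = 0.9 * (X[:, 3] - 5.8) + 0.03 * (X[:, 2] - 130) - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# A per-patient risk score like this is what lets a hospital intervene early.
print("held-out accuracy:", model.score(X_te, y_te))
print("risk for one new patient:", model.predict_proba(X_te[:1])[0, 1])
```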

KenSci also works on tools that help hospitals see patterns in how patients are faring. It might help a hospital change certain policies or give caregivers new training to improve health outcomes, for example.

Teredesai also emphasized that KenSci’s products — and all AI programs — are not replacements for humans in the health system. He actually prefers to call AI “augmented intelligence” or “assistive intelligence,” making the point that it must work in tandem with doctors, “rather than act like death robots that are controlling the entire healthcare ecosystem,” he said.

Lee also raised a point of concern about AI in the health field: its reliance on patterns of statistical correlation.

“Medicine is, properly, not based on correlations, but is based on causal relationships — and it needs to be that way. That’s why medical research is really founded on ideas about having controlled experiments, about really understanding statistical significance and really being very wary of making decisions based only on statistical correlations,” Lee said.

“So there’s a gap right now between what we are actually doing today in the world with machine learning and AI and what medical science has always been based on,” he said. That gap must be closed if AI is to reach its full potential in the health world.

And Lee predicts a bold and successful future for AI in health. At the moment, he says, AI is helping build one-off clinical tools that are useful in specific situations.

“What I see coming after that is something that really would be enabled by a much broader look across large populations, to look at longitudinal patient records for millions or even hundreds of millions of people and understand them in ways that can be actionable by clinicians and by healthcare organizations,” he said.

“If you were looking 10 years out, I see this possibility of precision health where people are the most fundamental thing in the Internet of Things,” he said. “It’s not just your Fitbit but it’s your genome, it’s your activities all day every day.”

“It’s where you live, it’s who is living around you, what you’re eating… those things are creating a kind of digital avatar that can virtually see an intelligent doctor. Maybe every day, or even every hour.”

He compared the idea to the way the health of cars is monitored today: if something goes wrong in the car, sensors detect it and notify the driver before he or she can even tell there’s a problem. The same could be done with health monitoring systems for the body.

“Today, health care is 95 percent about people and chemistry and drugs and so on and 5 percent compute, and that gets to a world where health care really starts to flip. It will approach something more like 5 percent that stuff and 95 percent compute,” Lee said.

It may be a decade or two before we carry virtual AI doctors in our pockets, but it’s clear AI is already having a significant impact on the health world.

Machine learning’s training data is a security vulnerability

For as long as there has been an internet, there has been data manipulation. In 1998, Sergey Brin and Larry Page launched Google as a search engine that was designed to get rid of the junk in search results. But the more sophisticated system soon gave rise to more sophisticated data manipulation methods, including Google bombing — the use of crowdsourcing to bias search results.

History shows that the data manipulation methods used by adversarial actors evolve in lockstep with the technology. That’s a fact that should get CIOs’ attention, according to data expert Danah Boyd, especially as companies increasingly use machine learning to power predictive features in their applications. Machine learning requires training data — a lot of it — to get the algorithms working correctly, and one popular data resource used by developers is the internet.

“I’m watching countless actors trying to develop new, strategic ways to purposefully mess with systems with an eye on messing with the training data that all of you use,” said Boyd, a researcher at Microsoft and a presenter at the recent Strata Data Conference in New York. “They are trying to fly below the radar. And if you don’t have a structure in place for strategically grappling with how those with an agenda might try to rout your best-laid plans, you’re vulnerable.”

To stand up to a quickly evolving threat landscape, Boyd proposed that companies return to the old-school practice of rigorous testing and find ways to inject what’s known as adversarial thinking into the design and development process. That includes hiring what Boyd called “white hat trolls.”

Training data is at risk

The manipulation of data by bad actors is nothing new, Boyd said, citing the relatively benign example that became known as “Rickrolling.” In 2007, pranksters disguised hyperlinks to trick people into watching English singer and songwriter Rick Astley’s 1987 hit music video, “Never Gonna Give You Up.”

While Rickrolling was entertaining, Boyd said the methods behind its success have served as the basis for more nefarious data manipulation. The prank not only taught people how to manipulate systems, but also showed the strategic benefit of going viral, and it was an antecedent to the disinformation campaigns of the 2016 presidential election.

Pizzagate, the debunked conspiracy theory that linked Hillary Clinton’s presidential campaign to human trafficking and child pornography, for example, required a distributed network of dummy accounts, known as sock puppets, to bait journalists into reporting on it, Boyd said.

But the data manipulation methods are about to get a lot more sophisticated. Up until now, the manipulation of algorithmic systems has relied on manual methods. With the onset of machine learning, that’s about to change, according to Boyd.

To tune machine learning algorithms, developers often turn to the internet for training data — it is, after all, a virtual treasure trove of the stuff. Open APIs from Twitter and Reddit, for example, are popular training data resources. Developers scrub them of problematic content and language, but the data-cleansing techniques are no match for the methods used by adversarial actors, according to Boyd.
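
A deliberately naive sketch of that scrubbing step shows why. Everything below, from the blocklist to the scraped posts, is invented for illustration; real data-cleansing pipelines are more elaborate, but they face the same cat-and-mouse problem.

```python
# A naive "scrub the scraped data" filter, invented for illustration.
BLOCKLIST = {"buy now", "free money", "click here"}

def looks_clean(post: str) -> bool:
    """Keep a post only if it trips no blocklist phrase."""
    text = post.lower()
    return not any(phrase in text for phrase in BLOCKLIST)

scraped_posts = [
    "Great thread on machine learning pitfalls.",
    "FREE MONEY!!! click here",
    "fr3e m0ney, cl1ck h3re",   # trivially evades the filter
]

training_data = [p for p in scraped_posts if looks_clean(p)]
print(training_data)
# The obfuscated spam sails through, and coordinated actors can poison
# a training set far more subtly than this.
```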

Nicolas Papernot, a computer science and engineering grad student at Penn State University, published a paper last year on his experiments with computer vision. He altered images of stop signs so the neural nets saw them as yield signs. Here’s the detail that should make CIOs nervous: The changes could not be detected by human eyes.
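
Papernot’s stop-sign work used black-box attack techniques, but a simpler, related method illustrates the principle. The sketch below applies the fast gradient sign method (FGSM, from Goodfellow et al.) to a toy, untrained image classifier; the model, image and labels are stand-ins, not the actual experiment. The key point is that the perturbation is capped at a tiny per-pixel budget, which is why human eyes can’t see it.

```python
# FGSM sketch on a toy classifier (untrained, so the flip isn't guaranteed
# here; against a trained model, a step this small can change the label).
import torch
import torch.nn as nn

model = nn.Sequential(                        # stand-in for a sign classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 2),  # class 0 = stop, 1 = yield
)
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # "stop sign" input
true_label = torch.tensor([0])

# Gradient of the loss with respect to the *input*, not the weights.
loss = loss_fn(model(image), true_label)
loss.backward()

epsilon = 2 / 255  # imperceptibly small per-pixel change
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("max pixel change:", (adversarial - image).abs().max().item())
print("prediction before:", model(image).argmax().item(),
      "after:", model(adversarial).argmax().item())
```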

The most successful injection attacks on machine learning models are happening in research, but the methods, which are evolving, will no doubt be aimed at mainstream models soon, according to Boyd. “It’s time we started building technical antibodies,” she said.

Needed: Adversarial thinking

One antibody is to return to what Boyd called “a culture of test.” Today, the technology industry often relies on A/B testing, or what Boyd described as “the perpetual beta,” essentially turning customers into a quality-assurance (QA) department. But when members of the QA department are also looking to expose bugs in the system for their own gain, CIOs might need to rethink the process.

“QA wasn’t simply about finding bugs,” Boyd said. “It was also about integrating adversarial thinking into the design and development process. That’s a lot of what we lost.”

She pointed to cutting-edge researchers who are doing just that — building adversarial thinking into the development of machine learning systems. Generative adversarial networks (GANs) are one example. GANs use two unsupervised neural nets: one generates data that looks like the data it was trained on; the other discriminates between generated data and real data. The two neural nets play a zero-sum game until the generated data is indistinguishable from the real data.
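
A minimal sketch of that setup, on toy one-dimensional data rather than anything production-scale, might look like this (the architecture and hyperparameters below are invented for illustration):

```python
# Toy GAN: generator G and discriminator D in a zero-sum game.
# "Real" data is just samples from a 1-D Gaussian.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> fake
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # real data: N(3, 0.5)
    fake = G(torch.randn(64, 4))

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator into calling fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 4))
# Drifts toward the real distribution's mean ~3.0 and std ~0.5.
print("generated mean/std:", samples.mean().item(), samples.std().item())
```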

But GANs don’t go far enough, according to Boyd. “We need to actively and intentionally build a culture of adversarial testing, auditing and learning into our development practice,” she said. “We need to build analytic approaches to accept the biases of any data set we use. And we need to build tools to monitor how the systems evolve with as much effort as we build the models in the first place.”

And CIOs might want to go one step further and take a page out of Matt Goerzen’s book: Invite white hat trolls to mess with systems and help companies understand system vulnerabilities — just like companies do with white hat hackers.

“We no longer have the luxury of only thinking about the world we want to build,” Boyd said. “We must also start thinking about how others might start to manipulate our systems [and] undermine our technologies with an eye to doing harm and causing chaos.”