AI in Healthcare: Lessons Learned From Moffitt Cancer Center, Mayo Clinic

Stan Gloss, founding partner at BioTeam, interviewed Ross Mitchell about how he has applied AI to healthcare and medicine over the course of his career. This article was originally published by Bio-IT World in August.

Until recently, Ross Mitchell served as the inaugural AI Officer at the Moffitt Cancer Center in Tampa, Florida. Now he’s offering consulting expertise born of a long career applying AI to healthcare and health systems.

Mitchell’s experience dates back to his undergraduate days. While pursuing his degree in computer science in Canada, he had a co-op work term at the local cancer center. “That was just a fluke how that placement worked out, but it got me interested in applying computer science to medicine. This was in the ’80s,” Mitchell says.

He did a master’s degree at the cancer clinic—“very technical work with computer hardware and oscilloscopes and low-level programming”—but his lab was right next to the radiotherapy waiting room.

“You got to see the same people in the clinic. Their treatments were fractionated over many days in many doses. You got to see them again and again, over many weeks and see them progress or decline,” he remembers.

That front-row seat to their long treatment plans solidified Mitchell’s interest in using computation to improve medicine—a goal he’s pursued ever since: first with a PhD in medical biophysics, then by spinning a company out of his lab and serving as its co-founder and Chief Scientist, and later by joining the Mayo Clinic in Arizona to build a program in medical imaging informatics.

In 2019, he took the role of inaugural AI officer at Moffitt Cancer Center, and in 2021, he began working as an independent consultant to help other organizations apply AI to healthcare. Mitchell recently sat down with Stan Gloss, founding partner at BioTeam, to discuss the practical knowledge he’s gathered as he has applied AI to medicine over the course of his career. Bio-IT World was invited to listen in.

Editor’s Note: Trends from the Trenches is a regular column from the BioTeam, offering a peek behind the curtain of some of their most interesting case studies and projects at the intersection of science and technology.

Stan Gloss: I’ve never heard of an AI officer. Can you tell me what that role is in an organization?

Ross Mitchell: It depends on the organization. More and more organizations in healthcare are developing roles like this. The role is not yet really well defined. It depends a lot on the organization: what they want to do and where they want to go. Nominally, it would be someone who’s going to guide and direct the development of AI and related technologies at the organization.

It’s significantly different than what people are calling a chief digital officer, right?

Moffitt recently hired a chief digital officer as a new role. Digital, by necessity, involves a lot of AI. The overlap between what you do with digital technology in a healthcare organization and what you do with AI isn’t exact, but it is significant. The best way now to process a lot of that digital data, analyze it, and turn it into actionable information involves AI at some point in the chain.

When you look at analysis options, what’s the difference between deep learning and machine learning?

The broad field of AI encompasses a number of subfields. Robotics is one. Another area is machine learning.

AI, many years ago, back in the ’70s and the ’80s, meant making computers perform intelligent-appearing activity by developing sets of rules. You would try to think of all the possible conditions that you might encounter, let’s say in a patient’s electronic health record in a hospital, and develop a series of rules.

A doctor’s work often follows simple algorithms: if a patient meets a set of conditions, that triggers an action or a response. You could code up those rules. Imagine a series of if-then-else statements in your computer monitoring things like blood pressure and temperature. If those values got out of whack, it might trigger an alert that would say, “We think this patient is going septic. Go take a look at them.”
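
To make that concrete, here is a minimal sketch of what such a rule base can look like in code. The thresholds, field names, and messages are hypothetical illustrations of the if-then-else style, not clinical logic from any real system.

```python
# Minimal sketch of a rule-based alert. Thresholds and field names are
# made up purely to illustrate the hand-written if-then-else style.
def sepsis_alert(vitals):
    """Return an alert message if any hand-written rule fires, else None."""
    temp = vitals.get("temperature_c")
    hr = vitals.get("heart_rate_bpm")
    sbp = vitals.get("systolic_bp_mmhg")

    # Rule 1: fever plus elevated heart rate.
    if temp is not None and hr is not None and temp > 38.3 and hr > 100:
        return "We think this patient is going septic. Go take a look at them."

    # Rule 2: low systolic blood pressure on its own.
    if sbp is not None and sbp < 90:
        return "Possible hypotension. Go take a look at this patient."

    return None

print(sepsis_alert({"temperature_c": 39.1, "heart_rate_bpm": 112,
                    "systolic_bp_mmhg": 118}))
```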

Those rule-based systems have been around for several decades, and they work well on simple problems. But when the issue gets complex, it’s really difficult to maintain a rule base. I remember many years ago trying to build commercial applications using rule-based systems and it was hard.

What ends up happening is that rules conflict with each other. You change one and it has a ripple effect through some of the other rules, and you find yourself in a situation where two conditions can’t both be true at once and yet that’s what the system relies on. Rule-based systems were brittle to maintain and complicated to build, and they tended not to perform well on really complex problems, though they were fine for simpler ones.

In the ’70s and ’80s, when the earliest machine learning came along, the machine learned to identify patterns by looking at data. Instead of having a person sit down and say, “When the blood gases get above a certain level, or the body temperature gets above a certain level and the heart rate gets below a certain level, do something,” you would present lots and lots of that data along with outcomes, and the machine would look for patterns in the data and learn to associate those patterns with different outcomes. Learning from the data is what machine learning is all about.
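
As a rough contrast with the hand-written rules above, here is a minimal sketch of “learning from the data,” assuming scikit-learn and an entirely synthetic table of vitals paired with outcome labels; none of it comes from a real clinical dataset.

```python
# Sketch: instead of hand-writing rules, fit a model to historical data
# (vitals plus recorded outcomes) and let it find the patterns itself.
# Assumes scikit-learn; all data here is synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(37.0, 1.0, n),   # temperature (C)
    rng.normal(80, 15, n),      # heart rate (bpm)
    rng.normal(120, 20, n),     # systolic blood pressure (mmHg)
])
# Synthetic outcome loosely tied to the vitals, standing in for real labels.
y = ((X[:, 0] > 37.5) & (X[:, 1] > 85)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict([[39.0, 110, 100]]))  # predicted outcome for new vitals
```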

In the early ’80s, one of the popular ways to learn from data was to use neural networks, which are loosely modeled on networks of neurons in mammalian brains. A lot of the foundational work was done by Geoff Hinton, a cognitive scientist and computer scientist in Toronto, Canada. He was interested in figuring out how the brain worked. While building synthetic circuits in the computer to simulate what was going on, he developed some of the fundamental algorithms that let us, as scientists, train these networks and have them learn by observing data.

Deep learning is a subspecialty of machine learning. To recap: you’ve got AI, which has a subspecialty of machine learning, which, in turn, has a subspecialty of deep learning. Deep learning just uses neural-network architectures to learn patterns from data, as opposed to something like a random forest algorithm.

What would be a good use of deep learning over machine learning? How would you use it differently?

I use both all the time. Deep learning is particularly effective under certain conditions. For example, when you have a large amount of data. The more data you have, the better these deep learning algorithms tend to work, because they just use brute force to look for patterns in the data.

Another important factor, as implied by “brute force,” is computational power. You really need it, and it has been growing exponentially for over 20 years. The top supercomputer in the world in 1996 had about the same gross computing power as the iPhone you carry in your pocket today. In other words, each of us is carrying around 1996’s top supercomputer in our pocket. There are things you can do now that weren’t even remotely conceivable in the ’90s in terms of compute power.

Of course, the advent of the Internet and digital technology in general means there’s an enormous amount of data available to analyze. If you have massive amounts of data and lots of compute power, using deep learning is a good way to go about pulling information out.

I generally try traditional machine learning first, and if it’s good enough to solve the problem, great, because it tends to be more explainable. Certain classes of algorithms are naturally explainable. They give you some insight into how they made their decision.

Deep learning is more difficult to get those insights out of. Lots of advances have been made in that area in the last couple of years specifically, and I think it’s going to get better and better as time goes on. But right now, deep learning using neural networks is seen as more of a black box than the older machine learning algorithms.

As a general rule of thumb, we try a machine learning algorithm like a random forest first, just to learn about our data and get insights into it, and then if we need to, we’ll move on to a deep learning algorithm.
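
Here is one hedged sketch of that try-the-simpler-model-first habit, again assuming scikit-learn and synthetic data with made-up feature names. The feature importances a random forest exposes are the kind of built-in insight into the data being described.

```python
# Sketch of trying an explainable model first: a random forest exposes
# feature importances that hint at which inputs drive its decisions.
# Assumes scikit-learn; data and feature names are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 1000
features = ["temperature", "heart_rate", "systolic_bp", "age"]
X = rng.normal(size=(n, len(features)))
# Synthetic label driven mostly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, importance in zip(features, forest.feature_importances_):
    print(f"{name:12s} {importance:.3f}")  # rough view of what the model used
```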

So this is all old technology? What’s new?

In 2012, there was a watershed moment in computer vision: convolutional neural networks were applied to images and run on graphics processing units for the compute power, and all of a sudden, algorithms that had seemed impossible just a few years before became trivial.

When I teach this to my students, I use the example of differentiating cats and dogs. There was a paper published by researchers at Microsoft in 2007 that described an online test to prove that you were human (https://dl.acm.org/doi/abs/10.1145/1315245.1315291). They showed 12 pictures, 6 cats and 6 dogs, and you had to pick out the cats. A human can do that with 100% accuracy in a few seconds. You can just pick them out. But for a computer at the time, the best it could hope for was 50-60% accuracy, and it would take a lot of time. So it was easy to tell whether a human or a computer was picking the images, and that’s how websites could check that you were human.

About six years later, in 2013, there was a Kaggle competition with prize money attached to develop an algorithm that could differentiate cats from dogs (https://www.kaggle.com/c/dogs-vs-cats). The caption on the Kaggle page said something like, “This is trivial for humans to do, but your computer’s going to find it a lot more difficult.” They provided something like 10,000 images of cats and 10,000 images of dogs as the data set, people submitted their entries, and they were evaluated. Within a year, all the top-scoring algorithms used convolutional neural nets running on graphics processing units to get accuracies over 95%. Today you can take a completely code-free approach, train a convolutional neural network in about 20 minutes, and it will score above 95% differentiating cats from dogs.

In the space of less than a decade, this went from a problem that was considered basically unsolvable to something trivial. You can teach it in an introductory course, and people can train an algorithm in half an hour that then runs instantly and gets near-perfect results. So with computer vision, we think of two eras: pre-convolutional neural nets and post-convolutional neural nets.
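
For readers who want to see what the modern recipe looks like, here is a minimal sketch of fine-tuning a pretrained convolutional network on a cats-versus-dogs image folder. It assumes PyTorch, a recent torchvision, and a hypothetical local directory layout; it illustrates the approach, not the specific code-free tool mentioned above.

```python
# Sketch of the modern recipe for cats vs. dogs: reuse a pretrained
# convolutional network and train only a new final layer.
# Assumes PyTorch and torchvision, plus a hypothetical folder laid out as
# data/train/cat/*.jpg and data/train/dog/*.jpg.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: cat, dog

# Only the new final layer is updated; the pretrained backbone is reused as-is.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass already goes a long way here
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```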

Something similar happened recently with natural language processing (NLP). In late 2018, Google published an NLP model called “BERT” (https://arxiv.org/abs/1810.04805). It generated over 16,000 citations in two years. That is a tremendous amount of applied and derivative work, and the reason is that it works so well for natural language applications. Today you can really think of natural language processing as two eras: pre-BERT and post-BERT.
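
As a small illustration of why BERT spawned so much derivative work, here is a sketch of loading it as a pretrained building block, assuming the Hugging Face transformers package is installed; the example sentence is arbitrary.

```python
# Sketch: the post-BERT workflow is to start from a pretrained language
# model and reuse its representations, rather than training from scratch.
# Assumes the Hugging Face `transformers` package is installed.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

text = "The margins of the resected specimen are free of tumor."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# One contextual embedding per token; these vectors can feed downstream
# classifiers for tasks like labeling clinical text.
print(outputs.last_hidden_state.shape)
```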

How are these more recent AI advances going to change healthcare and work? Will many people—physicians, technicians—be out of jobs?

My belief is the opposite will happen because this is what always seems to happen with technology. People predict that a new technological innovation is going to completely destroy a particular job type. And it changes the job—but it doesn’t destroy it—and ends up increasing demand.

One of the oldest examples of that is weaving, back in the early industrial age. When the automatic loom was invented, the Luddites rebelled against it because they were involved in weaving. What ended up happening was that these automated looms dramatically dropped the cost of producing fine, high-quality fabrics. That lowered the price and thereby increased demand. The number of people employed in the industry initially took a dip, but then increased afterwards.

Similarly, the claim was made that the chainsaw would put lumberjacks out of business. Well, it didn’t. If anything, demand for paper grew. And finally, in the ’70s, when the personal computer and the laser printer came along, people said, “That’s the end of paper. We’re going to have a paperless office.” Nothing could be further from the truth. We now consume paper in copious quantities because it’s so easy for anybody to produce high-quality output with a computer and a laser printer.

I remember when I was a grad student, when MRI first came on the scene and was starting to be widely deployed, people were saying, “That’s the end of radiology because the images are so good. Anybody can read them.” And of course, the opposite has happened.

I think what will happen is that you will see AI assisting radiologists and other medical specialists—surgeons, anesthesiologists, just about any medical specialty you can think of—there’s an application for AI in each of them.

But it will be a power tool; AI is basically a power tool for complexity. If you have the power tool, you’re going to be more efficient and more capable than someone who doesn’t.

A logger with a chainsaw is more efficient and productive than a logger with a handsaw. But it’s a lot easier to injure yourself with a chainsaw than it is with a handsaw. There have to be safeguards in place.

The same thing applies with AI. It’s a power tool for complexity, but it’s an amplifier as well. It can amplify our ability to see into and sort through complexity, but it can also amplify things like bias. There’s a very strong movement in AI right now to look into the effects of this bias amplification by these algorithms, and I believe that’s a legitimate concern and a worthwhile pursuit. Like any new powerful tool, it’s got advantages and disadvantages, and we have to learn how to leverage one and limit the other.

I’m curious to get your thoughts on how AI and machine learning are going to impact traditional hypothesis-driven research. How do these tools change the way we think about hypothesis driven research from your perspective?

It doesn’t; it complements it. I’ve run into this repeatedly throughout my career, since I’m in technical areas like medical physics and biomedical engineering, which are heavily populated by traditional scientists who are taught to rigidly follow the hypothesis-driven approach to science. That is called deductive reasoning—you start with a hypothesis, you perform an experiment and collect data, and you use that to either confirm or refute your hypothesis. And then you repeat.

But that’s very much a development of the 20th century. In the early part of the 20th century and the late 19th century, the opposite belief prevailed. You can read Conan Doyle’s Sherlock Holmes saying things like, “One should never ever derive a hypothesis before looking at the data because otherwise you’re going to adapt what you see to fit your preconceived notions.” Sherlock Holmes is famous for that. He would observe and then pull patterns from the data to come up with his conclusions.

But think of a circle. At the top of the circle is a hypothesis, and going clockwise around the circle, an arrow leads down to data. Hypothesis at the top; data at the bottom. As you go clockwise around the circle, you’re performing experiments and collecting data, and that will confirm or refute your hypothesis.

The AI approach starts at the bottom of the circle with data, and we take an arc up to the hypothesis. You’re looking for patterns in your data, and that can help you form a hypothesis. They’re not exclusionary; they’re complementary to each other. You should be doing both; you should have that feedback circle. And across the circle, you can imagine a horizontal bar: the tools that we build. These can be algorithms, or they could be a microscope. They’re things that let us analyze or create data.

When people use AI and machine learning, does that reduce the bias that may be introduced by seeking to prove a hypothesis? With no hypothesis, you’re simply looking at your data, seeing what your data tells you, and what signals you get out of your data.

Yes, it’s true that just mining the data can remove my inherent biases as a human, me wanting to prove my hypothesis correct, but it can amplify biases that are present in the data that I may not know about. It doesn’t get rid of the problem of bias.

I’ve been burned by that many, many times over my career. At Mayo Clinic, I was once working on a project analyzing electronic health records to try to predict hospital admission from the emergency department. On my first pass at the algorithm, I used machine learning that wasn’t deep learning, and I got something like 95% accuracy.

I’d had enough experience at that point that I was not excited or elated by that. My initial reaction was, “Uh-oh, something’s wrong.” Because you’d never get 95%. If it was that easy for an algorithm to make the prediction, people would have figured it out after dealing with these patients for years.

I figured something was up. So I went back to the clinician I was working with, an ER doc, and we looked at the data. It turns out admission status was in my input data, and I just didn’t know it because I didn’t have the medical knowledge to know what all those hundreds of variable columns meant. Once I took that data out, the algorithm didn’t perform very well at all.
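
Here is a hedged sketch of that kind of label leakage, with entirely made-up column names and synthetic data (not the Mayo project’s): a column that quietly encodes the outcome makes the model look spectacular until it is dropped.

```python
# Sketch of label leakage: `admission_status` is effectively the answer
# hiding among the inputs, so accuracy looks spectacular until it's dropped.
# Column names and data are hypothetical, not from any real project.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "heart_rate": rng.normal(85, 18, n),
    "admitted": rng.integers(0, 2, n),       # the target we want to predict
})
df["admission_status"] = df["admitted"]      # leaked copy of the label

model = RandomForestClassifier(n_estimators=100, random_state=0)

with_leak = cross_val_score(model, df.drop(columns="admitted"), df["admitted"])
without_leak = cross_val_score(
    model, df.drop(columns=["admitted", "admission_status"]), df["admitted"])

print("with leaked column:   ", with_leak.mean())    # near-perfect accuracy
print("without leaked column:", without_leak.mean())  # back to roughly chance
```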

There’s a lot of work now on trying to build algorithms using the broadest, most diverse data sets you can. For example, you don’t want to process images from just one hospital; you want to process images from 100 hospitals around the world. You don’t want to look only at hospitals where the patient population is primarily wealthy and well insured; you also want to look at hospitals where there are a lot of Medicare and Medicaid patients, and so on.

What advice would you give an organization for starting off in AI? Can you fast track your organization to actually get to the point where you can do AI and machine learning?

You can’t fast track it. You cannot. It’s an enormous investment and commitment and it’s often a cultural change in the organization.

My first and foremost advice is that you need someone high up in the organization, who probably reports to the CEO, with a deep, extensive background and practical hands-on experience in both healthcare and artificial intelligence. The worst thing you can do is hire somebody and give them no staff and no budget. You’re basically guaranteeing that the endeavor will fail. You need to give them the ability to make changes in the culture.

One of the biggest mistakes I see healthcare organizations make is hiring someone who has gone online, taken a couple of courses from Stanford or MIT, watched some YouTube videos, read a couple of papers, and got “into digital” five or six years ago, and then putting that person in charge of overseeing and directing the organization’s entire effort when they really have no experience to do that. It’s a recipe for failure.

You also can’t expect the physician in the organization to easily become an AI expert. They’ve invested 10-15 years of education in their subspecialty and they’re busy folks dealing with patients every day and dealing with horrific IT systems—electronic medical record systems—that make them bend to the technology instead of the other way around.

You want somebody who’s been doing healthcare AI for 20 years and really knows how to use the power tools and where to apply them. But that person has to be able to communicate with the physicians and also has to be able to communicate with the engineers doing the fundamental work.

It’s not a technical limitation that’s stopping us from advancing this in healthcare. It’s mostly a cultural issue in these organizations, and an issue of technical expertise.

Some of the biggest obstacles I hear about when I do these interviews are that the data is not ready for primetime, and that organizations really haven’t put the thought into how to structure and organize their data, so they’re not AI- or ML-ready. Have you seen that? And what can an organization do with their data to prepare?

That is very common. Absolutely.

It’s a whole issue unto itself. There’s tons of data out there. You’ve got an electronic medical record system in your large hospital containing all this data. How do you enable people within your organization with the appropriate skills to get at that data and use it to produce an analytic that will improve your outcomes, reduce cost, improve quality… or ideally all?

It’s a cultural issue. Yes, there are technical issues. I’ve seen organizations devote enormous effort into organizing their data and that’s beneficial, but just because it’s organized doesn’t mean it’s clean.

People say, “Oh, this is a great data set.” They’ve spent tons of time organizing it, making sure all the fields are right, cross-checking and validating. Then you go and use it to build an algorithm and discover something systemic about the way the organization collects data that completely throws off your algorithm. Nobody knew, and there was no way to know ahead of time, that this issue was in the data and needed to be taken care of.

That’s why it’s so critical to have a data professional, not just someone who can build and populate a database. You need someone who is an expert in data science, who knows how to find the flaws that may be present in the data, and who has seen the red flags before.

Remember, my reaction when I got the 95% accurate algorithm wasn’t one of elation. I knew we needed to do a double check there. And sure enough, we found an issue.

I ran into something very similar recently at Moffitt in the way dates were coded. The billing department was assigning an ICD code to the date of admission rather than the date when the actual physiological event occurred, and we didn’t pick up on this until six months into the project. It completely changed the way the algorithm worked, because the dates were wrong and we were trying to predict something in time. The dates we had were wrong relative to other biological events going on in that patient.

Moffitt has terrific organization of their data. They’ve done one of the best jobs I’ve seen among the healthcare organizations I’ve worked with. But that didn’t mean the data was clean; it meant that it was accessible. When I wanted to train a model to understand the language of pathology reports, I asked for all of Moffitt’s digital pathology reports, and within seven days I had almost 300,000 of them.

Unbelievable.

Yeah, it was amazing. I was just shocked. That’s the kind of cultural change that needs to be in place.
