5 questions with: Dr. Mark Michalski, Director of the Center for Clinical Data Science at Massachusetts General and Brigham and Women’s Hospitals
If you know someone who works in healthcare – a hospital administrator, radiologist, technologist, nurse, or any other clinician – ask them if they’ve heard about the ways that Artificial Intelligence (AI) will impact their work.
Now, ask those same people if they’ve felt that impact. (And if you are one of these healthcare professionals, thank you for reading and please consider sharing a comment below).
Chances are the answers to these two questions will not be the same. Many have heard about AI’s promise; few have actually felt its power. While I think we’ve mostly moved past doubting AI’s potential, the question now seems to be just how much is hype. I heard many radiologists ask this a few weeks ago at the Radiological Society of North America’s (RSNA) annual conference, whenever the topic of data, machine learning or AI came up.
One reason for this is healthcare’s complexity – often there have been as many different, disconnected attempts to implement AI in a hospital as there are systems and programs in that hospital.
But the other reason is that there have indeed been hyped-up predictions and proposals for what AI will do. All of us in the industry shoulder some of the responsibility for this, though I think a lot of it comes from well-intentioned excitement for the future.
This brings me to my question: should I tell you about our AI platform?
At GE Healthcare, our intelligence platform has been quietly spurring apps and AI-powered devices that are now showing very real results. Dr. Rachael Callcut and a team at UCSF built an initial algorithm capable of detecting pneumothorax, a condition that can be deadly if not diagnosed quickly; it alerts the radiologist to read the chest X-ray sooner, rather than letting it wait in the queue the standard two to eight hours. At several test sites, a workflow is using deep learning and anatomy recognition to teach itself from a database of more than 36,000 MRI images of the brain. It can now shoulder a manual step that previously burdened radiologists, giving them more time to focus on the patient and reducing the number of patients called back to redo the exam.
Our employees and partners appropriately call our AI platform “Edison” – after the founder of GE and the determined inventor who wasn’t afraid to learn from his mistakes – but the name will be new to most others. Should we keep it that way?
Yes, GE Healthcare is a business that needs to promote its value and solutions to survive in the marketplace. But after careful consideration, we believe the answer really comes down to our responsibility to share what we know, even if it’s fast-changing, to help the industry advance, our partners succeed, and patients receive the best treatment.
But these questions aren’t just for us. Knowing the AI debate is real and thriving, everyone in healthcare should ask when and how we share our work and plans in AI – and how we do so responsibly and ethically.
These are questions we spend a lot of time considering at GE Healthcare. I can’t say we have all of the answers, but I can say we will keep asking these questions of our partners and peers to learn from their insights and perspectives.
As a start, I asked one of healthcare’s AI pioneers to weigh in on the topic overall – where we’re going, what’s working and what we should believe along the way. Here’s what Dr. Mark Michalski, Director of the Center for Clinical Data Science at Massachusetts General and Brigham and Women’s Hospitals, had to say:
What is the biggest opportunity with big data that hospitals are missing today?
Dr. Michalski: There is a tremendous amount of opportunity to improve patient care and hospital operations by leveraging data science. For example, we can use data that is often already generated by healthcare systems to predict how a patient will react to a therapeutic based on a quick analysis of how other patients like them reacted. Or similarly, to predict the likelihood of a patient being readmitted for a chronic disease, or missing an appointment – but only if we capture and harness this data. There is a wide range of possibilities, but hospitals need the tools to be able to accomplish all this.
The AI debate – where do you stand? How do you advise we avoid the hype?
Dr. Michalski: I think it’s important to define what we mean by AI. What people mean when they invoke the term AI isn’t always so clear. Perhaps it’s better to say that there are certain tools within machine learning – such as deep learning – which have shown remarkable progress in the last couple of years, especially in assisting very important tasks for radiologists such as image segmentation and classification (that is, identifying abnormal findings and highlighting abnormalities in images). Doubt is natural this early in the development and adoption of a technology, especially with this much hype. But there will be products that meaningfully incorporate this technology for physicians and patients quite soon.
Why do you think radiologists are uniquely positioned to lead the charge in AI?
Dr. Michalski: The most recent advances in machine learning – such as deep learning – work very well with image and video data, so radiology and other specialties that use image data (like pathology, dermatology, ophthalmology, radiation oncology and others) have been among the first impacted. Beyond this, radiology has a relatively well-structured set of data, which is the foundation for building machine intelligence. Finally, the radiology community has already made significant investments in IT and tech infrastructure – it is generally familiar with high-tech systems (like scanners, integrations with EHRs, etc.), making adoption and integration of machine learning technologies perhaps easier.
How do we balance both ethical and effective AI?
Dr. Michalski: This trade-off is one we think a lot about at the Center for Clinical Data Science (CCDS). Patient access issues impact us in the data sciences, since patients who cannot engage the healthcare system effectively may not be effectively incorporated into the models that we develop. If we aren’t mindful of the limitations, we might end up with systems that could be very useful for one population but not for another. This is a very important consideration when thinking about how to maintain the quality of care for marginalized populations or resource-poor settings. It’s critical that this technology is democratized; we need to find ways to make it benefit as many people as possible.
Describe the future of healthcare in three words
Dr. Michalski: Quick, quantified, precise.