Cutting Edge

Imaging and AI (Deep Learning) in the Cath Lab

Jeff Sorenson, President and CEO

TeraRecon, a SymphonyAI Group company, Durham, North Carolina

How is artificial intelligence (AI) interacting with computed tomography angiography (CTA) to help interventionalists in their decision making?

The power and potential of coronary CTA is now being fully recognized. It is a reimbursed procedure, and soon there will be four times as many coronary CTAs performed as there are today. The question is, who can read them? How hard is it to get trained to read them? How useful is the output from post-processing for the interventionalist? There is more information in CTA images than can be extracted in human-intensive workflows. Even when CTA is used as a selection tool, can the study be prepared so that more clinicians can benefit from the knowledge it contains? Can we deliver an analysis right into the procedure room so that the physician is better informed? CTA shows coronary calcification and soft plaque, while angiography only shows the lumen. Are people calculating and analyzing soft plaque on coronary CTAs? This is a pivotal moment because the technology is now available: artificial intelligence. AI in imaging is really deep learning, which infers differences and similarities among images. With deep learning, we can, for example, infer the presence of a stenosis, vulnerable plaque, or calcification.
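
To make that concrete, here is a minimal sketch of the kind of multi-label inference being described, assuming a small convolutional network and illustrative finding labels; it is not TeraRecon’s model, and the random input stands in for a real CTA image patch.

```python
# Minimal sketch (illustrative assumptions, not TeraRecon's implementation):
# a multi-label CNN that, given a CTA image patch, infers the probability
# of several findings at once.
import torch
import torch.nn as nn

FINDINGS = ["stenosis", "vulnerable_plaque", "calcification"]  # assumed labels

class FindingClassifier(nn.Module):
    def __init__(self, n_findings: int = len(FINDINGS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_findings)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Independent sigmoid per finding: multi-label, not mutually exclusive.
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

model = FindingClassifier().eval()   # in practice, trained weights would be loaded
patch = torch.randn(1, 1, 128, 128)  # stand-in for a preprocessed CTA patch
with torch.no_grad():
    probs = model(patch)[0]
for name, p in zip(FINDINGS, probs.tolist()):
    print(f"{name}: {p:.2f}")
```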

TeraRecon’s core business is offering a very broad set of tools that can serve a whole hospital, including the cardiologist. Our ability to provide CTA image processing and delivery (integrating results into the PACS) was both a technological project and an infrastructure project. We focused for years on developing scalability. Where the world is headed now is near button-free and completely automatic. Image processing and delivery needs to be web-based and have the genius of AI processing algorithms available to contribute to the interpretations; thus, it has to be an open system. When TeraRecon first looked at this idea, we realized there was no system in the world that could accomplish these things. There was no app or player for this content. There was no way to integrate these tools into the cath lab. There was no way to put real AI into the image interpretation systems of different vendors. That has been our journey.

TeraRecon created and built the Eureka AI platform, a web-based interaction tool that can display the results of advanced image interpretation from AI. Eureka AI offers all of the fidelity of the coronary CTA analysis, with all the vessel centerline tools and the ability to manipulate and see the AI findings. You don’t need buttons. You don’t need a separate workflow to achieve it. AI does the processing and creates a display. That has value to the non-invasive cardiologist. It has value to the interventionalist. It works the same in your EMR as it does on your phone, in your PACS, and in the procedure room. TeraRecon is not affiliated with a particular CT scanner vendor, and we don’t build cath labs. We are a software company building advanced processing solutions. The use of Eureka AI means the physician will walk into the intervention lab with a plan. Instead of just making a stick, they will walk in with a plan before even starting the procedure.

How does Eureka AI inform the physician’s workflow and approach to intervention?

Historically, companies take technology and create some kind of novel output. Then they try to get validation in order to convince a doctor to use it. That is the whole problem. It can’t work, because physicians are busy people. Let’s turn it upside down and say, “What if I give the physicians a result, and they can choose to use it or not?” Imagine that in a cath lab, all I did was tell them what the calcium load was and what the soft plaque burden was, and gave them a rotational tumble view of the anatomy, which I can do completely automatically. I put that in their left eye. You can’t tell a doctor what to do, but you can show them what they’re doing. The reason physicians, 99.99% of the time, do not use information is that it is impossible to obtain within their workflow, or that they haven’t yet run into circumstances where they would value it. But once they do, they will always use it. With the automation of AI, we put additional information that is relevant to their case in their peripheral vision. That can happen pre-procedurally or during the intervention. The physician knows when they are better informed; earning that belief is the trick of fitting into their workflow.

Eureka AI is a web-based tool that can consume and present this content. It is interactive. I am able to give physicians the view they want, the layout they want, or even to reduce the amount of content in the view. It is all programmable. The platform is recording everything: every layout, every interaction, every view, every payload, every source, every output file, everything. That is what AI needs in order to learn. Physicians can interact with the data. If they need an answer and they don’t believe it, they can work with the data until they get to belief. I can measure the difference between what I gave them and what they did. In fact, if they save the information or use it, I can measure that as well. The hurdle to adoption becomes so much lower with this interactivity, which is why we had to build the whole platform from the ground up. Such a system doesn’t exist anywhere in the world, except for the Eureka AI platform.
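
As a rough illustration of the kind of interaction record such a platform might capture (the field names and schema here are assumptions, not Eureka AI’s actual design):

```python
# Hypothetical sketch of an interaction event log entry; every user action
# becomes structured data that downstream learning can consume.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class InteractionEvent:
    user_id: str     # which physician
    study_id: str    # which exam
    action: str      # e.g. "change_layout", "rotate_view", "save_finding"
    payload: dict    # action-specific detail (layout name, view angle, ...)
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

# One recorded interaction, serialized the way a platform might store it.
event = InteractionEvent("dr_smith", "cta_0042", "change_layout",
                         {"from": "4-up", "to": "2-up"})
print(json.dumps(asdict(event)))
```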

Can you describe the ongoing learning that is done by the platform?

It is tracking what the user is doing in order to anticipate the proper layouts. AI systems generally are not ready to be run in an open loop; like a mammography system, you have to keep watching the brightness and contrast, and evaluating for accuracy. What happens with AI today is that all the feedback is collected, used as training data, and measured against the previous data. Then you can see if the system has measurably improved. More data doesn’t always mean better, but we can measure better. At the point of care, we can certainly use the data to learn the physician’s preferences, so that if we’re giving them four things, we learn whether they really only want two, or whether they want six.
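
A toy sketch of that measure-before-promote pattern, assuming a held-out benchmark and a simple accuracy metric; the closed-loop idea is from the text, but the code itself is illustrative, not TeraRecon’s pipeline:

```python
# Retrain on accumulated feedback, then promote the new model only if it
# measurably beats the current one on a fixed held-out benchmark.
def evaluate(model, benchmark) -> float:
    """Return accuracy of `model` on a held-out benchmark of (x, y) pairs."""
    correct = sum(model(x) == y for x, y in benchmark)
    return correct / len(benchmark)

def maybe_promote(current, candidate, benchmark, min_gain=0.01):
    base, new = evaluate(current, benchmark), evaluate(candidate, benchmark)
    # "More data doesn't always mean better, but we can measure better."
    return candidate if new >= base + min_gain else current

# Usage with trivial stand-in "models" (plain callables) and toy data:
benchmark = [(0, 0), (1, 1), (2, 0), (3, 1)]
current = lambda x: 0        # always predicts 0 -> 50% on this set
candidate = lambda x: x % 2  # predicts parity  -> 100% on this set
best = maybe_promote(current, candidate, benchmark)
print("promoted" if best is candidate else "kept current")
```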

The problem is that systems today just burn the findings into images. Images are static things you put up on a viewer. That approach is fundamentally flawed.

Does platform interaction occur more during the pre-procedure timeframe or during the case?

I think it is going to be a mix. For example, let’s say the physician wanted to know the significant stenoses and the minimum diameter. Most cardiologists, given their busy workday, would say, “All I need to know is mild, moderate, or severe, and give me a little map.” Yet post-procedurally, they may say, “Hey, I want to go back and look at what I got into.” Circumstances may cause them to see something on the screen and then, next time, say, “I want to check it out ahead of time.”

AI, for all the buzz, is just math. It is not a product or a feature; it is a technology, a form of math. For example, AI can get good enough over time to tell physicians the stenosis severity and the confidence level. If I tell you there is a 97% left main stenosis but I am only 32% confident, that probably is all you need to know; I don’t have a lot of information for you. If I say I have five 30% stenoses and I’m 97% confident, now you are sure I am finding diffuse disease. It may be that the patient has only had a nuclear stress test showing some sort of defect. Then what I am offering is valuable information: that you are walking into a case with diffuse disease. You may need multiple stents, and maybe you should do some other work ahead of time, such as opening the tools to review the images and rotate around the coronary vessels before ever making the stick.
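
A brief sketch of how findings might pair a severity estimate with a confidence score, mirroring the example above; the names, threshold, and data are illustrative assumptions, not an actual product schema:

```python
# Each finding carries both an estimate (severity) and how sure the model is
# of that estimate (confidence); reporting separates the two.
from dataclasses import dataclass

@dataclass
class StenosisFinding:
    vessel: str
    severity_pct: float  # estimated diameter stenosis; 97 means a 97% stenosis
    confidence: float    # model confidence in this finding, 0..1

def summarize(findings, min_confidence=0.9):
    """Split findings into those the model is confident about and the rest."""
    confident = [f for f in findings if f.confidence >= min_confidence]
    uncertain = [f for f in findings if f.confidence < min_confidence]
    return confident, uncertain

findings = [
    StenosisFinding("LM", 97.0, 0.32),    # severe estimate, low confidence
    *[StenosisFinding(v, 30.0, 0.97)      # five mild stenoses, high confidence
      for v in ("LAD-prox", "LAD-mid", "LCx", "OM1", "RCA")],
]
confident, uncertain = summarize(findings)
print(f"{len(confident)} confident findings suggest diffuse disease;"
      f" {len(uncertain)} low-confidence finding(s) need review.")
```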

How is Eureka AI different from technologies like image co-registration, such as CTA overlaid onto fluoroscopy?

That is really image fusion: taking two modalities and using the CT as a way to be oriented to the procedure. That is an important thing to do. In the context of the cath lab, I think of it as the road map physicians use to guide the procedure. Image fusion does not auto-analyze the case pre-intervention, which requires a system that can work with all modalities and all vendors, and then create informative views out of those data sets. That is our sweet spot. We are the largest vendor-neutral advanced visualization solution provider. Instead of tools being integrated into a suite, we are an AI advanced visualization platform that works pre- and post-procedure, and across various imaging modalities. That has to be the case, doesn’t it? Physicians work at many different facilities. We are able to wrap the physician with tools and patient information that work in many different places, and do it with a level of automation that doesn’t require a user-driven experience. It is a very different approach.

When you look at pre-intervention planning, I’ll be the last one to suggest physicians change their workday, because I truly empathize with the difficulties. I like to ask about something I think of as the miracle two minutes. Instead of thinking about what happens during any given procedure, think about everything that couldn’t happen. Most of those are things that take 20 minutes in prep, or difficult procedure planning: getting yourself completely oriented, or perhaps running different types of AI analysis. We ask a physician, “Are there things that you would do for your niece that you can’t do for everybody every day? What are those things?” What we can do for the physician is take those 20 minutes of work and compress them into two minutes, or maybe one minute, or maybe zero minutes, in the Eureka AI display. It changes the art of the possible. It changes what information physicians can bring with them into the lab and what they can do for every patient. There are many ways I can buy a physician two minutes. If you spend that time back across all of their patients, the impact is remarkable.

The opportunity is for technology to not just be a whizbang thing that is cool, that four doctors like and nobody else uses, but to actually be part of a personalized daily practice of medicine. It is always going to be delivered in horizontal, enterprise-wide platforms. It is never going to be delivered in vendor-specific suites. It will never be hardware. It will be software and AI platforms. That is why SymphonyAI Group, which purchased TeraRecon last year, was such an incredible fit for our company. Their whole thesis is to invest in systems of intelligence that create dashboards of information with super high impact across any vertical. We are multivendor, and we offer advanced visualization that is AI-powered and delivers a high return on investment across many different types of procedures.

We have talked a lot about pre-procedural planning and interaction with CTA. Are there other aspects of Eureka AI that we haven’t discussed?

Yes. With the same platform, you don’t have to work only on cross-sectional imaging. In the cath lab, cardiologists often use ultrasound as an imaging modality. It is an instance where powerful forms of deep learning can be used to guide the technologist to create a better image. By relating the cath lab fluoroscopy image to what the CT looks like, you can create a fused, better understanding of the anatomy, or even a different type of imaging altogether: one that infers the likelihood of soft plaque in different locations, but based on fluoroscopy. The way deep learning works is by inference; it is basically saying, “I assume,” with a certain confidence. If you collected 100,000 cath lab studies and 100,000 CTAs with soft plaque analysis, you could train an algorithm to answer the question: how do you infer the appearance of a known soft plaque on a fluoroscopy image, something near-impossible to identify with the human eye?

In the cath lab, there is also a lot of focus on measuring fractional flow reserve. This is another interesting application of AI, where it is possible to look at things like blood flow, or to discern pressures and the physiologic consequences of anatomic anomalies. Creating real-time tools that can use more patient information than is known by any individual and project it in a way the physician never could is, I would dare say, actually easy. The hard part is putting it into the workflow and creating the user experience. Training a model like the one I described above, if those huge amounts of data are available, is maybe a two- or three-month process. The hard part is making it matter in the clinical workflow of the physician without disrupting them. That is where we have focused. We want to offer impact exactly at the right times so that physicians are better informed and can make better treatment decisions. There are no other AI interoperability platforms that work across modalities, across vendors, and across specialties. For us, it took a whole lot of money, a whole lot of hands on keyboards for a whole lot of years, and a number of patents on the technology.
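
As a hedged sketch of that cross-modality idea, the following trains a small network to predict CTA-derived soft-plaque masks from paired fluoroscopy frames; the architecture is arbitrary and the random tensors stand in for the paired studies described above. It is an assumption of the general pattern, not TeraRecon’s method.

```python
# Cross-modality supervision sketch: labels come from CTA soft-plaque
# analysis, inputs are co-registered fluoroscopy frames.
import torch
import torch.nn as nn

class PlaqueFromFluoro(nn.Module):
    """Maps a fluoroscopy frame to a per-pixel soft-plaque likelihood map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one logit per pixel
        )

    def forward(self, x):
        return self.net(x)

model = PlaqueFromFluoro()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-ins for paired data: fluoroscopy frames and soft-plaque masks derived
# from co-registered CTA analyses (random tensors here, for illustration).
fluoro = torch.randn(8, 1, 64, 64)
plaque_mask = (torch.rand(8, 1, 64, 64) > 0.9).float()

for step in range(3):  # in reality: many epochs over ~100,000 paired studies
    optimizer.zero_grad()
    loss = loss_fn(model(fluoro), plaque_mask)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```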

Do people fully perceive the advantages and potential of AI or deep learning?

No, and it is understandable. I had never experienced it for myself until I bought a Tesla a few months ago. In theory, having a car that can drive itself sounds cool, but the first time you try it, it is unnerving. You have some basic questions, mostly along the lines of: how do I get back to where I started if something goes wrong? In reality, too, the experience is never as good as advertised. You have to understand the limitations and the benefits of the tool, which I didn’t at first. Yet now I have a hard time driving a different car. It is not that I’m not paying attention; rather, it is the feeling of, why do I have to be attached so directly, so often, to make this tool (the car) do what I want? When AI was introduced, the wider cultural perception was that AI is all about robots. Deep learning isn’t a robot. Deep learning in medicine is a math equation running on image data. That is very far from a “robot” that is going to replace a physician. With healthcare consolidating, reimbursements dropping, procedure volumes increasing, and physician shortages growing, flexibility has been slipping away from physicians, and they are being told, “Just do exactly this every time.” The power of technology is that we can bring some personalization back and, importantly, buy some time back for the physician. We can start incorporating additional thoughtfulness into the procedures they are already doing. To believe that AI is going to take your job, you have to believe you are never going to change. The idea of AI evolving to replace the absolute need for a cardiologist is far-fetched. AI can replace tasks, but it can’t replace the physician.

Learn more or contact TeraRecon at https://www.terarecon.com