Cardiology has entered the information age, and modern medical practice now provides us with increasingly vast quantities of complex data, including electronic health records, blood tests, and imaging.
For over 30 years there have been efforts to automate the analysis of these data, including coronary angiograms. The earliest approaches used manually programmed algorithms built on traditional techniques such as edge detection1. Recently, however, there has been an explosion in medical computer vision, with significantly more complex analyses becoming possible.
“Deep learning” is the driving force behind this revolution. It represents a paradigm shift in the way we use computers to solve complex problems. Instead of using a traditional algorithmic approach, where a computer programmer instructs the computer to perform an explicit series of steps to solve a problem, we allow the machine to learn how to solve the task for itself. An “artificial neural network”, in some ways similar to the mammalian visual cortex, is modelled inside the computer. We show this neural network many thousands of examples of data, such as angiograms, each paired with the “correct” answer (such as whether a coronary stenosis is present). The computer repeatedly adjusts the strengths of the network’s synapses so that it performs its task a little better each time and, hopefully, a successful “digital brain” evolves. This artificial intelligence (AI) approach is often so successful that it can exceed the performance of the humans who train it2.
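For readers curious about what this looks like in practice, the sketch below shows a bare-bones supervised training loop in the PyTorch style. The tiny network, the random placeholder “angiograms” and the labels are purely illustrative assumptions, not the architecture or data of any system discussed here; the point is simply that the weights (the “synapse strengths”) are adjusted step by step to reduce the mismatch between the network’s predictions and the expert-provided answers.

```python
# Illustrative sketch of supervised training for binary stenosis detection.
# Model, data and labels are placeholders, not the system described in this editorial.
import torch
import torch.nn as nn

# A deliberately small convolutional network standing in for a real architecture.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1),  # one output logit: stenosis present or not
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder "angiograms" (random tensors) and expert labels (0 = no stenosis, 1 = stenosis).
images = torch.randn(64, 1, 128, 128)
labels = torch.randint(0, 2, (64, 1)).float()

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, labels)  # how far predictions are from the expert answers
    loss.backward()                 # compute gradients of the loss
    optimizer.step()                # adjust the network's weights ("synapse strengths")
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```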
It often seems difficult now to read a medical journal and not see AI mentioned at least once. However, despite AI’s successes, bold claims should be met with healthy scepticism. A recent review lamented how common it is for studies not to present results using externally validated data3. This matters because the greatest risk with any deep learning solution is “overfitting”. Overfitting refers to the phenomenon whereby a neural network performs very well on angiograms it has encountered during the training process, yet poorly on angiograms it has never seen. In simple terms, the network may learn to “memorise” certain angiograms rather than learn generalisable principles that will guide its decision making on new data. A failure to assess the network on completely separate data risks concealing this problem until it manifests as disappointing real-world accuracy.
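A minimal illustration of why held-out evaluation matters is sketched below, assuming a generic scikit-learn workflow with entirely synthetic data (nothing here relates to the study under discussion). Because the labels are random, there is nothing generalisable to learn, yet a sufficiently flexible model can still “memorise” its training set and report near-perfect training accuracy while performing no better than chance on held-out cases.

```python
# Sketch of overfitting: accuracy on the training set vs. a held-out set.
# Data and model are purely synthetic and illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))    # stand-in features, e.g. extracted from images
y = rng.integers(0, 2, size=500)  # random labels: nothing generalisable to learn

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)  # flexible enough to memorise
print("training accuracy:", accuracy_score(y_train, model.predict(X_train)))  # ~1.00
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))    # ~0.50 (chance)
```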
In this issue of EuroIntervention, Du et al report the performance of a novel AI solution that uses a neural network to analyse coronary angiograms4.
This system is able to segment the coronary tree, measure any stenoses, identify pathologies including thrombus and dissection, and automate SYNTAX score calculation. Whilst its performance is impressive, it is arguably the quality of its data that is most worthy of praise. Over 20,000 angiograms from 10,000 patients, across seven different angiographic views, were used in the study, with performance assessed on a set of 1,000 angiograms from separate patients. Two separate deep neural networks were trained: one for the recognition of coronary segments, the other for lesion morphology. For each patient, a total of 20 coronary segments were annotated. For lesion morphology, expert cardiologists annotated multiple parameters, including calcification, thrombus and dissection, in addition to the standard lesion parameters. Remarkably, all of this annotation was carried out by 10 certified cardiologists at Fuwai Hospital over a period of 11 months. The average recognition accuracy for coronary segments was 87.6%, while the F1 scores for the various lesion characteristics were above 0.80. Despite these very interesting results, it must be noted that this remains a single-centre study and, as yet, no system has been able to harness the rich temporal information in angiographic videos (instead, angiograms are interpreted as still images).
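For readers less familiar with this metric, the F1 score is the harmonic mean of precision (the proportion of lesions flagged with a characteristic that truly have it) and recall (the proportion of lesions truly having the characteristic that are flagged):

F1 = 2 × (precision × recall) / (precision + recall)

A value above 0.80 therefore implies that neither false positives nor false negatives were especially frequent for that characteristic.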
Will we see this system, DeepDiscern, in clinical use? There are significant regulatory hurdles to overcome before such a system could be used by cardiologists in the catheterisation laboratory5; the leap from “bench to bedside” is often frustratingly difficult6. Catheterisation laboratories in 2025 may not have DeepDiscern at cardiologists’ fingertips, but by 2035 most will almost certainly integrate similar AI solutions. Whilst Du et al are tackling the automated analysis of coronary angiograms, others are working on similar systems for aortic waveforms and coronary physiology7,8. Given these exciting advances, it may be that in 20 years we will not be asking about AI’s role in the cath lab, but rather about the cardiologist’s.
Conflict of interest statement
The authors have no conflicts of interest to declare.