Getting a quick and accurate reading of an X-ray or another medical image can be vital to a patient’s health and might even save a life. Obtaining such an assessment depends on the availability of a skilled radiologist, however, so a rapid response is not always possible.
For that reason, MIT researchers wanted to train machines that are capable of reproducing what radiologists do every day. Although the idea of utilising computers to interpret images is not new, the researchers are drawing on an underused resource to improve the interpretive abilities of machine learning algorithms: the vast body of radiology reports that accompany medical images, written by radiologists in routine clinical practice.
The team is also utilising a concept from information theory called mutual information, a statistical measure of the interdependence of two variables, to boost the effectiveness of their approach.
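For two discrete variables, mutual information can be computed directly from their joint probability table. The short NumPy sketch below uses an invented joint distribution purely for illustration; it shows that dependent variables carry nonzero mutual information, whereas independent variables would carry none.

```python
import numpy as np

# Hypothetical joint probability table for two binary variables X and Y
# (rows index X, columns index Y); the values are made up for illustration.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])

px = joint.sum(axis=1, keepdims=True)  # marginal distribution of X
py = joint.sum(axis=0, keepdims=True)  # marginal distribution of Y

# I(X; Y) = sum over x, y of p(x, y) * log2( p(x, y) / (p(x) * p(y)) )
mi_bits = float(np.sum(joint * np.log2(joint / (px * py))))
print(round(mi_bits, 3))  # about 0.278 bits; 0 would mean independence
```

If the joint table factorised exactly into the product of its marginals, every log term would be zero and the mutual information would vanish.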
In the first step, a neural network is trained to determine the extent of a disease, such as pulmonary oedema, by being presented with numerous X-ray images of patients’ lungs, along with a doctor’s rating of the severity of each case. That information is encapsulated within a collection of numbers. A separate neural network does the same for text, representing its information in a different collection of numbers.
A third neural network then integrates the information between images and text in a coordinated way that maximises the mutual information between the two datasets. When the mutual information between images and text is high, the images are highly predictive of the text, and the text is highly predictive of the images.
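The article does not state the researchers’ exact training objective, but a common way to maximise mutual information between two sets of embeddings is a contrastive (InfoNCE-style) loss, whose minimisation tightens a lower bound on the mutual information. Below is a minimal NumPy sketch under that assumption, with randomly generated vectors standing in for the outputs of the image and text networks.

```python
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.1):
    """Contrastive loss over matched (image, text) embedding pairs.
    Row i of each matrix is one pair; minimising this loss pushes matched
    pairs to be more similar than mismatched ones, which maximises a lower
    bound on the mutual information between the two representations."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # pairwise cosine similarities
    # Cross-entropy with the matching text (the diagonal) as the target.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 16))                        # stand-in image embeddings
txt_aligned = img + 0.01 * rng.normal(size=(8, 16))   # texts matched to images
txt_random = rng.normal(size=(8, 16))                 # texts paired at random

# Aligned pairs share more information, so their contrastive loss is lower.
print(info_nce_loss(img, txt_aligned) < info_nce_loss(img, txt_random))
```

The temperature and embedding sizes here are arbitrary choices for the sketch, not values from the MIT work.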
Rather than working from entire images and radiology reports, the researchers break the reports down into individual sentences and the portions of the images that those sentences pertain to. This estimates the severity of the disease more accurately than viewing the whole image and whole report. And because the model is examining smaller pieces of data, it can learn more readily and has more samples to train on.
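The gain in training samples from this decomposition is easy to see: one report containing several sentences yields several training examples instead of one. A toy sketch, where the report text is invented and the pairing of each sentence with its image region is omitted:

```python
# Invented example report; in the actual system each sentence would also be
# paired with the image region it describes, which is not shown here.
report = ("Mild interstitial oedema is present. "
          "There is no pleural effusion. "
          "The cardiac silhouette is enlarged.")

# One (image, report) pair becomes several (region, sentence) samples.
sentences = [s.strip() for s in report.split(".") if s.strip()]
print(len(sentences))  # 3 training samples instead of 1
```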
A pilot program is currently underway at the Beth Israel Deaconess Medical Center to see how MIT’s machine learning model could influence the way doctors managing heart failure patients make decisions, especially in an emergency room setting where speed is of the essence.
The model could have very broad applicability. It could be used for any kind of imagery and associated text — inside or outside the medical realm. This general approach, moreover, could be applied beyond images and text.
AI has been adopted in healthcare for multiple purposes. As reported by OpenGov Asia, U.S. scientists have developed a new, automated, AI-based algorithm that can learn to read patient data from Electronic Health Records (EHR). In a side-by-side comparison, the scientists showed that their method identified patients with certain diseases as accurately as the traditional, “gold-standard” method, which requires much more manual labour to develop and perform.
Previously, the researchers showed that unsupervised machine learning could be a highly efficient and effective strategy for mining EHR data. The potential advantage of their approach is that it learns representations of diseases from the data itself, so the machine does much of the work that experts would normally do to define the combination of data elements from health records that best describes a particular disease.
Overall, the results are encouraging and suggest that the system is a promising technique for large-scale phenotyping of diseases in EHR data. With further testing and refinement, the researchers hope that it could be used to automate many of the initial steps of clinical informatics research, allowing scientists to focus their efforts on downstream analyses such as predictive modelling.