Automating AI for Clinical Decision-Making

Some artificial intelligence (AI)-based techniques are helping improve prediction of risks for certain diseases or disorders. Machine-learning models, for example, can be trained to find patterns (“features”) in patient data and then estimate a patient’s risk of a disease such as lung cancer.


A new machine-learning model developed by MIT researchers could accelerate the use of AI to improve medical decision-making. A notable advance in the MIT model is the automation of “feature engineering”, the process in which experts identify important features in massive patient datasets by hand.

Experts need to find and annotate these aspects, or “features”, in the datasets that will be useful for making predictions. Already a tedious and expensive process, feature engineering is becoming more challenging as wearables and remote medical devices collect vast amounts of patient data ever more rapidly.
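To make the manual step concrete, the sketch below shows what hand-crafted feature engineering on a wearable-sensor signal might look like: an expert picks a handful of summary statistics per time window, and the model only ever sees those. The sampling rate, window length and specific features are illustrative assumptions, not details from the MIT study; automating exactly this selection step is what the new model is designed to do.

```python
import numpy as np

def handcrafted_features(signal, fs=8000, window_s=5.0):
    """Split a 1-D sensor signal into fixed-length windows and compute a few
    summary statistics per window; every feature here is a hypothetical
    example of what an expert might choose by hand."""
    window = int(fs * window_s)
    n_windows = len(signal) // window
    feats = []
    for i in range(n_windows):
        seg = signal[i * window:(i + 1) * window]
        feats.append([
            np.sqrt(np.mean(seg ** 2)),                  # RMS amplitude (loudness proxy)
            np.mean(np.abs(np.diff(np.sign(seg))) > 0),  # zero-crossing rate (pitch proxy)
            np.std(seg),                                 # overall variability
        ])
    return np.asarray(feats)

# Example with synthetic data standing in for a real wearable-sensor recording
rng = np.random.default_rng(0)
fake_signal = rng.normal(size=8000 * 600)   # ten minutes of synthetic samples at 8 kHz
X = handcrafted_features(fake_signal)
print(X.shape)   # (number of windows, number of hand-picked features)
```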

According to the MIT researchers, their new model automatically identified voicing patterns of people with vocal cord nodules and, in turn, used those features to predict which people do and don’t have the disorder. The features came from a dataset of about 100 subjects, each with about a week’s worth of voice-monitoring data totalling several billion samples, captured by a small accelerometer sensor mounted on the subject’s neck, the researchers explain. In experiments testing the new model’s performance, it identified, with high accuracy, patients with and without vocal cord nodules. These are lesions that develop in the larynx, usually caused by patterns of voice misuse such as belting out songs or yelling.

As noted by the MIT team, their model was able to make accurate predictions of risks in the study patients without a large set of hand-labelled data. “It’s becoming increasingly easy to collect long time-series datasets. But you have physicians that need to apply their knowledge to labelling the dataset,” according to lead author Jose Javier Gonzalez Ortiz, a PhD student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to remove that manual part for the experts and offload all feature engineering to a machine-learning model.”

Traditionally, experts have had to manually identify features that may be important for a model to detect certain diseases. That helps prevent a common machine-learning problem in healthcare: overfitting, where a model in training “memorises” subject data instead of learning just the clinically relevant features. In testing, such models often fail to discern similar patterns in previously unseen subjects.
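One standard safeguard against this kind of subject-level memorisation is to split data by subject rather than by sample, so that no recordings from a test subject ever appear in training. The sketch below uses scikit-learn’s GroupShuffleSplit for that purpose; the data shapes, classifier choice and variable names are assumptions for illustration, not the MIT team’s actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical example: one feature row per time window, plus the subject it came from
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                  # windowed features
y = rng.integers(0, 2, size=1000)               # 1 = vocal cord nodules, 0 = control
subject_ids = rng.integers(0, 100, size=1000)   # roughly 100 subjects, as in the study

# Hold out whole subjects, so the model cannot "memorise" individuals it is tested on
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=subject_ids))

clf = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
print("accuracy on unseen subjects:",
      accuracy_score(y[test_idx], clf.predict(X[test_idx])))
```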

The MIT model, the researchers point out, performed accurately in both training and testing, indicating that it learns clinically relevant patterns from the data, not subject-specific information. Importantly, the MIT model can be adapted to learn patterns of any disease or condition. But the ability to detect the daily voice-usage patterns associated with vocal cord nodules is an important step in developing improved methods to prevent, diagnose, and treat the disorder, the researchers explain. That could include designing new ways to identify and alert people to potentially damaging vocal behaviours, adds the MIT team, which included researchers from Massachusetts General Hospital’s Center for Laryngeal Surgery and Voice Rehabilitation.

AI Tool Uses Radiology Reports for Cancer Outcomes

Researchers from the Dana-Farber Cancer Institute have developed an artificial intelligence (AI) tool that is just as effective as human reviewers at gathering information on cancer progression from unstructured radiology reports. The study, published in JAMA Oncology, showed that the AI could not only determine cancer presence and patient outcomes but could also do so at a faster rate.

Electronic health records (EHRs) can hold a great deal of data about a patient. However, information about cancer progression is normally noted only in the free text of the medical record, unless the patient is a participant in a clinical trial. Because it is unstructured, this information was previously unavailable for computer analysis, and reviews had to be conducted manually.

Kenneth Kehl, M.D., M.P.H., medical oncologist at Dana-Farber and corresponding author, said the goal of the study was to discover whether AI technology could extract important information about patients’ cancer status from radiology reports. Centres such as Dana-Farber can generate a great deal of information about patients and their progress, for example through projects such as the Profile initiative at Dana-Farber/Brigham and Women’s Cancer Center, which lets researchers analyse tumour samples and build profiles of genetic variants that could affect how patients respond to certain treatments. However, as Kehl explains, the difficulty in implementing precision medicine doesn’t always lie in gathering the data but rather in applying that information effectively to determine patient response.

In this latest study, over 14,000 imaging reports for 1,112 patients were manually reviewed using the PRISSMM framework. PRISSMM is a phenomic data standard that translates the unstructured medical text in EHRs into readily analysable data. The method takes into account symptoms, pathology, molecular markers and the medical oncologist’s assessment to predict patient outcomes.
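As a rough illustration of what such a structured label might look like once a free-text report has been curated, the sketch below defines a minimal per-report record. The field names and types are assumptions chosen for illustration; they are not the actual PRISSMM schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReportAnnotation:
    """A simplified, hypothetical per-report label; the real PRISSMM
    standard defines its own categories and coding rules."""
    patient_id: str
    report_date: str
    cancer_present: bool           # does the report describe any cancer?
    progression: Optional[bool]    # worsening since the prior report, if assessable
    response: Optional[bool]       # improvement/response since the prior report

# One curated example of the kind of label a human reviewer would produce
example = ReportAnnotation("patient-001", "2017-03-14",
                           cancer_present=True, progression=False, response=True)
```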

Initially, human reviewers annotated outcomes based on the imaging text reports, and these labels were used to ‘teach’ a deep learning computer model to predict the extent of cancer presence over time. Outcomes were classified by progression-free survival, disease-free survival and time to improvement/response. The results of the study showed that the AI could replicate the outcomes found by human reviewers. The researchers then let the AI algorithm predict the outcomes of 1,294 patients from 15,000 reports that had not first been manually reviewed. Again, the predictions were of similar accuracy to those of human reviewers.
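A minimal sketch of this “teach then apply” workflow appears below. It uses a TF-IDF bag-of-words model with logistic regression as a stand-in for the deep learning model the Dana-Farber team actually trained, and the report texts and label are invented examples, not study data.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, hand-labelled training examples standing in for curated reports
labelled = pd.DataFrame({
    "report_text": [
        "Enlarging right lower lobe mass, new liver lesions.",
        "No evidence of recurrent or metastatic disease.",
        "Stable pulmonary nodule, no new lesions identified.",
        "Interval increase in mediastinal lymphadenopathy.",
    ],
    "cancer_present": [1, 0, 0, 1],   # label produced by a human reviewer
})

# Stand-in model: TF-IDF features + logistic regression (the study used deep learning)
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(labelled["report_text"], labelled["cancer_present"])

# Apply the trained model to a report that was never manually reviewed
unreviewed = ["New hepatic metastases compared with prior study."]
print(model.predict(unreviewed))   # predicted label for the unreviewed report
```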

The Pitfalls of AI in Healthcare: Bias and Faulty Anonymisation