Automating AI for Clinical Decision-Making
Artificial intelligence (AI)-based techniques are helping to improve risk prediction for certain diseases and disorders. Machine-learning models, for example, can be trained to find patterns (“features”) in patient data and then estimate a patient’s risk of having lung cancer.
A new machine-learning model developed by MIT researchers could accelerate the use of AI in medical decision-making. A notable advance in the MIT model is its automation of “feature engineering”, the process in which experts identify important features in massive patient datasets by hand.
Experts must identify and annotate the features in a dataset that will be useful for making predictions. Feature engineering is already a tedious and expensive process, and it is becoming more challenging as wearables and remote medical devices collect vast amounts of patient data at an ever faster rate.
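To make the idea of feature engineering concrete, here is a minimal sketch (not the MIT team’s code; the function name and the toy trace are illustrative) of the kind of hand-crafted summary statistics an expert might extract from a raw sensor time series before training a model:

```python
import statistics

def extract_window_features(signal, window_size):
    """Summarise a raw time series into per-window statistics.

    Each window is reduced to (mean, standard deviation, peak amplitude):
    the sort of hand-crafted features an expert might engineer, and which
    a model like MIT's aims to learn automatically instead.
    """
    features = []
    for start in range(0, len(signal) - window_size + 1, window_size):
        window = signal[start:start + window_size]
        features.append((
            statistics.fmean(window),       # average level
            statistics.stdev(window),       # variability
            max(abs(x) for x in window),    # peak amplitude
        ))
    return features

# Toy accelerometer-like trace: a quiet segment followed by a louder one.
trace = [0.1, -0.1, 0.2, -0.2, 1.5, -1.4, 1.6, -1.5]
features = extract_window_features(trace, window_size=4)
```

Each window of the toy trace collapses to three numbers; a real deployment would compute far richer descriptors, which is exactly why doing it by hand does not scale to billions of samples.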
According to the MIT researchers, their new model automatically identified voicing patterns of people with vocal cord nodules and, in turn, used those features to predict which people do and do not have the disorder. Vocal cord nodules are lesions that develop in the larynx, usually caused by patterns of voice misuse such as belting out songs or yelling. The features came from a dataset of about 100 subjects, each with roughly a week of voice-monitoring data comprising several billion samples, captured by a small accelerometer sensor mounted on the subjects’ necks, the researchers explain. In experiments testing the new model’s performance, it identified patients with and without vocal cord nodules with high accuracy.
As noted by the MIT team, their model made accurate risk predictions for the study patients without a large set of hand-labelled data. “It’s becoming increasingly easy to collect long time-series datasets. But you have physicians that need to apply their knowledge to labelling the dataset,” says lead author Jose Javier Gonzalez Ortiz, a PhD student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to remove that manual part for the experts and offload all feature engineering to a machine-learning model.”
Traditionally, experts manually identify the features that may matter for a model detecting a certain disease. That curation helps prevent a common machine-learning problem in healthcare: overfitting. Overfitting occurs when, during training, a model “memorises” subject-specific data instead of learning only the clinically relevant features; in testing, such models often fail to discern similar patterns in previously unseen subjects.
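One standard safeguard against this kind of subject-level overfitting is to split data by subject rather than by sample, so no individual appears in both training and test sets. Below is a minimal sketch of that idea (the function name and the toy data are hypothetical, not from the study):

```python
import random

def subject_wise_split(samples, test_fraction=0.3, seed=0):
    """Split samples so that no subject appears in both train and test.

    `samples` is a list of (subject_id, features) pairs. Evaluating on
    held-out *subjects*, not just held-out samples, reveals whether a
    model learned clinically relevant patterns or merely memorised
    individuals -- the overfitting failure described above.
    """
    subjects = sorted({subject_id for subject_id, _ in samples})
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_test = max(1, int(len(subjects) * test_fraction))
    test_subjects = set(subjects[:n_test])
    train = [s for s in samples if s[0] not in test_subjects]
    test = [s for s in samples if s[0] in test_subjects]
    return train, test

# Hypothetical data: three recordings each from four subjects.
data = [(sid, [0.0]) for sid in ("a", "b", "c", "d") for _ in range(3)]
train, test = subject_wise_split(data)
train_ids = {sid for sid, _ in train}
test_ids = {sid for sid, _ in test}
assert train_ids.isdisjoint(test_ids)  # no subject leaks across the split
```

A model that scores well only when the same subjects appear on both sides of the split has likely memorised individuals; a subject-disjoint split is what lets researchers claim, as the MIT team does, that performance reflects clinically relevant patterns.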
The MIT model, the researchers point out, performed accurately in both training and testing, indicating it is learning clinically relevant patterns from the data rather than subject-specific information. Importantly, the model can be adapted to learn the patterns of any disease or condition. But the ability to detect the daily voice-usage patterns associated with vocal cord nodules is itself an important step towards better methods to prevent, diagnose, and treat the disorder, the researchers explain. That could include designing new ways to identify and alert people to potentially damaging vocal behaviours, adds the MIT team, which included researchers from Massachusetts General Hospital’s Center for Laryngeal Surgery and Voice Rehabilitation.