Identification and evaluation of Parkinson's disease using artificial intelligence and nocturnal breathing signals
- Corresponding Author:
- Carol Green
Editorial Office, International Journal of Clinical Skills, London, United Kingdom
- E-mail: ijclinicalskill@journalres.com
Received: 16-August-2022, Manuscript No. ijocs-22-75636; Editor assigned: 19-August-2022, PreQC No. ijocs-22-75636 (PQ); Reviewed: 21-August-2022, QC No. ijocs-22-75636 (Q); Revised: 25-August-2022, Manuscript No. ijocs-22-75636 (R); Published: 28-August-2022, DOI: 10.37532/1753-0431.2022.16(8).259
Abstract
At present, there are no reliable biomarkers for identifying Parkinson's disease (PD) or monitoring its progression. Here, we developed an artificial intelligence (AI) model that identifies PD and tracks its progression from nocturnal breathing signals. The model was evaluated on a large dataset of 7,671 individuals, drawn from several hospitals in the United States as well as multiple public datasets. On held-out and external test sets, the AI model detects PD with an area under the curve of 0.90 and 0.85, respectively. The AI model can also estimate PD severity and progression in accordance with the Movement Disorder Society Unified Parkinson's Disease Rating Scale. The model employs an attention layer that allows its predictions to be interpreted with respect to sleep and electroencephalogram features. Additionally, the model can assess PD touchlessly in the home environment by extracting the breathing signal from radio waves that reflect off a person's body during sleep. Our study provides preliminary evidence that this AI model may be helpful for risk assessment prior to clinical diagnosis and demonstrates the feasibility of objective, noninvasive, at-home evaluation of PD.
Keywords
Parkinson's Disease, Nocturnal breathing, Artificial intelligence, Electroencephalogram
Introduction
PD is the world's fastest-growing neurological condition. As of 2020, more than 1 million Americans lived with PD, placing a $52 billion annual economic burden on society. No medication has been able to halt or reverse the disease's course thus far [1]. The absence of reliable diagnostic biomarkers is a major obstacle to the development of PD medications and to disease management. The disease is typically diagnosed on the basis of clinical symptoms, related mainly to motor functions such as tremor and rigidity. However, motor symptoms typically do not appear until years after disease onset, which delays diagnosis. New diagnostic biomarkers are therefore urgently needed, especially ones that can detect the disease before it has progressed too far [2]. Additionally, there are no efficient indicators for determining how the disease progresses over time. Today, PD progression is evaluated by the patient themselves or by a clinician using a qualitative rating scale, typically the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS). The MDS-UPDRS is semi-subjective and lacks sufficient sensitivity to detect subtle changes in patient status. As a result, PD clinical studies must extend over several years to report changes in MDS-UPDRS with sufficient statistical confidence, which raises costs and slows development. A few putative PD biomarkers have been studied in the literature; among these, cerebrospinal fluid analysis, blood biochemistry, and neuroimaging show good accuracy [3]. However, these biomarkers are expensive, invasive, and require access to specialised medical facilities, making them unsuitable for routine early diagnosis or continuous tracking of disease progression. As early as 1817, James Parkinson's research noted a connection between PD and respiration.
Later studies further supported this connection by revealing respiratory muscle weakness, sleep-related breathing problems, and degeneration in brainstem regions that control breathing. Because these respiratory symptoms frequently appear years before clinical motor signs, breathing characteristics may be useful for risk assessment prior to clinical diagnosis. Here, we present a novel AI-based method for diagnosing Parkinson's Disease (PD), estimating disease severity, and monitoring the course of the disease over time using nocturnal breathing. The system takes as input one night's worth of breathing signals, which can be gathered with a breathing belt worn on the person's chest or abdomen [4]. Alternatively, the breathing signals can be obtained without wearable technology by emitting a low-strength radio signal and analysing how it reflects off the subject. A crucial aspect of the model's design is an auxiliary task in which it learns to predict the person's quantitative electroencephalogram (qEEG) from nocturnal breathing; this prevents the model from overfitting and aids in interpreting the model's output. Our approach aims to create an objective, unobtrusive, affordable, and repeatable digital biomarker for diagnosis and progression that can be measured in the patient's home [5].
Results
■ Datasets and model training
We use a large and diverse dataset drawn from the Mayo Clinic, the sleep laboratory at Massachusetts General Hospital (MGH), observational PD clinical trials funded by the Michael J. Fox Foundation (MJFF) and the NIH Udall Center, an observational study conducted by the Massachusetts Institute of Technology (MIT), and public sleep datasets from the National Sleep Research Resource, including the Sleep Heart Health Study (SHHS) and the Osteoporotic Fractures in Men Study (MrOS). The datasets fall into two groups: breathing belt datasets and wireless datasets. The first group is derived from polysomnography (PSG) sleep studies and uses a breathing belt to monitor the subject's breathing throughout the night. The second group captures nocturnal breathing contactlessly with a radio device; the radio sensor is installed in the subject's bedroom, where it analyses radio reflections from the surroundings to extract the breathing signal. The breathing belt datasets include only one or two nights per participant and contain neither MDS-UPDRS nor Hoehn and Yahr (H&Y) scores. The wireless datasets, on the other hand, contain longitudinal data for up to a year as well as MDS-UPDRS and H&Y scores, enabling us to validate the model's predictions of PD severity and its progression. Because some of the people in the wireless datasets are rather young (for example, in their 20s or 30s), we restrict testing on these datasets to the PD patients and their age-matched control subjects. Any missing MDS-UPDRS or H&Y scores for control participants are filled with the mean value of the control group.
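This imputation step can be expressed as a short data-preparation routine. The snippet below is a minimal sketch with hypothetical column names (`group`, `mds_updrs`, `hoehn_yahr`); it is not the authors' code.

```python
# Minimal sketch (hypothetical column names, not the authors' code):
# fill missing MDS-UPDRS and H&Y scores for control subjects with the
# mean value of the control group, as described above.
import pandas as pd

def impute_control_scores(df: pd.DataFrame,
                          cols=("mds_updrs", "hoehn_yahr")) -> pd.DataFrame:
    out = df.copy()
    controls = out["group"] == "control"
    for col in cols:
        control_mean = out.loc[controls, col].mean()   # mean over controls only
        out.loc[controls, col] = out.loc[controls, col].fillna(control_mean)
    return out
```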
The neural network was never tested on the same subjects that were used for training. For PD identification, we used k-fold cross-validation (k = 4), and for severity prediction, we used leave-one-out validation. We also evaluated cross-institution prediction by training and testing the model on data from different medical institutions. The Mayo Clinic data was kept as an external test set: it was never accessed during development or validation and was used only once, for the final evaluation.
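To illustrate the subject-disjoint evaluation described above, the sketch below performs subject-level k-fold cross-validation (k = 4) with scikit-learn's GroupKFold so that nights from one subject never appear in both the training and test folds. The `model_factory`, feature matrix, and label arrays are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed scikit-learn-style model, not the authors' code):
# subject-level k-fold cross-validation so that nights from the same subject
# never appear in both the training and the test fold.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.metrics import roc_auc_score

def subject_level_cv(model_factory, X, y, subject_ids, k=4):
    """X: per-night features, y: PD labels, subject_ids: one id per night."""
    aucs = []
    for train_idx, test_idx in GroupKFold(n_splits=k).split(X, y, groups=subject_ids):
        model = model_factory()                        # fresh model per fold
        model.fit(X[train_idx], y[train_idx])
        scores = model.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], scores))
    return float(np.mean(aucs)), aucs
```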
■ Evaluation of PD diagnosis
We further explored whether combining multiple nights from the same person might increase accuracy. Because each subject in the wireless datasets has multiple nights (mean (s.d.) 61.3 (42.5)), we compute the model prediction score for each night. The PD prediction score ranges from 0 to 1, and a person is deemed to have PD if their score exceeds 0.5. For the final diagnosis, we use the median PD score across each participant's nights. Next, we determine how many nights are required to reach high test-retest reliability. Using the wireless datasets, we average the predictions across several nights within a time window and compute the test-retest reliability. The results show that reliability increases when we use multiple nights from the same subject, and only 12 nights are required to reach 0.95 (95% CI (0.92, 0.97)).
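The per-subject aggregation described above reduces to a median-and-threshold rule. The snippet below is a minimal sketch with hypothetical column names, not the authors' code.

```python
# Minimal sketch (hypothetical column names): aggregate nightly PD prediction
# scores into a single per-subject diagnosis by taking the median score across
# nights and thresholding it at 0.5.
import pandas as pd

def diagnose_subjects(nightly: pd.DataFrame, threshold: float = 0.5) -> pd.Series:
    """nightly: columns ['subject_id', 'pd_score'], one row per recorded night."""
    median_score = nightly.groupby("subject_id")["pd_score"].median()
    return (median_score > threshold).rename("predicted_pd")
```

Using the median rather than the mean keeps the per-subject diagnosis robust to occasional outlier nights.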
■ Generalization to external test cohort
We validated our AI model on an external test dataset (n = 1,920 nights from 1,920 subjects, of which 644 have PD) from an independent hospital that was not involved in model development, in order to evaluate the generalizability of our model across institutions with different data collection protocols and patient populations. Our model reached an AUC of 0.851.
These results show that our model can generalise to data sources from institutions that were not seen during training. We also assessed cross-institution prediction by testing the model on data from one institution while training it only on data from the other institutions. For breathing belt data, the model obtained a cross-institution AUC of 0.857 on SHHS and 0.874 on MrOS. For wireless data, cross-institution performance was 0.892 on MJFF, 0.884 on Udall, 0.974 on MGH, and 0.916 on MIT. These findings demonstrate that the model remains highly accurate on data from sources it was not exposed to during training; its accuracy therefore cannot be attributed to the exploitation of institution-related information as a proxy for the disease.
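The cross-institution evaluation described above corresponds to a leave-one-institution-out protocol. The sketch below shows that protocol under assumed data structures and a scikit-learn-style model; it is not the authors' pipeline.

```python
# Minimal sketch (assumed data layout, not the authors' code): leave-one-
# institution-out evaluation. For each institution, train on all other
# institutions and report the AUC on the held-out institution.
import numpy as np
from sklearn.metrics import roc_auc_score

def cross_institution_auc(model_factory, X, y, institutions):
    """institutions: array of institution labels, one entry per night."""
    results = {}
    for inst in np.unique(institutions):
        test_mask = institutions == inst
        model = model_factory()                        # fresh model per held-out institution
        model.fit(X[~test_mask], y[~test_mask])
        scores = model.predict_proba(X[test_mask])[:, 1]
        results[inst] = roc_auc_score(y[test_mask], scores)
    return results
```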
■ Evaluation of PD severity prediction
The MDS-UPDRS is currently the most widely used tool for assessing the severity of PD, with higher scores indicating more profound impairment. Assessing the MDS-UPDRS requires effort from both patients and clinicians: patients must physically visit the clinic, and evaluations are carried out by trained clinicians who rate symptoms using semi-subjective criteria. We test our model's capacity to generate a PD severity score that closely tracks the MDS-UPDRS by examining the patients' nocturnal breathing at home. We use the wireless dataset, for which MDS-UPDRS evaluations are available and each patient has many nights of measurements (n = 53 subjects: 25 PD subjects with a total of 1,263 nights and 28 controls with a total of 1,338 nights). We compare the baseline MDS-UPDRS scores with the model's median prediction, calculated across the nights starting one month after the subject's baseline visit.
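To make the severity comparison concrete, the sketch below computes each subject's median predicted MDS-UPDRS over the eligible nights and correlates it with the clinician-scored baseline MDS-UPDRS. The column names, the 30-day cut-off, and the use of a Pearson correlation are illustrative assumptions rather than the study's exact protocol.

```python
# Minimal sketch (hypothetical column names and analysis choices): compare each
# subject's median predicted MDS-UPDRS, computed over nights recorded from one
# month after the baseline visit onward, with the clinician-scored baseline score.
import pandas as pd
from scipy.stats import pearsonr

def severity_agreement(nightly: pd.DataFrame, baseline: pd.DataFrame):
    """nightly:  ['subject_id', 'days_since_baseline', 'predicted_updrs']
    baseline: ['subject_id', 'clinician_updrs']"""
    eligible = nightly[nightly["days_since_baseline"] >= 30]   # assumed window
    predicted = eligible.groupby("subject_id")["predicted_updrs"].median()
    merged = baseline.set_index("subject_id").join(predicted).dropna()
    r, p = pearsonr(merged["clinician_updrs"], merged["predicted_updrs"])
    return r, p
```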
■ PD risk assessment
Because respiration and sleep are affected early in the progression of PD, we believe that our AI model may be able to identify people with PD before they are formally diagnosed. To assess this capability, we used the MrOS dataset, which contains breathing recordings and PD diagnoses from two visits separated by roughly six years. We refer to the participants who had developed PD by their second visit but not by their first as the "prodromal PD group" (n = 12). As the "control group", we selected individuals from the MrOS dataset who did not receive a PD diagnosis at either the initial visit or the second visit six years later.
■ PD disease progression
PD progression is currently assessed with the MDS-UPDRS, which is semi-subjective and has insufficient sensitivity to detect subtle, gradual changes in patient status. As a result, PD clinical trials must continue for a number of years before changes in MDS-UPDRS can be reported with appropriate statistical certainty, which poses a significant obstacle for medication development. A progression marker that detects statistically significant changes in disease status over brief periods could shorten PD clinical trials. To determine the change in the predicted MDS-UPDRS, we computed the median MDS-UPDRS prediction over the month following the baseline visit, the month following the month-6 visit, and the month following the month-12 visit. The median at baseline was then subtracted from the median at month six, and the same procedure was used to compute the difference in prediction between month 12 and baseline. The model's estimates of changes in MDS-UPDRS over these time periods are statistically significant, whereas the clinician-scored MDS-UPDRS changes are not. A significant factor in the model's ability to reach statistical significance for progression analysis is its ability to combine measurements from many nights. Any measurement contains some noise, whether it is the model-predicted MDS-UPDRS or the clinician-scored MDS-UPDRS; by combining many samples, one can lower the noise and increase sensitivity to disease progression over a brief period.
This is feasible for the model-predicted MDS-UPDRS because the measurement can be repeated every night without imposing any additional burden on the patients. The same cannot be done for the clinician-scored MDS-UPDRS, because it is not practicable to ask the patient to return to the clinic every day to repeat the assessment. Restricted to a single night, the model-predicted MDS-UPDRS would likewise fail to track progression, just as the clinician-scored MDS-UPDRS fails to reach statistical significance.
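As a sketch of the progression analysis described above, the snippet below computes each subject's median predicted MDS-UPDRS in the month following the baseline visit and the month following the month-6 visit, then tests the paired change across subjects. The column names, window boundaries, and choice of a Wilcoxon signed-rank test are illustrative assumptions; the text does not specify the exact statistical test used.

```python
# Minimal sketch (hypothetical columns and assumed test): change in per-subject
# median predicted MDS-UPDRS between the month after baseline and the month
# after the month-6 visit, with a paired Wilcoxon signed-rank test across subjects.
import pandas as pd
from scipy.stats import wilcoxon

def window_median(nightly: pd.DataFrame, start_day: int, end_day: int) -> pd.Series:
    """nightly: ['subject_id', 'days_since_baseline', 'predicted_updrs']."""
    window = nightly[nightly["days_since_baseline"].between(start_day, end_day)]
    return window.groupby("subject_id")["predicted_updrs"].median()

def progression_month6(nightly: pd.DataFrame):
    baseline = window_median(nightly, 0, 30)       # month after baseline visit (assumed)
    month6 = window_median(nightly, 180, 210)      # month after month-6 visit (assumed)
    paired = pd.concat({"baseline": baseline, "month6": month6}, axis=1).dropna()
    change = paired["month6"] - paired["baseline"]
    stat, p_value = wilcoxon(change)               # paired test on per-subject changes
    return change.mean(), p_value
```

Averaging many nights per window is what shrinks the noise in each per-subject estimate, which is why the model-predicted score can reach significance where a single clinic visit cannot.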
Discussion
This study provides evidence that AI can recognise PD patients from their nocturnal breathing patterns and can accurately estimate the severity and course of the disease. Importantly, we were able to validate our results in a separate external PD cohort. The outcomes point to a promising new digital biomarker for Parkinson's disease with a number of useful qualities. It functions as a biomarker for both diagnosis and progression. It is objective, so it is affected neither by the subjectivity of the patient nor by that of the clinician. It is simple and non-intrusive to measure in the subject's home. Furthermore, by using wireless signals to track respiration, measurements can be taken every night without any physical contact.
Our findings have a number of implications. First, by potentially decreasing the cost and length of PD clinical trials, our approach could speed up the development of new drugs. The typical cost and duration of PD drug development are roughly $1.3 billion and 13 years, respectively; as a result, few pharmaceutical companies are interested in developing novel treatments for PD. Because PD is a slowly progressing disease and present tools are imprecise and unable to detect minor changes, it takes several years for those tools to identify progression. Our AI-based biomarker, in contrast, may be more sensitive to PD's gradual changes. This can help cut costs, expedite progress, and shorten clinical trials. Because measurements can be taken at home at no expense to the patients, our approach can also increase patient recruitment and decrease attrition. Second, roughly 40% of those with PD are not currently being treated by a PD specialist. This is because people with PD are geographically dispersed and have difficulty travelling to specialist centres due to old age and poor mobility, whereas PD specialists are concentrated in medical centres in urban areas.
Our study also has several limitations. PD is a non-homogeneous disease with numerous subtypes, and we did not investigate whether our technique is equally effective for each subtype. Another limitation is that the preclinical diagnosis and progression analyses were validated in only a small number of subjects; future research involving larger populations will be necessary to further support those findings. Additionally, although we demonstrated that our system can distinguish between Parkinson's disease and Alzheimer's disease, we did not test whether our model can distinguish Parkinson's disease from other neurological disorders. Finally, even though we evaluated the model across institutions and on independent datasets, further research could draw on a wider variety of institutions and datasets.
References
- Maltseva N, Borzova E, Fomina D, et al. Cold urticaria: What we know and what we do not know. Allergy 76, 1077-1094 (2021). [Google Scholar] [CrossRef]
- Maurer M, Metz M, Bindslev-Jensen C, et al. Definition, aims, and implementation of GA(2)LEN urticaria centers of reference and excellence. Allergy 71, 1210-1218 (2016). [Google Scholar] [CrossRef]
- Ramos-Casals M, Stone JH, Cid MC, et al. The cryoglobulinaemias. Lancet 379, 348-360 (2012). [Google Scholar] [CrossRef]
- Bracken SJ, Abraham S, MacLeod AC. Autoimmune theories of chronic spontaneous urticaria. Front Immunol 10, (2019). [Google Scholar] [CrossRef]
- Koeppel MC, Bertrand S, Abitan R, et al. Urticaria caused by cold. Ann Dermatol Venereol 123, 627-632 (1996).