Published on 01.11.2021 in Vol 23, No 11 (2021): November

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/26524.
Noncontact Sleep Monitoring With Infrared Video Data to Estimate Sleep Apnea Severity and Distinguish Between Positional and Nonpositional Sleep Apnea: Model Development and Experimental Validation

Original Paper

1Kite Research Institute, Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada

2Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada

3Vector Institute, Toronto, ON, Canada

4Department of Computer Science, University of Toronto, Toronto, ON, Canada

*these authors contributed equally

Corresponding Author:

Babak Taati, PhD

Kite Research Institute

Toronto Rehabilitation Institute

University Health Network

550 University Ave

Toronto, ON, M5G 2A2

Canada

Phone: 1 416 597 3422 ext 7972

Email: babak.taati@uhn.ca


Background: Sleep apnea is a respiratory disorder characterized by frequent breathing cessation during sleep. Sleep apnea severity is determined by the apnea-hypopnea index (AHI), the hourly rate of respiratory events. In positional sleep apnea, the AHI is higher in the supine sleeping position than in other sleeping positions. Positional therapy is a behavioral strategy (eg, wearing an item that encourages sleeping in the lateral position) to treat positional apnea. The gold standard for diagnosing sleep apnea and determining whether it is positional is polysomnography; however, this test is inconvenient, expensive, and has a long waiting list.

Objective: The objective of this study was to develop and evaluate a noncontact method to estimate sleep apnea severity and to distinguish positional versus nonpositional sleep apnea.

Methods: A noncontact deep-learning algorithm was developed to analyze infrared video of sleep for estimating AHI and to distinguish patients with positional vs nonpositional sleep apnea. Specifically, a 3D convolutional neural network (CNN) architecture was used to process movements extracted by optical flow to detect respiratory events. Positional sleep apnea patients were subsequently identified by combining the AHI information provided by the 3D-CNN model with the sleeping position (supine vs lateral) detected via a previously developed CNN model.

Results: The algorithm was validated on data of 41 participants, including 26 men and 15 women with a mean age of 53 (SD 13) years, BMI of 30 (SD 7), AHI of 27 (SD 31) events/hour, and sleep duration of 5 (SD 1) hours; 20 participants had positional sleep apnea, 15 participants had nonpositional sleep apnea, and the positional status could not be discriminated for the remaining 6 participants. AHI values estimated by the 3D-CNN model correlated strongly and significantly with the gold standard (Spearman correlation coefficient 0.79, P<.001). Individuals with positional sleep apnea (based on an AHI threshold of 15) were identified with 83% accuracy and an F1-score of 86%.

Conclusions: This study demonstrates the possibility of using a camera-based method for developing an accessible and easy-to-use device for screening sleep apnea at home, which can be provided in the form of a tablet or smartphone app.

J Med Internet Res 2021;23(11):e26524

doi:10.2196/26524




Introduction

Sleep apnea is a chronic respiratory disorder characterized by frequent reductions in respiratory airflow during sleep. A cessation of airflow lasting more than 10 seconds is called an apnea, whereas a partial reduction in airflow of more than 30% for at least 10 seconds, accompanied by a drop of more than 3% in blood oxygen saturation or an arousal, is called a hypopnea. Sample traces of chest movements during normal breathing, hypopnea, and apnea are shown in Figure 1. The apnea-hypopnea index (AHI) is an indicator of the severity of sleep apnea, measuring the hourly occurrence rate of apneas and hypopneas [1]. Untreated sleep apnea raises the risk of hypertension, heart disease, and stroke [2].
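In equation form (notation ours, not from the original paper):

```latex
\mathrm{AHI} = \frac{N_{\text{apnea}} + N_{\text{hypopnea}}}{T_{\text{sleep}}} \quad \text{[events/hour]}
```

For example, 135 respiratory events over 5 hours of sleep give an AHI of 27 events/hour.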

Figure 1. Sample sum of chest and abdomen movements in (A) apnea, (B) hypopnea, and (C) normal breathing.

Positional sleep apnea refers to cases in which the AHI in the supine sleeping position is at least 50% higher than that in nonsupine sleeping positions [3]. Studies have shown that changing to a lateral sleeping position can decrease the AHI for patients with positional sleep apnea [4]. This behavioral intervention is known as “positional therapy” and is an effective noninvasive, nonpharmaceutical treatment for those with positional sleep apnea [5].
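A minimal sketch of this criterion (the function and its inputs are our illustration, not from the paper):

```python
def is_positional(supine_ahi: float, nonsupine_ahi: float) -> bool:
    """Positional sleep apnea: the supine AHI is at least 50% higher
    than the nonsupine AHI [3]."""
    return supine_ahi >= 1.5 * nonsupine_ahi

# Example with this cohort's mean values (Table 1): supine AHI 41,
# lateral AHI 21 -> 41 >= 1.5 * 21 = 31.5, so positional on average.
print(is_positional(41, 21))  # True
```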

The current clinical approach to diagnosing sleep apnea and determining whether it is positional is based on polysomnography (PSG). However, PSG requires connecting more than 20 sensors to the user, which is inconvenient. A trained sleep technician manually analyzes the recorded PSG signals and annotates the sleep position overnight. Moreover, PSG is expensive (>US $400) and has a long waiting time in some areas (4-36 months in Canada [6]). As a result, up to 85% of the population at risk of sleep apnea remains undiagnosed [7]. It is therefore useful to investigate screening technologies that could identify individuals at high risk via a simpler test. Increasing access to testing, diagnosis, and subsequent treatment could improve patients’ quality of life by decreasing hypertension and sleepiness, and could also reduce overall health care costs [8-10].

Researchers have developed several easy-to-use, convenient, and accessible methods for sleep apnea monitoring. Merchant et al [11] developed a skin-adhesive patch recording nasal pressure, blood oxygen saturation, pulse rate, respiratory effort, sleep time, and body position to estimate the AHI. Ayas et al [12] evaluated the performance of a wrist-worn device utilizing a peripheral arterial tonometer, actigraphy, and arterial oxygen saturation to diagnose sleep apnea. Varon et al [13] introduced a method for the automatic detection of sleep apnea from single-lead electrocardiogram by training a least-squares support vector machine classifier on features extracted from the electrocardiogram signal. Several studies estimated the AHI and respiratory events by analyzing tracheal sounds or tracheal movements, or tracheal sounds combined with oxygen saturation [14-18]. Lévy et al [19] utilized pulse oximetry to quantify arterial oxygen saturation and to diagnose sleep apnea.

Although these methods are more convenient than PSG, sensors attached to the body can disrupt the user’s regular sleep pattern. Therefore, researchers have continued to develop noncontact methods to screen individuals at risk of sleep apnea. For example, we previously developed a deep-learning model to distinguish between different types of apnea; however, as that model could not detect events, ground truth labels were used for this purpose [20]. Jakkaew et al [21] used a thermal camera to estimate breathing rate and body movements; however, they did not analyze the breathing pattern to identify sleep apnea, and their method was not designed to detect sleep position. Deng et al [22] used six active infrared cameras and a Kinect sensor to detect body position and breathing pattern (abnormal vs normal breathing). However, they did not evaluate their method in a clinical environment to demonstrate its performance in detecting sleep apnea or positional sleep apnea. In addition, a setup with six cameras and a Kinect is difficult to deploy in clinical or home settings, which hinders large-scale adoption. Davidovich et al [23] developed a framework that extracts the breathing pattern from a piezoelectric sensor placed under the patient’s mattress via time- and frequency-domain features and then calculates the AHI. Nandakumar et al [24] used a smartphone to emit inaudible sound waves and analyzed their echoes from the user’s body to detect respiratory events. However, these noncontact methods did not report cross-validation performance and, owing to the restrictions of their modalities, cannot identify patients with positional sleep apnea, which is crucial for proper treatment.

To identify patients at risk of sleep apnea and to distinguish those with positional sleep apnea, an alternative is to use computer vision and machine-learning techniques. Here, we propose a noncontact algorithm that analyzes infrared video captured from a participant during sleep to estimate the AHI and to distinguish patients with positional vs nonpositional sleep apnea. Specifically, we used a 3D convolutional neural network (CNN) to analyze movements in infrared videos, detect apneas, and estimate the AHI. In an experimental evaluation, this model outperformed a baseline model that previously reported state-of-the-art results in noncontact AHI estimation [25]. We also combined this technique with another CNN-based approach that detects the sleeping position [26] to calculate the AHI in different sleeping positions and to identify patients with positional sleep apnea. The methods and results developed in this study represent the first noncontact approach to automatically distinguish positional from nonpositional sleep apnea.


Methods

Data Collection

The University Health Network Research Ethics Board approved this study (approval number 13-7210-DE). Participants aged 18 to 85 years and without a history of cardiovascular or renal diseases were recruited for this study. Participants were recruited among patients referred for sleep diagnosis at the sleep laboratory of the Toronto Rehabilitation Institute, University Health Network. All participants signed a written consent form before taking part in the study. There were no limitations on blanket usage, movement, or clothing worn during sleep.

Simultaneously with overnight PSG (Embla s4500), which was used for clinical sleep diagnosis, infrared video of each participant was recorded at a resolution of 640×480 pixels at 30 frames per second. Video data were collected in a single overnight session lasting 5 (SD 1) hours and synchronized with the PSG signals throughout the night.

The infrared camera (Point Grey Firefly MV, 0.3 MP, FMVU-03MTM) was mounted approximately 1.5 meters above the bed. For illumination, a separate infrared light source (Raytec RM25-F-50) was mounted on the ceiling. A schematic of the camera setup and a sample frame are shown in Figure 2.

Figure 2. Data collection setup and a sample anonymized image frame on the right. IR: infrared.

Respiratory events (apneas and hypopneas) and sleep positions (supine, lateral) of the participant throughout the night were annotated by a trained sleep technician who was blinded to the study. Since the video data were synchronized with PSG data, once the technician annotated the PSG data, all video frames were automatically labeled.

AHI Estimation

The video frames were first downsampled from 30 Hz to 2 Hz to reduce the computational cost. As the breathing frequency during sleep is approximately 0.5 Hz, the reduced sampling rate of 2 Hz still exceeds the Nyquist rate (1 Hz) by a factor of 2. To track respiratory movements in the infrared video frames, a CNN-based dense optical flow model (FlowNet 2.0 [27]) was used, which provides accurate optical flow at a fast frame rate. Optical flow measures the movement of each pixel in the x (side to side) and y (up and down) directions from one video frame to the next. The minimum duration of an apnea is 10 seconds, which translates to 20 (or, in the worst case, 19) video frames at 2 Hz. To detect respiratory events, a 3D-CNN was therefore trained on a sliding window of 18 optical flow images (ie, resulting from 19 consecutive video frames). Infrared videos were captured at a resolution of 640×480 pixels, yielding optical flow images of size 640×480×2. The architecture of the 3D-CNN, trained on input tensors of size 640×480×2×18, is shown in Multimedia Appendix 1. Sample input and dense optical flow images are shown in Figure 3.
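As a sketch of how these sliding-window input tensors might be assembled, assuming FlowNet 2.0 has already produced one 480×640×2 flow image per pair of consecutive downsampled frames (the helper name and exact array layout are our assumptions, not the paper's):

```python
import numpy as np

def build_windows(flow, window=18, stride=1):
    """Stack consecutive dense optical flow images into 3D-CNN inputs.

    flow: array of shape (T, 480, 640, 2); one flow image per pair of
          consecutive 2 Hz frames (eg, from FlowNet 2.0).
    stride=1 frame corresponds to the paper's 0.5-second test-time stride.
    Returns an array of shape (n_windows, 480, 640, 2, window), matching
    the 640x480x2x18 input tensors described above.
    """
    flow = np.asarray(flow)
    n = (len(flow) - window) // stride + 1
    windows = np.stack([flow[i * stride : i * stride + window] for i in range(n)])
    return windows.transpose(0, 2, 3, 4, 1)  # move time to the last axis
```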

Figure 3. Sample input and dense optical flow images.

The 3D-CNN was trained with a class-weighted cross-entropy loss (weight 5 for events and 1 for normal breathing) and the Adam optimizer, with an initial learning rate of 0.001 and a batch size of 25 for 25,000 epochs. The network had 8,284,265 parameters in total, of which 8,281,829 were trainable and 2436 were nontrainable. Depending on sleep apnea severity, respiratory events are much less frequent than normal breathing; thus, the data sets were highly imbalanced. At training time, to balance the data set, stride lengths of 0.5 and 15 seconds were used for apneas and normal breathing, respectively. At test time, a stride length of 0.5 seconds was used to predict the binary label of normal breathing vs apnea. The decision threshold of the trained binary classifier (event vs normal) was set to 0.1 to maximize the area under the curve on the training data.
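A minimal PyTorch sketch of this training configuration; the small stand-in network and dummy tensors are ours (the actual architecture is given in Multimedia Appendix 1), and only the loss weights, optimizer, learning rate, batch size, and 0.1 decision threshold come from the text above:

```python
import torch
import torch.nn as nn

# Stand-in 3D-CNN; the real architecture is in Multimedia Appendix 1.
# Input layout: (batch, channels=2 flow directions, 18 time steps, H, W).
model = nn.Sequential(
    nn.Conv3d(2, 8, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 2),
)

# Class-weighted cross-entropy: weight 5 for events, 1 for normal breathing.
criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 5.0]))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # initial LR 0.001

# Dummy batch of 25 windows (spatially downscaled here; the paper uses
# full-resolution tensors of size 640x480x2x18).
x = torch.randn(25, 2, 18, 120, 160)
y = torch.randint(0, 2, (25,))  # 0: normal breathing, 1: respiratory event

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()

# Decision threshold of 0.1 on P(event), chosen to maximize training AUC.
with torch.no_grad():
    pred_event = torch.softmax(model(x), dim=1)[:, 1] > 0.1
```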

To estimate the AHI, a linear regression model was trained on the following three features: (1) the number of detected events, (2) the total duration of detected events longer than 9 seconds divided by sleep duration, and (3) sleep duration.
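A sketch of this regression step, with made-up feature values purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Per-participant features; the numbers below are illustrative placeholders:
#   (1) number of detected events,
#   (2) total duration of detected events > 9 s divided by sleep duration,
#   (3) sleep duration (hours).
X = np.array([
    [135, 0.12, 5.0],
    [ 40, 0.03, 6.0],
    [210, 0.20, 4.5],
])
y = np.array([27.0, 7.0, 46.0])  # PSG-derived AHI targets (illustrative)

reg = LinearRegression().fit(X, y)
estimated_ahi = reg.predict(X)
```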

The performance of the 3D-CNN was compared against another approach developed by our group, which previously demonstrated state-of-the-art performance in noncontact vision-based estimation of the AHI [25]. A brief overview of this baseline approach is presented here. To extract respiratory-related motion, the movements of 768 uniformly scattered points in the video frames were tracked using sparse optical flow. Principal component analysis (PCA) was applied to the extracted point trajectories over 30-second sliding windows with a stride of 1 second to compute the predominant movements, which were associated with breathing during sleep [28]. This approach was previously validated by Zhu et al [29] and was shown to accurately track breathing rate in overnight infrared videos. To identify respiratory events from the respiratory-related motion, three features were extracted: the respiratory rate, the average power of respiratory movement, and the total displacement of tracked points. Compared with normal breathing, the respiratory rate drops during respiratory events. To extract the respiratory rate, the energy of the extracted respiratory movements was calculated using the fast Fourier transform over a 10-second window, and the frequency with the highest energy was taken as the respiratory rate. The second feature, the average power of respiratory movement, decreases during a respiratory event and was computed as the mean of the absolute squares of respiratory displacement within a 10-second window. The last feature, total displacement, indicates nonrespiratory movement (eg, arousals) and was determined by summing all of the raw optical flow movements (before applying PCA). Using these three features, a random forest binary classifier with 50 trees was trained to detect sleep apnea events (apneas and hypopneas). Finally, to estimate the AHI, a linear regression model was trained using two features: (1) the number of predicted sleep apnea events normalized by the estimated events’ duration and (2) the estimated events’ duration normalized by the total sleep duration obtained from the total recording time.
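A sketch of the baseline's per-window feature extraction and classifier, under our assumptions about signal layout and sampling rate (the paper does not specify these implementation details):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 30.0  # sampling rate of the motion signals (our assumption)

def baseline_features(resp, raw_disp, fs=FS):
    """Three features over a 10-second window, as in the baseline [25].

    resp:     respiratory movement signal (predominant PCA component).
    raw_disp: per-sample summed raw optical flow displacement (pre-PCA).
    """
    n = int(10 * fs)
    seg = np.asarray(resp[:n])
    spectrum = np.abs(np.fft.rfft(seg)) ** 2        # energy per frequency
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    resp_rate = freqs[np.argmax(spectrum[1:]) + 1]  # peak energy, skip DC
    power = np.mean(np.abs(seg) ** 2)               # drops during events
    displacement = np.sum(raw_disp[:n])             # nonrespiratory motion
    return [resp_rate, power, displacement]

# Binary event classifier with 50 trees, as described above.
clf = RandomForestClassifier(n_estimators=50)
```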

Detecting Positional vs Nonpositional Sleep Apnea

For sleep position detection, a previously developed algorithm [26] was used. This method estimates body position (supine vs lateral) from a video frame using a CNN. Sample supine and lateral images are shown in Figure 4. This position detector was applied to the first video frame of each video. After each large movement (detected by thresholding the total displacement of tracked feature points extracted by optical flow over 1 second), the detector was used again to estimate the new sleeping position. As a result, a body position (supine vs lateral) was assigned to each video frame during the entire sleeping period. Once respiratory events and their associated sleep positions were detected, 6 features were calculated per person: (1) number of detected events in the supine position, (2) number of detected events in the lateral position, (3) total recording time in the supine position, (4) total recording time in the lateral position, (5) supine AHI, and (6) lateral AHI. These features were then used to train a binary random forest classifier with three trees to distinguish between positional and nonpositional sleep apnea patients.
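A sketch of the per-person feature computation and classifier (function and variable names are ours):

```python
from sklearn.ensemble import RandomForestClassifier

def positional_features(n_events_supine, n_events_lateral,
                        hours_supine, hours_lateral):
    """The six per-person features listed above. Requires nonzero time in
    both positions (the 6 one-position sleepers were excluded)."""
    supine_ahi = n_events_supine / hours_supine
    lateral_ahi = n_events_lateral / hours_lateral
    return [n_events_supine, n_events_lateral,
            hours_supine, hours_lateral, supine_ahi, lateral_ahi]

# Binary positional vs nonpositional classifier with 3 trees, per the text.
clf = RandomForestClassifier(n_estimators=3)
```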

Figure 4. Sample supine (left) and lateral (right) frames.

Validation

Leave-one-person-out cross-validation was used to evaluate the performance of AHI estimation as well as the performance of positional vs nonpositional sleep apnea detection algorithms. Bland-Altman plots and Spearman correlation coefficients were used to evaluate the performance of AHI estimation. Since an AHI of 15 is commonly used as a threshold for screening sleep apnea [30], the algorithm performance on classifying subjects as having sleep apnea was evaluated based on the threshold of AHI=15. Confusion matrices, accuracy, precision, recall, and F1-score measures were used to assess classification performance. The same measures were used to assess the performance of positional vs nonpositional sleep apnea classification.
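A sketch of the evaluation metrics described above (names and dictionary layout are ours):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def evaluate_ahi(est_ahi, psg_ahi, threshold=15):
    """Correlation, Bland-Altman statistics, and screening metrics at the
    AHI >= threshold cutoff; est_ahi holds the leave-one-person-out
    estimates and psg_ahi the gold-standard (PSG) values."""
    est, psg = np.asarray(est_ahi), np.asarray(psg_ahi)
    rho, p = spearmanr(est, psg)
    diff = est - psg
    bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)  # Bland-Altman bias, limits
    y_true, y_pred = psg >= threshold, est >= threshold
    return {
        "spearman_rho": rho, "p_value": p,
        "bias": bias, "limits_of_agreement": loa,
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
    }
```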


Results

Demographic information of the 41 individuals (26 men and 15 women) recruited for this study is shown in Table 1. There were 20 participants with positional sleep apnea, 15 with nonpositional sleep apnea, and 6 who slept in only one position, so their apnea could not be classified as either positional or nonpositional.

Table 1. Participants’ demographic features for apnea-hypopnea index (AHI) estimation (N=41).a

Characteristics | Value, mean (SD)
Age (years) | 53 (13)
BMI (kg/m2) | 30 (7)
Sleep duration (hours) | 5 (1)
Number of changes in body position | 9 (6)
Sleep efficiency (%) | 75 (18)
REMb sleep percentage (%) | 15 (7)
Mean wake heart rate (bpmc) | 68 (16)
Mean REM heart rate (bpm) | 67 (16)
Minimum SaO2d (%) | 82 (9)
Mean SaO2 (%) | 94 (3)
AHI (events/hour) | 27 (31)
Supine AHI (events/hour) | 41 (39)
Lateral AHI (events/hour) | 21 (34)

aParticipants’ information was obtained from the sleep reports of the overnight sleep study annotated by sleep technicians.

bREM: rapid eye movement.

cbpm: beats per minute.

dSaO2: arterial oxygen saturation.

The threshold used in this study for detecting position changes while ignoring small movements (eg, breathing or pulse) was empirically set to 20,000 pixels. The total displacement was calculated by summing the displacements of all optical flow feature points [28] over 1 second and was compared against this threshold.
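A sketch of this movement check (names are ours):

```python
import numpy as np

MOVEMENT_THRESHOLD = 20_000  # pixels; set empirically in this study

def position_changed(displacements_1s):
    """displacements_1s: displacement magnitudes of all tracked optical
    flow feature points accumulated over 1 second. Totals above the
    threshold are treated as position changes; small movements such as
    breathing or pulse stay below it."""
    return float(np.sum(displacements_1s)) > MOVEMENT_THRESHOLD
```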

To evaluate AHI estimation performance, Figure 5 and Figure 6 show scatterplots and Bland-Altman plots comparing the estimated AHI with the PSG-based AHI for both the 3D-CNN model and the baseline model (Zhu et al [25]).

Figure 5. Scatterplots of polysomnography (PSG) apnea-hypopnea index (AHI) vs estimated AHI values. The blue and red lines indicate fitted and unity lines, respectively. CNN: convolutional neural network.
Figure 6. Bland-Altman plots of apnea-hypopnea index (AHI) estimation algorithms. PSG: polysomnography; Est: estimated; CNN: convolutional neural network.

The Spearman correlation coefficients (ρ) for AHI estimation were 0.55 for the baseline and 0.79 for the 3D-CNN approach (P<.001 in both cases). In addition, the Bland-Altman plots indicated that our method outperformed the baseline, with a smaller mean difference (0.3 vs 8.9) and tighter 95% limits of agreement (ie, a smaller 1.96 × SD value: 40.9 vs 56.5). Confusion matrices and performance measures for identifying patients with sleep apnea based on the AHI=15 threshold are shown in Figure 7 and Table 2, respectively. The 3D-CNN approach obtained 83% accuracy and an F1-score of 86%, outperforming the baseline approach, which obtained an accuracy of 73% and an F1-score of 74%.

Figure 7. Confusion matrices for screening patients with sleep apnea based on the apnea-hypopnea index threshold of 15. CNN: convolutional neural network.
Table 2. Performance of models on screening patients with sleep apnea.

Method | Accuracy (%) | Precision (%) | Recall (%) | F1-score (%)
3D-CNNa | 82.93 | 77.78 | 95.45 | 85.71
Baseline (Zhu et al [25]) | 73.17 | 76.19 | 72.73 | 74.42

aCNN: convolutional neural network.

The position detection algorithm estimated body position with 83% accuracy, an F1-score of 83%, 77% precision, and 91% recall. The performance of combining the position detection algorithm with AHI estimation to identify patients with positional sleep apnea is shown in Figure 8. The 3D-CNN model correctly classified 13 of the 20 patients with positional sleep apnea. Performance measures for detecting positional vs nonpositional sleep apnea are presented in Table 3.

Figure 8. Confusion matrix for identifying positional sleep apnea. CNN: convolutional neural network.
Table 3. Performance of the models in detecting positional vs nonpositional sleep apnea.

Method | Accuracy (%) | Precision (%) | Recall (%) | F1-score (%)
3D-CNNa | 65.71 | 72.22 | 65.00 | 68.42
Baseline (Zhu et al [25]) | 34.29 | 42.11 | 40.00 | 41.03

aCNN: convolutional neural network.


Discussion

Principal Findings

The main contributions of this study are (1) the development and experimental validation of a new noncontact approach to estimate the AHI and (2) the application of this method to automatically identify individuals with positional sleep apnea. The newly developed 3D-CNN-based method outperformed the baseline model in estimating the AHI from infrared video data, although it was approximately 4 times slower than the baseline algorithm. Nevertheless, the new model could still process 5 hours of sleep data in approximately 20 hours. By combining estimated sleeping position information with the estimated AHI, this is the first noncontact method that can identify patients with positional sleep apnea.

The developed algorithm achieved comparable performance to existing contact methods (eg, those using a single wearable sensor or a sensor placed under the mattress). For example, Hafezi et al [15] analyzed tracheal movements captured by an accelerometer to estimate AHI and to identify patients with sleep apnea. They reported a Spearman correlation of 0.86 between estimated and ground-truth (PSG) AHI values, and accuracy and F1-score values of 84% and 82%, respectively, in detecting individuals with AHI≥15. As such, they achieved a higher correlation coefficient (0.86 vs 0.79) but a lower F1-score (82% vs 86%) than our noncontact approach. An advantage of using a noncontact method over contact-based approaches is ease of use and convenience. Davidovich et al [23] used a piezo-electric sensor under a mattress to estimate the AHI. They obtained an R2 value of 0.86 for AHI estimation, and accuracy and F1-score values of 88% and 84%, respectively, in identifying individuals with AHI≥15. Using a camera has the potential to result in a more accessible assessment technology, as it can be implemented in the form of a tablet or mobile phone app.

Limitations

Our study has some limitations. One limitation is that the event detection algorithm fails when the participant moves out of the camera’s field of view or when the room lighting changes suddenly. Another limitation is the small number of participants (N=41). The algorithm was validated via leave-one-person-out cross-validation; future work should examine the generalizability of these models to data collected in new environments.

Conclusion and Future Work

This study applied machine learning and computer vision approaches to develop a CNN-based method to detect respiratory events in different sleeping positions from data collected via an infrared camera. This method was validated on data from 41 participants to estimate AHI and to identify patients with positional sleep apnea.

This model could be used toward the development of affordable and easy-to-use technologies for screening sleep apnea at home (eg, in the form of a tablet or smartphone app). Such a system could help physicians in choosing suitable treatments for sleep apnea patients. Ultimately, improved treatment will reduce the consequences of untreated sleep apnea such as car accidents, heart disease, diabetes, and high blood pressure.

Acknowledgments

This work was supported in part by FedDev Ontario; BresoTec Inc; Natural Sciences and Engineering Research Council of Canada (NSERC) through the Discovery Grant (RGPIN-2020-04184); AMS Healthcare Fellowship in Compassion and Artificial Intelligence; and the Toronto Rehabilitation Institute, University Health Network.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Architecture of a 3D convolutional neural network used to detect apneas.

DOCX File, 15 KB

  1. American Academy of Sleep Medicine Task Force. Sleep-related breathing disorders in adults: recommendations for syndrome definition and measurement techniques in clinical research. Sleep 1999 Aug 01;22(5):667-689. [Medline]
  2. Kim NH. Obstructive sleep apnea and abnormal glucose metabolism. Diabetes Metab J 2012 Aug;36(4):268-272 [FREE Full text] [CrossRef] [Medline]
  3. Joosten SA, O'Driscoll DM, Berger PJ, Hamilton GS. Supine position related obstructive sleep apnea in adults: pathogenesis and treatment. Sleep Med Rev 2014 Mar;18(1):7-17. [CrossRef] [Medline]
  4. Oksenberg A, Silverberg DS, Arons E, Radwan H. Positional vs nonpositional obstructive sleep apnea patients: anthropomorphic, nocturnal polysomnographic, and multiple sleep latency test data. Chest 1997 Sep;112(3):629-639. [CrossRef] [Medline]
  5. Permut I, Diaz-Abad M, Chatila W, Crocetti J, Gaughan JP, D'Alonzo GE, et al. Comparison of positional therapy to CPAP in patients with positional obstructive sleep apnea. J Clin Sleep Med 2010 Jun 15;06(03):238-243. [CrossRef]
  6. Flemons WW, Douglas NJ, Kuna ST, Rodenstein DO, Wheatley J. Access to diagnosis and treatment of patients with suspected sleep apnea. Am J Respir Crit Care Med 2004 Mar 15;169(6):668-672. [CrossRef] [Medline]
  7. Young T, Evans L, Finn L, Palta M. Estimation of the clinically diagnosed proportion of sleep apnea syndrome in middle-aged men and women. Sleep 1997 Sep;20(9):705-706. [CrossRef] [Medline]
  8. Potts KJ, Butterfield DT, Sims P, Henderson M, Shames CB. Cost savings associated with an education campaign on the diagnosis and management of sleep-disordered breathing: a retrospective, claims-based US study. Popul Health Manag 2013 Mar;16(1):7-13. [CrossRef] [Medline]
  9. Diamanti C, Manali E, Ginieri-Coccossis M, Vougas K, Cholidou K, Markozannes E, et al. Depression, physical activity, energy consumption, and quality of life in OSA patients before and after CPAP treatment. Sleep Breath 2013 Dec 6;17(4):1159-1168. [CrossRef] [Medline]
  10. Bratton DJ, Gaisl T, Wons AM, Kohler M. CPAP vs mandibular advancement devices and blood pressure in patients with obstructive sleep apnea: a systematic review and meta-analysis. JAMA 2015 Dec 01;314(21):2280-2293. [CrossRef] [Medline]
  11. Merchant M, Farid-Moyer M, Zobnin Y, Kuznetcov A, Savitski A, Askeland J, et al. Clinical validation of a diagnostic patch for the detection of sleep apnea. In: Sleep Medicine. 2017 Dec Presented at: SLEEP 2017: 31st Annual Meeting of the Associated Professional Sleep Societies; June 3-7, 2017; Boston, Massachusetts p. e221. [CrossRef]
  12. Ayas N. Assessment of a wrist-worn device in the detection of obstructive sleep apnea. Sleep Medicine 2003 Sep;4(5):435-442. [CrossRef]
  13. Varon C, Caicedo A, Testelmans D, Buyse B, Van Huffel S. A novel algorithm for the automatic detection of sleep apnea from single-lead ECG. IEEE Trans Biomed Eng 2015 Sep;62(9):2269-2278. [CrossRef] [Medline]
  14. Nakano H, Hayashi M, Ohshima E, Nishikata N, Shinohara T. Validation of a new system of tracheal sound analysis for the diagnosis of sleep apnea-hypopnea syndrome. Sleep 2004 Aug 01;27(5):951-957. [CrossRef] [Medline]
  15. Hafezi M, Montazeri N, Saha S, Zhu K, Gavrilovic B, Yadollahi A, et al. Sleep apnea severity estimation from tracheal movements using a deep learning model. IEEE Access 2020;8:22641-22649. [CrossRef]
  16. Yadollahi A, Giannouli E, Moussavi Z. Sleep apnea monitoring and diagnosis based on pulse oximetry and tracheal sound signals. Med Biol Eng Comput 2010 Nov 24;48(11):1087-1097. [CrossRef] [Medline]
  17. Saha S, Kabir M, Montazeri Ghahjaverestan N, Hafezi M, Gavrilovic B, Zhu K, et al. Portable diagnosis of sleep apnea with the validation of individual event detection. Sleep Med 2020 May;69:51-57. [CrossRef] [Medline]
  18. Yadollahi A, Moussavi Z. Acoustic obstructive sleep apnea detection. Annu Int Conf IEEE Eng Med Biol Soc 2009;2009:7110-7113. [CrossRef] [Medline]
  19. Lévy P, Pépin JL, Deschaux-Blanc C, Paramelle B, Brambilla C. Accuracy of oximetry for detection of respiratory disturbances in sleep apnea syndrome. Chest 1996 Mar;109(2):395-399. [CrossRef] [Medline]
  20. Akbarian S, Montazeri Ghahjaverestan N, Yadollahi A, Taati B. Distinguishing obstructive versus central apneas in infrared video of sleep using deep learning: validation study. J Med Internet Res 2020 May 22;22(5):e17252 [FREE Full text] [CrossRef] [Medline]
  21. Jakkaew P, Onoye T. Non-contact respiration monitoring and body movements detection for sleep using thermal imaging. Sensors (Basel) 2020 Nov 05;20(21):6307 [FREE Full text] [CrossRef] [Medline]
  22. Deng F, Dong J, Wang X, Fang Y, Liu Y, Yu Z, et al. Design and implementation of a noncontact sleep monitoring system using infrared cameras and motion sensor. IEEE Trans Instrum Meas 2018 Jul;67(7):1555-1563. [CrossRef]
  23. Davidovich MLY, Karasik R, Tal A, Shinar Z. Sleep apnea screening with a contact-free under-the-mattress sensor. 2016 Presented at: Computing in Cardiology Conference (CinC); 2016; Vancouver, BC p. 849-852. [CrossRef]
  24. Nandakumar R, Gollakota S, Watson N. Contactless sleep apnea detection on smartphones. GetMobile: Mobile Comp and Comm 2015 Dec 23;19(3):22-24. [CrossRef]
  25. Zhu K, Yadollahi A, Taati B. Non-contact apnea-hypopnea index estimation using near infrared video. In: Annu Int Conf IEEE Eng Med Biol Soc. 2019 Jul Presented at: International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); 2019; Berlin, Germany p. 792-795. [CrossRef]
  26. Akbarian S, Delfi G, Zhu K, Yadollahi A, Taati B. Automated non-contact detection of head and body positions during sleep. IEEE Access 2019;7:72826-72834. [CrossRef]
  27. Ilg E, Mayer N, Saikia T, Dosovitskiy A, Keuper M, Brox T. FlowNet 2.0: Evolution of optical flow estimation with deep networks. 2017 Presented at: IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017; Hawaii p. 1647-1655   URL: http://lmb.informatik.uni-freiburg.de/Publications/2017/IMSKDB17 [CrossRef]
  28. Li MH, Yadollahi A, Taati B. Noncontact vision-based cardiopulmonary monitoring in different sleeping positions. IEEE J Biomed Health Inform 2017 Sep;21(5):1367-1375. [CrossRef]
  29. Zhu K, Li M, Akbarian S, Hafezi M, Yadollahi A, Taati B. Vision-based heart and respiratory rate monitoring during sleep – a validation study for the population at risk of sleep apnea. IEEE J Transl Eng Health Med 2019;7:1-8. [CrossRef]
  30. Kapur VK, Auckley DH, Chowdhuri S, Kuhlmann DC, Mehra R, Ramar K, et al. Clinical practice guideline for diagnostic testing for adult obstructive sleep apnea: an American Academy of Sleep Medicine Clinical Practice Guideline. J Clin Sleep Med 2017 Mar 15;13(3):479-504. [CrossRef] [Medline]


AHI: apnea-hypopnea index
CNN: convolutional neural network
PCA: principal component analysis
PSG: polysomnography


Edited by R Kukafka, G Eysenbach; submitted 15.12.20; peer-reviewed by S Guness, W Shadid, K Pandl; comments to author 20.03.21; revised version received 13.04.21; accepted 10.09.21; published 01.11.21

Copyright

©Sina Akbarian, Nasim Montazeri Ghahjaverestan, Azadeh Yadollahi, Babak Taati. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 01.11.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.