    Original Investigation
    Health Informatics
    September 29, 2021

    Assessment of the Feasibility of Using Noninvasive Wearable Biometric Monitoring Sensors to Detect Influenza and the Common Cold Before Symptom Onset

    Author Affiliations
    • 1Biomedical Engineering Department, Duke University, Durham, North Carolina
    • 2Duke Center for Applied Genomics and Precision Medicine, Duke University Medical Center, Durham, North Carolina
    • 3Durham Veterans Affairs Medical Center, Durham, North Carolina
    • 4Department of Medicine, Duke Global Health Institute, Durham, North Carolina
    • 5Department of Infectious Disease, Imperial College London, London, United Kingdom
    • 6Department of Pediatrics, University of Virginia School of Medicine, Charlottesville
    • 7Department of Psychiatry, Duke University School of Medicine, Durham, North Carolina
    • 8Department of Medicine, Duke University School of Medicine, Durham, North Carolina
    • 9Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor
    • 10Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, North Carolina
    JAMA Netw Open. 2021;4(9):e2128534. doi:10.1001/jamanetworkopen.2021.28534
    Key Points

    Question  Can noninvasive, wrist-worn wearable devices detect acute viral respiratory infection and predict infection severity before symptom onset?

    Findings  In this cohort study of 31 participants inoculated with H1N1 influenza and 18 participants inoculated with rhinovirus, infection detection and severity prediction models trained on wearable device data distinguished infection from noninfection with 92% accuracy for H1N1 and 88% accuracy for rhinovirus, and distinguished mild from moderate infection 24 hours prior to symptom onset with 90% accuracy for H1N1 and 89% accuracy for rhinovirus.

    Meaning  This study suggests that the use of wearable devices to identify individuals with presymptomatic acute viral respiratory infection is feasible; because wearable devices are common in the general population, using them for infection screening may help limit the spread of contagion.

    Abstract

    Importance  Currently, there are no presymptomatic screening methods to identify individuals infected with a respiratory virus to prevent disease spread and to predict their trajectory for resource allocation.

    Objective  To evaluate the feasibility of using noninvasive, wrist-worn wearable biometric monitoring sensors to detect presymptomatic viral infection after exposure and predict infection severity in patients exposed to H1N1 influenza or human rhinovirus.

    Design, Setting, and Participants  The cohort H1N1 viral challenge study was conducted during 2018; data were collected from September 11, 2017, to May 4, 2018. The cohort rhinovirus challenge study was conducted during 2015; data were collected from September 14 to 21, 2015. A total of 39 adult participants were recruited for the H1N1 challenge study, and 24 adult participants were recruited for the rhinovirus challenge study. Exclusion criteria for both challenges included chronic respiratory illness and high levels of serum antibodies. Participants in the H1N1 challenge study were isolated in a clinic for a minimum of 8 days after inoculation. The rhinovirus challenge took place on a college campus, and participants were not isolated.

    Exposures  Participants in the H1N1 challenge study were inoculated via intranasal drops of diluted influenza A/California/03/09 (H1N1) virus with a mean count of 10⁶ using the median tissue culture infectious dose (TCID50) assay. Participants in the rhinovirus challenge study were inoculated via intranasal drops of diluted human rhinovirus strain type 16 with a count of 100 using the TCID50 assay.

    Main Outcomes and Measures  The primary outcome measures included cross-validated performance metrics of random forest models to screen for presymptomatic infection and predict infection severity, including accuracy, precision, sensitivity, specificity, F1 score, and area under the receiver operating characteristic curve (AUC).
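    For illustration, the cross-validated metrics named above can be sketched in code. The Python example below is a hypothetical, minimal sketch and not the authors' pipeline: the feature matrix X and infection labels y are random placeholders standing in for wearable-derived features, and a random forest classifier is evaluated with stratified cross-validation to report accuracy, precision, sensitivity, specificity, F1 score, and AUC.

    # Illustrative sketch only; X (features derived from wearable data) and y
    # (infection labels) are hypothetical placeholders, not the study's data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold
    from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                 f1_score, roc_auc_score, confusion_matrix)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(49, 10))          # placeholder feature matrix
    y = rng.integers(0, 2, size=49)        # placeholder binary infection labels

    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    metrics = {"accuracy": [], "precision": [], "sensitivity": [],
               "specificity": [], "f1": [], "auc": []}

    for train_idx, test_idx in cv.split(X, y):
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        prob = model.predict_proba(X[test_idx])[:, 1]

        # Specificity is derived from the confusion matrix (true negative rate).
        tn, fp, fn, tp = confusion_matrix(y[test_idx], pred, labels=[0, 1]).ravel()
        metrics["accuracy"].append(accuracy_score(y[test_idx], pred))
        metrics["precision"].append(precision_score(y[test_idx], pred, zero_division=0))
        metrics["sensitivity"].append(recall_score(y[test_idx], pred, zero_division=0))
        metrics["specificity"].append(tn / (tn + fp) if (tn + fp) else float("nan"))
        metrics["f1"].append(f1_score(y[test_idx], pred, zero_division=0))
        metrics["auc"].append(roc_auc_score(y[test_idx], prob))

    for name, values in metrics.items():
        print(f"{name}: {np.nanmean(values):.2f}")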

    Results  A total of 31 participants with H1N1 (24 men [77.4%]; mean [SD] age, 34.7 [12.3] years) and 18 participants with rhinovirus (11 men [61.1%]; mean [SD] age, 21.7 [3.1] years) were included in the analysis after data preprocessing. Separate H1N1 and rhinovirus detection models, using only wearable device data as input, were able to distinguish between infection and noninfection with accuracies of up to 92% for H1N1 (90% precision, 90% sensitivity, 93% specificity, 90% F1 score, and 0.85 [95% CI, 0.70-1.00] AUC) and 88% for rhinovirus (100% precision, 78% sensitivity, 100% specificity, 88% F1 score, and 0.96 [95% CI, 0.85-1.00] AUC). The infection severity prediction model was able to distinguish between mild and moderate infection 24 hours prior to symptom onset with an accuracy of 90% for H1N1 (88% precision, 88% sensitivity, 92% specificity, 88% F1 score, and 0.88 [95% CI, 0.72-1.00] AUC) and 89% for rhinovirus (100% precision, 75% sensitivity, 100% specificity, 86% F1 score, and 0.95 [95% CI, 0.79-1.00] AUC).

    Conclusions and Relevance  This cohort study suggests that the use of a noninvasive, wrist-worn wearable device to predict an individual’s response to viral exposure prior to symptoms is feasible. Harnessing this technology would support early interventions to limit presymptomatic spread of viral respiratory infections, which is timely in the era of COVID-19.
