eAppendix. Survey for Video Semiology and Quality Review
eTable 1. Measures of Diagnostic Utility of SV for Convulsive vs Non-Convulsive Events Among the 11 Reviewers Who Assessed >30 Videos
eTable 2. Measure of Diagnostic Utility of SV for ES (A) and PNEA (B)
eTable 3. Measure of Diagnostic Accuracy of Smartphone Videos for ES and PNEA with “Unknowns” Excluded Versus Included
eTable 4. Likelihood and Odds Ratios as Measures of Diagnostic Utility of Smartphone Video for ES and PNEA
eFigure 1. Differences in LOC in Diagnosis from Smartphone Videos for Residents Versus Experts by Diagnostic Accuracy
eFigure 2. Reviewer-Designated Hindrances to Diagnosis from Smartphone Video
eFigure 3. Radar Plot Including Listed Reason for Difficulty with Diagnosis from Smartphone Video by Clinician Type
Tatum WO, Hirsch LJ, Gelfand MA, et al. Assessment of the Predictive Value of Outpatient Smartphone Videos for Diagnosis of Epileptic Seizures. JAMA Neurol. 2020;77(5):593–600. doi:10.1001/jamaneurol.2019.4785
How accurately does smartphone video–based diagnosis by epileptologists and residents predict final video electroencephalogram diagnosis of paroxysmal neurological events?
This diagnostic study conducted at 8 tertiary care epilepsy centers found that video reviewed by experts predicted a final diagnosis with an accuracy of 89% for epileptic seizures and 86% for psychogenic nonepileptic attacks. The findings also confirmed the ability to perform a secure exchange of smartphone videos among multiple institutions.
This study provides class II evidence demonstrating the high accuracy of smartphone videography and validates its value as an adjunct to routine history and physical examination.
Misdiagnosis of epilepsy is common. Video electroencephalogram monitoring provides a definitive diagnosis but is impractical for many patients referred for evaluation of epilepsy.
To evaluate the accuracy of outpatient smartphone videos in the diagnosis of epilepsy.
Design, Setting, and Participants
This prospective, masked, diagnostic accuracy study (the OSmartViE study) took place between August 31, 2015, and August 31, 2018, at 8 academic epilepsy centers in the United States and included a convenience sample of 44 nonconsecutive outpatients who volunteered a smartphone video during evaluation and subsequently underwent video electroencephalogram monitoring. Three epileptologists uploaded videos for physicians from the 8 epilepsy centers to review.
Main Outcomes and Measures
Measures of performance (accuracy, sensitivity, specificity, positive predictive value, and negative predictive value) for smartphone video–based diagnosis by experts and trainees (the index test) were compared with those for history and physical examination and video electroencephalogram monitoring (the reference standard).
Forty-four eligible epilepsy clinic outpatients (31 women [70.5%]; mean [range] age, 45.1 [20-82] years) submitted smartphone videos (530 total physician reviews). Final video electroencephalogram diagnoses included 11 epileptic seizures, 30 psychogenic nonepileptic attacks, and 3 physiologic nonepileptic events. Expert interpretation of a smartphone video was accurate in predicting a video electroencephalogram monitoring diagnosis of epileptic seizures 89.1% (95% CI, 84.2%-92.9%) of the time, with a specificity of 93.3% (95% CI, 88.3%-96.6%). Resident responses were less accurate for all metrics involving epileptic seizures and psychogenic nonepileptic attacks, despite greater confidence. Motor signs during events increased accuracy. One-fourth of the smartphone videos, all depicting psychogenic nonepileptic attacks, were correctly diagnosed by 100% of the reviewing physicians. When histories and physical examination results were combined with smartphone videos, correct diagnoses rose from 78.6% to 95.2%. The odds of receiving a correct diagnosis were 5.45 times greater using smartphone video alongside patient history and physical examination results than with history and physical examination alone (95% CI, 1.01-54.3; P = .02).
Conclusions and Relevance
Outpatient smartphone video review by experts has predictive and additive value for diagnosing epileptic seizures. Smartphone videos may reliably aid the diagnosis of psychogenic nonepileptic attacks for some people.
Epilepsy has a substantial global burden of disease.1 Diagnosis is made clinically based on a historical recount of witnessed events and a laboratory assessment. Differential diagnosis for seizures is broad. Even seasoned clinicians can be misled when individuals lack medical knowledge or pertinent terminology to accurately represent witnessed seizure behavior.2 Video electroencephalogram (EEG) monitoring (VEM) is recommended when there is diagnostic uncertainty in classifying seizure type or epilepsy syndrome.3 Video EEG monitoring provides objective evidence for definitive diagnosis4; however, 20% to 30% of patients undergoing inpatient VEM have nonepileptic conditions.5-8 Additionally, VEM may not be practical for some patients because of relative infrequency of events or occurrence only during certain activities or in unique settings.3,4 Geographic limitations, transportation constraints, and insurance coverage may also restrict access.2,4 Therefore, better methods to achieve accurate diagnoses are essential.
Applications and online communities have recently explored smartphones as potential health care tools to manage chronic medical conditions.9,10 While VEM is the criterion standard for seizure diagnosis, web-based smartphone use has become a popular means to augment clinical practice involving seizure reporting by people with epilepsy.10 Seizure Tracker (Seizure Tracker LLC), My Seizure Diary (Epilepsy Foundation), and EpiDiary (Irody Inc) are examples of available smartphone applications that allow self-collection of information, in turn aiding physicians in clinical decision-making and optimal management.11,12 We hypothesize that outpatient smartphone videos have good predictive value in adults referred for evaluation of epilepsy. We aimed to assess the accuracy of a smartphone video to predict a final VEM diagnosis of epilepsy.
A prospective, multicenter, masked clinical trial at 8 academic epilepsy centers (all certified as level IV by the National Association of Epilepsy Centers) evaluated adult outpatient smartphone videos between August 15, 2015, and August 31, 2018. Centers were selected based on diverse geography and excellence in epilepsy specialization. Study participants were outpatients referred for elective evaluation of events that may or may not have been epileptic seizures. Diagnostic VEM was recommended based on clinical necessity at the treating institution and was repeated only when further delineation was needed. A final diagnosis was rendered after VEM by an expert who was board certified in epilepsy and clinical neurophysiology. A convenience sample included self-selected patients who volunteered a smartphone video following initial consultation and prior to VEM. Three surveys were completed at different times: following a standard history and physical examination (HP) (treating physician), following smartphone video review (all physicians), and on reaching a final diagnosis after VEM (treating physician). History and physical examination results and VEM were used as a reference standard to assess the index test (smartphone video–based diagnosis). Reviewing physicians independently interpreted web-based videos and completed a diagnostic survey (eAppendix in the Supplement). A separate survey recording the final diagnosis, duration of VEM, and video quality was completed using VEM as the criterion standard. The primary goal was to identify and stratify important features of outpatient smartphone videography that apply to diagnostic interpretation in patients suspected of having epilepsy. A forced choice for a smartphone diagnosis of (1) epileptic seizures (ES), (2) psychogenic nonepileptic attack, (3) physiologic nonepileptic events, or (4) unknown diagnosis with a corresponding level of certainty (0-10 scale) was assigned.
Video interpretation and survey completion were performed by 10 epilepsy experts and 9 senior neurology residents (without plans for an epilepsy or sleep medicine fellowship at the same center) who were masked to the final VEM diagnosis. The decision to include an individual resident was made by the leadership at each program. Multiple responses were possible in response to specific questions. Data sharing was performed by a Health Insurance Portability and Accountability Act (HIPAA)–protected data transfer platform using a free web-based software application (CaptureProof). Computer-based surveys using proprietary software were accessed, completed, transferred, and securely stored after completion. The protocol was approved by the institutional review board of each participating center. All patients provided oral informed consent. Video uploading was performed by patients (or someone of their choice assisting them) using either Android (Open Handset Alliance) or iOS (Apple) formats. This study is reported in accordance with the Standards for Reporting of Diagnostic Accuracy guideline criteria.13
The primary study aim was to assess the diagnostic accuracy of smartphone videos reviewed by masked interpreters to correctly identify VEM diagnoses of epileptic seizures. Secondary aims involved determining the diagnostic accuracy of smartphone videos to correctly identify VEM diagnoses of psychogenic nonepileptic attacks and the association of experts vs trainees with video interpretation.
Patients were included in the study if they were 18 years or older, provided voluntary consent, completed an HP, submitted an outpatient smartphone video of their primary ictal event, underwent inpatient VEM, had more than 95% of each survey completed by reviewers, and had a final diagnosis. Patients were excluded from the study if they were younger than 18 years, pregnant, had an incomplete/absent HP, had no smartphone video, did not undergo VEM, had a confirmed history of mixed epileptic and nonepileptic events (based on prior VEM), declined study participation, or did not provide informed consent.
Patients voluntarily supplied a witness-generated outpatient smartphone video during initial evaluation. Videos recorded an observable event, and those chosen represented the most disabling or most common episode resulting in epilepsy clinic evaluation and prompting VEM. Instructions for acquiring and uploading smartphone video were provided to optimize recovery of information, requesting a recording of a single typical event, encompassing the entire body, lasting approximately 2 minutes, and demonstrating interactivity with the patient. Most patients submitted a single representative video as instructed; when several were submitted, the most informative and representative video was chosen based on the duration and historical depiction of ictal phenomenology. Data sharing was performed via HIPAA-compliant, password-protected data storage and transfer using a web-based or mobile software application (CaptureProof).
Eligible patients completed a single diagnostic VEM session in a hospital-based, academic, tertiary care epilepsy monitoring unit and received a final diagnosis. Video EEG monitoring was obtained at a National Association of Epilepsy Centers level IV epilepsy center.14 Final diagnosis following VEM was rendered by prominent epilepsy experts who were board certified in epilepsy and clinical neurophysiology. Reviewing physicians from different institutions were masked to VEM results.
Statistical analyses used Stata, version 15.1 (StataCorp). Diagnostic performance of a smartphone video was assessed as sensitivity, specificity, accuracy, and positive and negative predictive values. Variable numbers of videos per reviewer were addressed by additional sensitivity analyses to limit the effect of bias. Quantities were calculated by pooling all relevant assessments across reviewers and smartphone videos. Paired comparison of correct diagnoses with HP vs smartphone video used the McNemar test. For odds and likelihood ratios, significance was assessed using the χ2 test, and differences between experts and residents were assessed using the Mantel-Haenszel χ2 test of homogeneity. Two-sided P values less than .05 were considered statistically significant, and 95% CIs are reported. Differences in levels of confidence used the Mann-Whitney test for experts and residents, the Kruskal-Wallis test for the 3 paroxysmal event types, and the Wilcoxon signed rank test for treating physicians. Significance for the area under the curve used a 1-sample test of proportions. Diagnostic accuracy for convulsive vs nonconvulsive events used the Pearson χ2 test or the Fisher exact test when expected cell values were low. Interrater reliability was analyzed via the Scott/Fleiss κ coefficient.
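The performance measures named above are simple functions of a 2×2 table of index-test results against the reference standard. As a minimal illustrative sketch (hypothetical counts chosen for illustration, not the study data):

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 diagnostic-accuracy measures.

    tp/fp/fn/tn count index-test positives and negatives against
    the reference-standard (here, VEM) diagnosis.
    """
    return {
        "sensitivity": tp / (tp + fn),                # true-positive rate
        "specificity": tn / (tn + fp),                # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + fn + tn),  # overall agreement
        "ppv": tp / (tp + fp),                        # positive predictive value
        "npv": tn / (tn + fn),                        # negative predictive value
    }

# Hypothetical counts for illustration only (not the study data):
m = diagnostic_metrics(tp=80, fp=10, fn=20, tn=140)
```

Pooling all reviews into a single table, as the study describes, amounts to summing the counts across reviewers before applying these formulas.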
A mean of 330 annual hospital admissions for VEM occur in a level IV epilepsy center for evaluation and management recommendations of uncontrolled seizures or “spells.”14 As shown in Figure 1, 50 patients were recruited to the study and 44 met inclusion criteria, representing fewer than 1% of those admitted for VEM. Six patients were excluded because VEM was not performed after enrollment. Demographics are summarized in Table 1. Between performance of the index test and the reference standard, patients continued to receive the best medical treatment. No adverse events were encountered during the acquisition of a smartphone video or VEM in any patient. Patients identified events on smartphone videos and VEM to ensure targeted events were matched.
Standard HP was performed by 3 experts (L.J.H., M.G., and W.O.T.). Study surveys for HP and smartphone video review were available for analysis for 42 of 44 patients (95.5%). The predictive value for correct diagnoses from HP is listed in Table 2. For all events, 33 of 42 patients (78.6%) had a correct diagnosis derived from HP. When HP and smartphone video were combined, 40 of 42 (95.2%; 95% CI, 83.8%-99.4%) were correctly identified. Wrong diagnoses from HP (5 unknown, 4 incorrect) and smartphone video (4 unknown and 6 incorrect) compromised accuracy. Correct diagnosis after HP was no more likely than after smartphone video review when diagnoses disagreed. Overall, the diagnostic level of confidence after viewing a smartphone video (9/10) was higher than the diagnosis obtained with HP alone (8/10; P = .04).
Final clinical diagnosis followed a mean (SD) of 3.1 (1.9) days of VEM and included 11 patients (25.0%) with epileptic seizures, 30 (68.2%) with psychogenic nonepileptic attacks, and 3 (6.8%) with physiologic nonepileptic events (Table 1). One of the patients with a psychogenic nonepileptic attack received a dual diagnosis at discharge (also received a diagnosis of physiologic nonepileptic events), but these secondary events (physiologic nonepileptic events/somnolence) were not the targeted events before VEM, were not paroxysmal, and could not be confused with the psychogenic nonepileptic attacks in question before admission. The incidence of dual-diagnosed physiologic nonepileptic events and psychogenic nonepileptic attacks was 2.3% (1/44). This finding is substantially lower than prior estimates, likely because of the strict pre-VEM selection criteria to exclude patients with a confirmed history of mixed events.15
Overall, 34 patients (77.3%) had at least 1 typical outpatient event during VEM. Of the remaining 10 patients, 3 (6.8% of the cohort) had atypical events (eg, a patient with atypical tremors; a patient with “minor spells,” sleepiness, and hypnagogic jerks; and a patient with brief body jerks and a subjective “jerklike feeling with internal warmth,” with final diagnoses of physiologic nonepileptic events, psychogenic nonepileptic attacks/physiologic nonepileptic events, and psychogenic nonepileptic attacks, respectively); 2 (4.5% of the cohort) had epileptiform discharges on EEG (both with generalized epileptiform discharges: one with generalized polyspike-and-wave with Jeavons syndrome and the other with 5-Hz generalized spike-and-wave and a final diagnosis of psychogenic nonepileptic attacks). Five patients (11.4% of the cohort) did not have events or EEG findings recorded during VEM (including 3 final diagnoses of psychogenic nonepileptic attacks and 2 of epileptic seizures), in line with prior studies reporting event noncapture rates of approximately 20% during VEM.16,17
Five hundred thirty smartphone video reviews were completed by 19 reviewers (10 epilepsy experts and 9 senior neurology residents), with a mean of 6.6 experts and 5.5 residents per video (range, 1-9 experts and 1-8 residents per video). Because the number of smartphone videos assessed varied between reviewers, a sensitivity analysis was performed to prevent biasing toward reviewers who assessed more videos. For the 19 reviewers, the median number of smartphone videos reviewed was 34 (range, 3-44), and 11 of the 19 reviewers (57.9%) interpreted more than 30 smartphone videos (eTables 1 and 2 in the Supplement). Overall sensitivity for correct interpretation of epileptic seizures was lower for residents than for experts (41.5% vs 76.8%) despite similar specificity (88.3% vs 93.3%). For experts predicting psychogenic nonepileptic attacks, sensitivity (89.9% vs 86.6%) and specificity (77.5% vs 52.4%) were greater than for residents. Smartphone videos averaged 2.23 minutes (range, 9 seconds to 9 minutes 5 seconds), compared with a typical 60-minute HP (P < .001).
Expert interpretation of a smartphone video was accurate in predicting a VEM-confirmed diagnosis of epileptic seizures 89.1% of the time, with a specificity of 93.3% (Table 3). Diagnostic accuracy was reduced when 120 surveys (70 [58.3%] from experts, 50 [41.7%] from residents) with an unknown diagnosis were included (eTable 3 in the Supplement). Among event types, 28 of 137 epileptic seizures (20.4%), 76 of 352 psychogenic nonepileptic attacks (21.6%), and 16 of 41 physiologic nonepileptic events (39.0%) were unknown. Eleven videos had 100% diagnostic accuracy by reviewing physicians; 8 were correctly predicted and 2 were unknowns by HP (1 excluded for duplicate responses). The odds of receiving a correct diagnosis were 5.45 times greater using smartphone videos alongside HP than with HP alone (95% CI, 1.01-54.3; P = .02). Overall, patients were 6.65 times (95% CI, 4.49-9.83) more likely to receive a correct diagnosis of epileptic seizures and 2.58 times (95% CI, 2.03-3.27) more likely to receive a diagnosis of psychogenic nonepileptic attacks from a smartphone video when they actually had the condition (P < .001) (eTable 4 in the Supplement). Resident responses were less accurate for all metrics involving epileptic seizures and psychogenic nonepileptic attacks, including accuracy, sensitivity, specificity, and positive and negative predictive values (Table 3). However, residents reported higher levels of confidence compared with experts (eFigure 1 in the Supplement). Overall, the median level of confidence was 8 of 10 (interquartile range [IQR], 6-9) for residents and 7 of 10 (IQR, 6-9) for experts (P = .02). Levels of confidence also differed between epileptic seizures (median, 7; IQR, 5-8), psychogenic nonepileptic attacks (median, 8; IQR, 7-9), and physiologic nonepileptic events (median, 5; IQR, 4-8; P < .001). Resident interrater reliability was fair (κ = 0.30; 95% CI, 0.17-0.42) and expert interrater reliability moderate (κ = 0.44; 95% CI, 0.32-0.56).18
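Odds ratios like those reported here come from 2×2 tables of correct vs incorrect diagnoses. The study's primary comparison was paired (McNemar-type), so as a simplified, unconditional sketch only (hypothetical counts, not the study's exact method or data):

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Unconditional odds ratio for a 2x2 table with a Wald-type CI.

    Rows: with vs without the added test (eg, HP + video vs HP alone);
    columns: correct vs incorrect diagnosis.
    a, b = correct/incorrect with; c, d = correct/incorrect without.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts chosen for illustration only:
or_, lo, hi = odds_ratio_ci(40, 2, 33, 9)
```

Note that the study's reported interval (1.01-54.3) reflects its own paired analysis; this sketch shows only the general form of the computation.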
The video quality was adequate for interpretation in 10 of 11 patients with epileptic seizures (90.9%) and 25 of 30 with psychogenic nonepileptic attacks (83.3%) (eFigure 2 in the Supplement). Physiologic nonepileptic events lacked similar acceptability (1/3 [33.3%]), probably because of the low sample size. Variability in smartphone video interpretation was due to technique as opposed to technical limitations (eFigure 3 in the Supplement) and varied by clinician type.
Eleven of 44 smartphone videos (25.0%) had high diagnostic value, with 100% of reviewing physicians correctly predicting a final diagnosis from the video; these videos all depicted psychogenic nonepileptic attacks. Epileptic seizures and physiologic nonepileptic events were never correctly diagnosed by 100% of the physicians reviewing the smartphone videos. When examined by event type, 10 of 11 smartphone videos with epileptic seizures (90.9%) and 16 of 30 with psychogenic nonepileptic attacks (53.3%) were nonconvulsive. Semiology was classified as motor and nonmotor.19 Overall, convulsive episodes (9 correct vs 2 incorrect/unknown reviews; P < .03) and psychogenic nonepileptic attacks (148 correct vs 13 incorrect/unknown; P < .001) were significantly more likely to be correctly diagnosed from video review (Figure 2). Convulsive events had higher diagnostic accuracy than nonconvulsive events for both epileptic seizures (98.2% [95% CI, 94.7%-99.6%] vs 72.4% [95% CI, 66.3%-77.8%]) and psychogenic nonepileptic attacks (95.7% [95% CI, 91.4%-98.3%] vs 71.1% [95% CI, 65.0%-76.7%]).
This study provides class II evidence for the diagnostic value of smartphone videos in adult outpatients with epilepsy. An epilepsy diagnosis by experts is sensitive with an adequate HP; however, HP alone lacks specificity for some semiologies.2,20 We found that smartphone video diagnostic accuracy was comparable with HP when unknown/incorrect responses were excluded. Approximately 1 in 5 patients received an unknown or incorrect diagnosis from HP alone, but when HP was combined with a smartphone video, the diagnostic yield rose to 95.2%. A prior VEM study of 5 epileptologists analyzing 41 events (34 epileptic seizures) in 30 patients had an overall video diagnostic accuracy of 65% using charted description and 88% with video, similar to this study.21 In another study of 43 patients with video and EEG that relied on video recordings alone, the authors correctly identified 27 epileptic seizures with a sensitivity of 93% and specificity of 94%.22 The lower sensitivity of 76.8% in this study, despite a similar specificity of 93.3% by expert reviewers, likely reflects unbiased patient-derived video preselection. The odds of receiving a correct diagnosis were 5.45 times greater using smartphone video alongside HP than with HP alone (95% CI, 1.01-54.3; P = .02). Compared with a typical 60-minute HP, the time saved was substantial.
Smartphone videos added diagnostic value to HP, especially for patients with psychogenic nonepileptic attacks. Almost 70% of patients who volunteered a smartphone video had psychogenic nonepileptic attacks, serving as a potential “red flag” for a seizure mimic. We speculate that the patient’s need to validate the attacks as real may be a partial explanation. Semiology is a key clinical tool when evaluating individuals presenting with seizures.19 Behavioral description between caregivers and neurologists has only fair to moderate interrater reliability.23,24 In this real-world study, expert interrater reliability for a correct smartphone video diagnosis was moderate, lower than in studies reporting excellent interrater reliability21 but similar to others involving psychogenic nonepileptic attacks.20 We found greater predictability when motor signs were present. This aligns with one study involving video review22 but contrasts with another that included nonmotor symptoms. In a pediatric cohort using HP, consistent predictability was present for motor seizures but only fair agreement for nonconvulsive seizures.25 The sensitivity of smartphone videos to predict psychogenic nonepileptic attacks was greater than for epileptic seizures. The 25% subset of videos for which 100% of reviewing physicians correctly predicted a final diagnosis was composed entirely of psychogenic nonepileptic attacks. A prospective VEM study found a similar number of cases (7/23 [30%]) in which a confident diagnosis was established by all 5 epilepsy expert raters; however, by contrast, diagnosis was split between epileptic seizures and psychogenic nonepileptic attacks based on video data alone.24
In clinical practice, some nonmotor semiologies (eg, eye closure, visual hallucinations, and ictal speech) have diagnostic, localizing, or lateralizing value. A large prospective observational study from India using 29 targeted signs in 312 patients with 624 home videos, compared with 282 patients recording 572 seizures, identified 3.3 features per video vs 2.1 from medical history.26 One video study found a diagnostic difference for the area under the curve comparing neurologists with neurology trainees.27 We also saw similar differences using smartphone videos for the area under the curve, although our prospective, multiexpert, multicenter US study was all-inclusive without video selection and did not use a leave-one-out analysis. Addressing diagnostic confidence, we found that experts scored higher more consistently than residents on virtually every metric. However, despite the relative lack of predictability, the resident level of confidence was consistently greater. This reflects the effect of experience in certain disciplines and activities28 but contrasts with other procedures.29 Greater exposure to VEM during a neurology residency could help underscore how often one is incorrect about a seizure diagnosis.2,4,5,21 Hence, less predictive value was seen when nonexperts viewed smartphone videos; therefore, expert results likely underestimate the overall clinical effect.
To make an accurate diagnosis of epilepsy, neurologic assessment takes up to 60 minutes to complete and years of training to master.30 This raises issues for developing countries, which often lack neurologists and skilled EEG technologists.26 “Smart” telemedicine includes other neurologic conditions, such as stroke,30,31 multiple sclerosis,32 Parkinson disease,33,34 and sleep disorders.35 The rise in demand for electronic consulting to resolve testing inconsistencies for EEG emphasizes the need for additional information.36 Patients with paroxysmal neurologic events and limited access to care26 may now obtain a semiology-based, time-efficient expert opinion at low or no cost via HIPAA-secured video transfer. For a subset of patients (ie, those with psychogenic nonepileptic attacks), the potential to triage or eliminate VEM could provide substantial resource reallocation and cost savings. Despite the move toward advancing mobile health, legal barriers, including state licensure and practice laws, credentialing, and liability concerns, may limit interstate performance of telehealth.37
Despite robust findings, this study has limitations. The small sample sizes of epileptic seizures and physiologic nonepileptic events are a major limitation and do not incorporate the full array of semiology seen during VEM.38 Several factors, including physician time for review, patients’ ability to transfer videos, strict inclusion criteria, and the lack of on-site study coordinators, hampered enrollment and retention. Still, multiple video views add strength to the conclusions. Temporal assessment was restricted to a single video at a single point in time. Assessing longitudinal care with serial videos could reveal even greater benefits.35 Study clinicians were notable epilepsy experts and residents at comprehensive epilepsy centers; therefore, the results may not be generalizable to community clinicians. Moreover, smartphone videos were from patients identified at tertiary care referral centers and therefore represented atypical events presenting diagnostic challenges. Also, while our survey was not a validated tool, surveys are often used to capture the qualitative aspects of semiology.2,20,26,27 We recognize that there could be inherent statistical bias in treating smartphone video reviews as independent events. The pragmatic, real-world design of this study resulted in findings derived from pooled data composed of multiple video views. Aggregating measures of accuracy across many different reviewers without weighting the results is a further limitation. Furthermore, we recognize that omitting reviews when a diagnosis could not be determined from smartphone video could distort statistical results (as demonstrated in eTable 3 in the Supplement). However, we felt this exclusion was warranted, as even in the presence of a stated unknown diagnosis, smartphone video may still provide additive clues.
Several practical limitations require comment. More generally, using smartphone videos as a diagnostic tool may be further limited by (1) the degree of digital sophistication, (2) preserved patient consciousness and motor abilities, (3) an observer in proximity who is able to conduct video recording, and (4) costs to maintain electronic platforms and wireless connection with established video transfer privacy.39 Emphasizing recording of the ictus (vs the postictal state), event duration, whole-body focus, and interaction demonstrating impaired awareness are helpful tips to enhance diagnostic reliability. Overall, the inconsistencies associated with the witnessed description of an event40 are offset by the accuracy and time-efficiency of adjunctive smartphone videos. Because videos are often obtained by untrained and inexperienced personnel, our results may underestimate the true potential of smartphone video to improve diagnostic accuracy.
Secure big data transfer of outpatient smartphone videos is feasible between multiple epilepsy centers to provide diagnostic interpretation. Smartphone videos have added value for final VEM diagnosis of epileptic seizures. The usefulness of smartphone videos is most specific for patients with psychogenic nonepileptic attacks, especially when motor signs are present. Expert evaluation of smartphone videos reflects a modern mobile health tool and is a useful adjunct to HP. The ability to identify patients without epilepsy before inpatient VEM could triage and realign resources to patients for whom the need is high (ie, surgery). The adjunctive use of adult outpatient smartphone videos is expected to change the practice of epilepsy diagnosis and management currently based solely on HP.
Accepted for Publication: November 23, 2019.
Corresponding Author: William O. Tatum, DO, Department of Neurology, Mayo Clinic, 4500 San Pablo Rd, Mangurian Building, 4th Floor, Jacksonville, FL 32224 (email@example.com).
Published Online: January 21, 2020. doi:10.1001/jamaneurol.2019.4785
Author Contributions: Dr Tatum had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Tatum, LaFrance, Chen, Hixson, Benbadis, Cascino.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Tatum, Acton, LaFrance, Hixson, Benbadis, Cascino.
Critical revision of the manuscript for important intellectual content: Tatum, Hirsch, Gelfand, Acton, LaFrance, Duckrow, Chen, Blum, Hixson, Drazkowski, Cascino.
Statistical analysis: Acton, Hixson.
Obtained funding: Tatum.
Administrative, technical, or material support: Tatum, Duckrow, Chen, Blum, Hixson, Cascino.
Supervision: Tatum, Hixson, Benbadis, Cascino.
Other - data analysis and scoring: Blum.
Conflict of Interest Disclosures: Dr Tatum reports receiving research support from the Mayo Clinic, Eisai, Engage, LivaNova, and the Martin Family Foundation; royalties from Demos Publishers Inc and Springer Publishing; and speaker fees from the American Academy of Neurology, American Epilepsy Society, and the American Clinical Neurophysiology Society. He serves as editor in chief for Epilepsy & Behavior Reports and holds patents on intraoperative monitoring sensor devices (#62527896 and #62770362). Dr Hirsch reported personal fees from Adamas, Aquestive, Ceribell, Eisai, Marinus, Medtronic, Neuropace, and UCB outside the submitted work and royalties for authoring chapters for UpToDate-Neurology and from Wiley for co-authoring the book Atlas of EEG in Critical Care. Dr Gelfand reported grants from Aquestive, Engage, UCB, Biogen, LivaNova, SK Pharma, and Eisai outside the submitted work. Dr Blum served as a medical director for ambulatory electroencephalogram services for United Neurodiagnostics and received nonfinancial support from Empatica outside the submitted work; in addition, Dr Blum had a patent with Springer Publishing with royalties paid and received research grant support from the US Department of Defense for Biomarkers of Seizures. No other disclosures were reported.
Funding/Support: Support was provided in the form of a $5000 grant from Mayo Clinic in Florida for initial startup coordinator fees during the multisite initiation of the study.
Role of the Funder/Sponsor: The funding organizations had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
OSmartViE Investigators: Rachel Beekman, MD, Yale University, New Haven, Connecticut; Diego Carvalho, MD, Mayo Clinic, Rochester, Minnesota; Iris Vanessa Marin Collazo, MD, Mayo Clinic, Jacksonville, Florida; Erin Coonan, BS, Boston College, Chestnut Hill, Massachusetts; Jon Kleen, MD, University of California, San Francisco; Alfonso Lopez, MD, Mayo Clinic, Jacksonville, Florida; Erin Okazaki, MD, Mayo Clinic, Phoenix, Arizona; Ashish Ranpura, MD, Yale University, New Haven, Connecticut; Laura Mainardi Villarino, MD, University of Pennsylvania, Philadelphia; and Scott Yuan, MD, Yale University, New Haven, Connecticut.
Additional Contributions: We thank Andrew Cucchiara, MD, University of Pennsylvania, and Mike Heckman, MD, Mayo Clinic, Jacksonville, Florida, for their supplemental statistical support. We also thank CaptureProof® for in-kind support for use of the software to provide secure storage and transfer of smartphone videos and to Meghan Conroy, chief executive officer, for her support of the study. We thank Alison Dowdell, BA, Mayo Clinic, for assistance in academic support. None of these individuals were compensated for their contributions.