Research Letter
August 10, 2020

Saliency-Driven Visual Search Performance in Toddlers With Low– vs High–Touch Screen Use

Author Affiliations
  • 1Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, London, England
  • 2Biostatistics and Health Informatics Department, Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, England
  • 3University of East Anglia, Norwich, England
JAMA Pediatr. 2021;175(1):96-97. doi:10.1001/jamapediatrics.2020.2344

During toddlerhood, a peak period of neurocognitive development, increased exposure to sensory stimulation through touch screen use may influence developing attentional control.1 While TV’s rapidly changing, noncontingent flow of sensory information has been hypothesized to lead to difficulties voluntarily focusing attention,2 video gaming’s contingent and cognitively demanding sensory environments may improve visual processing and attention.3 Toddler touch screen use involves both exogenous attention, driven by salient audiovisual features, and endogenous/voluntary control, eg, video selection and app use.4,5

The current study compared high– and low–touch screen users on a gaze-contingent visual search paradigm,6 assessing exogenous, saliency-based attention (single-feature trials) and endogenous attention control (conjunction trials).


Infants aged 12 months were recruited from October 2015 to March 2016 (as part of the TABLET project5) and followed up longitudinally at 18 months and 3.5 years. Parents gave informed written consent, and the Birkbeck, University of London institutional review board approved this study. Before each visit, parents were asked, “On a typical day, how long does your child spend using a touchscreen device (tablet, smartphone or touchscreen laptop)?” Participants were recruited as high users and low users based on the median use of 10 minutes per day reported in a previous survey sample.5 At 18 months and 3.5 years, user groups were reassigned using the within-sample median (15 minutes per day). At recruitment, groups were matched on developmental level (Mullen Scales of Early Learning), age, sex, background TV (parent-reported minutes per day), and mother’s education.
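In outline, the median-split assignment can be sketched as follows (a minimal Python illustration; the function name and example values are ours, with the thresholds taken from the text):

```python
def assign_user_group(daily_minutes, median_minutes):
    """Classify a child as a high or low touch screen user by a median
    split on parent-reported daily use (minutes per day)."""
    return "high" if daily_minutes > median_minutes else "low"

# At recruitment the split used the 10 min/day median from a prior
# survey sample; at 18 months and 3.5 years the within-sample
# 15 min/day median was used instead. Usage values are invented.
groups_12m = [assign_user_group(m, 10) for m in (5, 30, 0, 45)]
groups_18m = [assign_user_group(m, 15) for m in (5, 30, 0, 45)]
```

Because the later visits use the within-sample median, a child near the boundary can change groups between visits, which is why the analysis below treats group as time-varying.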

The visual search task was administered at 18 months and 3.5 years (Tobii TX300 eye tracker with 120-Hz tracking, 60-cm distance, 5-point calibration). Arrays were presented (single feature [target red apple among blue apples; set sizes 5 and 9] or conjunction [target red apple among blue apples and slices of red apples; set sizes 5, 9, and 13]) for 4 seconds or until the target was fixated; only set sizes matched across conditions (ie, 5 and 9) were analyzed. Trials were presented continuously, grouped into blocks: (1) 3 single feature, fixed order; (2) 1 single feature, 9 conjunction, randomized; and (3) 4 single feature, 9 conjunction, randomized. P values were 2-sided and considered significant at less than .05. SPSS (SPSS Inc) was used. Analysis began November 2018 and ended November 2019.
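The gaze-contingent trial logic (end the trial once the target is fixated, otherwise time out at 4 seconds) implies a saccadic reaction time computed from the first gaze sample landing on the target after array onset. A minimal sketch, assuming gaze samples as (timestamp, x, y) tuples and a rectangular target area of interest; both representations and all names are ours, not the authors’:

```python
def saccadic_reaction_time(gaze_samples, target_aoi, onset_ms, timeout_ms=4000):
    """Latency (ms) of the first gaze sample inside the target area of
    interest (AOI) after array onset, or None if the target was never
    fixated within the trial window.

    gaze_samples: iterable of (timestamp_ms, x, y)
    target_aoi:   (x0, y0, x1, y1) bounding box in screen coordinates
    """
    x0, y0, x1, y1 = target_aoi
    for t, x, y in gaze_samples:
        if t < onset_ms or t > onset_ms + timeout_ms:
            continue  # outside the 4 s trial window
        if x0 <= x <= x1 and y0 <= y <= y1:
            return t - onset_ms
    return None  # target never fixated: trial timed out
```

A real pipeline would additionally require a minimum dwell time and filter low-quality samples; this sketch only illustrates the timing logic.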


Of 56 infants recruited, 49 were followed up at 18 months and 46 at 3.5 years. Data quality and accuracy did not differ significantly across groups. Linear generalized estimating equations for saccadic reaction time (SRT) (Figure) were run with an unstructured correlation matrix (a deviation from the preregistered 3.5-year analysis of variance; https://osf.io/fxu7y) to accommodate missing data and to treat group as a time-varying predictor (some children changed user groups over time; usage correlations: 12 to 18 months, Spearman rs = 0.78; 18 months to 3.5 years, rs = 0.33; 12 months to 3.5 years, rs = 0.31).
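The usage-stability values are Spearman rank correlations between visits, ie, the Pearson correlation of the ranked usage times. A self-contained sketch of the statistic (function names ours; ties receive average ranks):

```python
def rankdata(values):
    """1-based ranks, with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over a run of ties
        avg = (i + j) / 2 + 1  # mean of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rs(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```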

Figure.  Visual Search Reaction Times (SRTs)

Shaded areas represent standard error of the mean.

User groups did not differ significantly in conjunction SRTs, but high users were faster than low users in single-feature trials (Table). Post hoc analyses showed faster SRTs for high users vs low users in block 1 single-feature trials (Bonferroni-corrected P = .003; mean difference = 360 milliseconds; SE = 104 milliseconds), with no group difference in the remaining single-feature trials (Bonferroni-corrected P = .75; mean difference = 118 milliseconds; SE = 77 milliseconds).
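Bonferroni correction simply multiplies each raw P value by the number of post hoc comparisons and caps the result at 1. A minimal sketch (the raw P values below are illustrative inputs, not values from the paper):

```python
def bonferroni_adjust(p_values):
    """Bonferroni-adjusted P values: each raw P multiplied by the number
    of comparisons, capped at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Two post hoc contrasts (eg, block 1 vs remaining single-feature
# trials); raw P values here are made up for illustration.
adjusted = bonferroni_adjust([0.0015, 0.375])
```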

Table.  Generalized Estimating Equations for Visual Search Saccadic Reaction Times Predicted by Concurrent Usage Group, Visit, Search Type, and Set Size

Follow-up multiple regressions tested the specificity of concurrent vs longitudinal associations. At 18 months, duration of concurrent use was associated with single-feature SRT (β = −0.62; P = .03), over and above 12-month usage (β = 0.48; P = .09). At 3.5 years, concurrent use was marginally associated with single-feature SRT (β = −0.35; P = .05), with no association at 12 months (β = 0.18; P = .65) or 18 months (β = −0.02; P = .96).
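The “over and above” regressions enter concurrent and earlier usage together as predictors of SRT and report standardized coefficients (β). A sketch of that model structure for the two-predictor case, obtained by z-scoring all variables and solving the 2 × 2 normal equations (function names and data are ours, not the authors’ code):

```python
def zscore(v):
    """Standardize a variable to mean 0, sample SD 1."""
    n = len(v)
    m = sum(v) / n
    s = (sum((x - m) ** 2 for x in v) / (n - 1)) ** 0.5
    return [(x - m) / s for x in v]

def standardized_betas(y, x1, x2):
    """Standardized coefficients (betas) for y ~ x1 + x2: z-score all
    variables, then solve the 2x2 normal equations (intercept is 0
    after standardization)."""
    zy, z1, z2 = zscore(y), zscore(x1), zscore(x2)
    r11 = sum(a * a for a in z1)
    r22 = sum(a * a for a in z2)
    r12 = sum(a * b for a, b in zip(z1, z2))
    r1y = sum(a * b for a, b in zip(z1, zy))
    r2y = sum(a * b for a, b in zip(z2, zy))
    det = r11 * r22 - r12 * r12
    b1 = (r22 * r1y - r12 * r2y) / det
    b2 = (r11 * r2y - r12 * r1y) / det
    return b1, b2
```

In the study’s terms, x1 would be concurrent usage and x2 earlier usage; a negative β for x1 with a near-zero β for x2 corresponds to the concurrent-specific pattern reported above.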


Toddler touch screen use is associated with faster single-feature but not conjunction search, indicative of greater saliency-driven attention without impaired endogenous control. Results are specific to concurrent usage, suggesting recent touch screen experience may prime attention for exogenous control. Faster high-user SRTs in block 1 suggest a possible saliency bias coming into the task, rather than faster within-task learning. The real-world consequences, particularly when saliency and endogenous goals conflict (eg, focusing on schoolwork in a busy classroom), remain to be established. Future studies should use objective tracking of the child’s complex media environment to assess specificity across platforms, content, and type of use, and to establish whether touch screen use has a causal influence on attention control.

Article Information

Corresponding Author: Tim J. Smith, PhD, Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England (tj.smith@bbk.ac.uk).

Accepted for Publication: March 10, 2020.

Published Online: August 10, 2020. doi:10.1001/jamapediatrics.2020.2344

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2020 Portugal AM et al. JAMA Pediatrics.

Author Contributions: Drs Portugal and Bedford had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Drs Portugal and Bedford are co–first authors.

Concept and design: Portugal, Bedford, Smith.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Portugal, Bedford, Smith.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Portugal, Bedford, Smith.

Obtained funding: Smith.

Administrative, technical, or material support: Portugal, Cheung, Smith.

Supervision: Bedford, Gliga, Smith.

Conflict of Interest Disclosures: None reported.

Funding/Support: The TABLET Project was funded by a Philip Leverhulme Prize (PLP-2013–028) to Dr Smith. Dr Portugal was supported by an Economic and Social Research Council studentship. Dr Bedford was supported by a Sir Henry Wellcome Postdoctoral Fellowship and King’s Prize Fellowship (204823/Z/16/Z).

Role of the Funder/Sponsor: No funders had a role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Additional Contribution: We are very grateful to all the families who took part in the TABLET study. During this project’s inception and initial execution, the late Annette Karmiloff-Smith, PhD (Birkbeck, University of London), was a huge inspiration and champion. Thanks also to Luke Mason, PhD, and Irati Saez de Urabain, PhD, from Birkbeck, University of London for their work on programming and implementing the paradigms. No compensation was provided.

1. Rothbart MK, Posner MI. The developing brain in a multitasking world. Dev Rev. 2015;35:42-63. doi:10.1016/j.dr.2014.12.006
2. Christakis DA. The effects of infant media usage: what do we know and what should we learn? Acta Paediatr. 2009;98(1):8-16. doi:10.1111/j.1651-2227.2008.01027.x
3. Green CS, Bavelier D. Exercising your brain: a review of human brain plasticity and training-induced learning. Psychol Aging. 2008;23(4):692-701. doi:10.1037/a0014345
4. Kabali HK, Irigoyen MM, Nunez-Davis R, et al. Exposure and use of mobile media devices by young children. Pediatrics. 2015;136(6):1044-1050. doi:10.1542/peds.2015-2151
5. Bedford R, Saez de Urabain IR, Cheung CHM, Karmiloff-Smith A, Smith TJ. Toddlers’ fine motor milestone achievement is associated with early touchscreen scrolling. Front Psychol. 2016;7(181):1108. doi:10.3389/fpsyg.2016.01108
6. Kaldy Z, Kraper C, Carter AS, Blaser E. Toddlers with autism spectrum disorder are more successful at visual search than typically developing toddlers. Dev Sci. 2011;14(5):980-988. doi:10.1111/j.1467-7687.2011.01053.x