Advances in machine learning, particularly in the subfield of deep learning, have produced algorithms that perform image-based diagnostic tasks with accuracy approaching or exceeding that of trained physicians. Despite these well-documented successes, machine learning algorithms remain vulnerable to cognitive and technical bias,1 including bias introduced when an algorithm is trained on data of insufficient quantity or diversity.2,3 We investigated an understudied source of systemic bias in clinical applications of deep learning: the geographic distribution of the patient cohorts used to train algorithms.