Original Investigation
April 18, 2019

Use of Crowd Innovation to Develop an Artificial Intelligence–Based Solution for Radiation Therapy Targeting

Author Affiliations
  • 1. Department of Radiation Oncology, Brigham and Women’s Hospital/Dana-Farber Cancer Institute/Harvard Medical School, Boston, Massachusetts
  • 2. Laboratory for Innovation Science at Harvard, Harvard University, Boston, Massachusetts
  • 3. Institute for Quantitative Social Science, Harvard University, Cambridge, Massachusetts
  • 4. Harvard Business School, Boston, Massachusetts
  • 5. Department of Radiology, Brigham and Women’s Hospital, Boston, Massachusetts
  • 6. The National Bureau of Economic Research, Cambridge, Massachusetts
JAMA Oncol. 2019;5(5):654-661. doi:10.1001/jamaoncol.2019.0159
Key Points

Question  Can crowd innovation be used to rapidly prototype artificial intelligence (AI) solutions that automatically segment lung tumors for radiation therapy targeting, and can AI performance match that of expert radiation oncologists for this time- and training-intensive task?

Findings  A 3-phase, prize-based crowd innovation challenge over 10 weeks, including 34 contestants who submitted 45 algorithms, identified multiple AI solutions that replicated the accuracy of an expert radiation oncologist in targeting lung tumors and performed the task more rapidly.

Meaning  On-demand crowdsourcing methods can be used to rapidly prototype AI algorithms that replicate and transfer expert skill and knowledge to under-resourced health care settings and improve the quality of radiation therapy globally.

Abstract

Importance  Radiation therapy (RT) is a critical cancer treatment, but the existing radiation oncologist workforce does not meet growing global demand. One key physician task in RT planning involves tumor segmentation for targeting, which requires substantial training and is subject to significant interobserver variation.

Objective  To determine whether crowd innovation could be used to rapidly produce artificial intelligence (AI) solutions that replicate the accuracy of an expert radiation oncologist in segmenting lung tumors for RT targeting.

Design, Setting, and Participants  We conducted a 10-week, prize-based, online, 3-phase challenge (prizes totaled $55 000). A well-curated data set, including computed tomographic (CT) scans and lung tumor segmentations generated by an expert for clinical care, was used for the contest (CT scans from 461 patients; median 157 images per scan; 77 942 images in total; 8144 images with tumor present). Contestants were provided a training set of 229 CT scans with accompanying expert contours to develop their algorithms and given feedback on their performance throughout the contest, including from the expert clinician.

Main Outcomes and Measures  The AI algorithms generated by contestants were automatically scored on an independent data set that was withheld from contestants, and their performance was ranked using quantitative metrics that evaluated the overlap of each algorithm’s automated segmentations with the expert’s segmentations. Performance was further benchmarked against human expert interobserver and intraobserver variation.
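As a concrete illustration (a minimal sketch in Python, not code from the study; the function and variable names are illustrative), the Dice coefficient reported in the Results is one such overlap metric and can be computed from two binary tumor masks as follows:

import numpy as np

def dice_coefficient(pred_mask, expert_mask):
    # Dice = 2 * |A intersection B| / (|A| + |B|) for two binary masks.
    pred = pred_mask.astype(bool)
    expert = expert_mask.astype(bool)
    intersection = np.logical_and(pred, expert).sum()
    total = pred.sum() + expert.sum()
    if total == 0:
        return 1.0  # Convention: two empty masks are treated as perfect agreement.
    return 2.0 * intersection / total

A score of 1.0 indicates perfect overlap with the expert contour and 0.0 indicates no overlap.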

Results  A total of 564 contestants from 62 countries registered for this challenge, and 34 (6%) submitted algorithms. The automated segmentations produced by the top 5 AI algorithms, when combined using an ensemble model, had an accuracy (Dice coefficient = 0.79) that was within the benchmark of mean interobserver variation measured between 6 human experts. For phase 1, the top 7 algorithms had average custom segmentation scores (S scores) on the holdout data set ranging from 0.15 to 0.38, and suboptimal performance using relative measures of error. The average S scores for phase 2 increased to 0.53 to 0.57, with a similar improvement in other performance metrics. In phase 3, performance of the top algorithm increased by an additional 9%. Combining the top 5 algorithms from phase 2 and phase 3 using an ensemble model yielded an additional 9% to 12% improvement in performance, with a final S score reaching 0.68.
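The abstract does not specify how the ensemble model combined the top algorithms; a per-voxel majority vote is one simple way such an ensemble could be formed. The sketch below (Python; illustrative names, an assumption rather than the study’s method) shows that approach:

import numpy as np

def majority_vote_ensemble(masks):
    # Combine binary segmentation masks from several algorithms:
    # a voxel is labeled tumor if more than half of the algorithms label it tumor.
    stacked = np.stack([m.astype(bool) for m in masks], axis=0)
    votes = stacked.sum(axis=0)
    return votes > (len(masks) / 2)

With 5 input masks, a voxel would be included in the ensemble segmentation when at least 3 algorithms agree on it.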

Conclusions and Relevance  A combined crowd innovation and AI approach rapidly produced automated algorithms that replicated the skills of a highly trained physician for a critical task in radiation therapy. These AI algorithms could improve cancer care globally by transferring the skills of expert clinicians to under-resourced health care settings.
