Publication:
Use of Crowd Innovation to Develop an Artificial Intelligence–Based Solution for Radiation Therapy Targeting

Date

2019-05-01

Publisher

American Medical Association (AMA)

Citation

Mak, Raymond H., Michael G. Endres, Jin Hyun Paik, Rinat A. Sergeev, Hugo Aerts, Christopher L. Williams, Karim R. Lakhani, and Eva C. Guinan. "Use of Crowd Innovation to Develop an Artificial Intelligence-Based Solution for Radiation Therapy Targeting." JAMA Oncology 5, no. 5 (May 2019): 654–661.

Abstract

Importance: Radiation therapy (RT) is a critical cancer treatment, but the existing radiation oncologist workforce does not meet growing global demand. One key physician task in RT planning involves tumor segmentation for targeting, which requires substantial training and is subject to significant inter-observer variation.

Objective: To determine whether crowd innovation could be used to rapidly produce artificial intelligence (AI) solutions that replicate the accuracy of an expert radiation oncologist in segmenting lung tumors for RT targeting.

Design: We conducted a 10-week, prize-based, online, three-phase challenge (prizes totaled $55,000). A well-curated dataset, including CT scans and lung tumor segmentations generated by an expert for clinical care, was used for the contest (CT scans from 461 patients; median 157 images per scan; 77,942 images in total; 8,144 images with tumor present). Contestants were provided a training set of 229 CT scans with accompanying expert contours to develop their algorithms, and were given feedback on their performance throughout the contest, including from the expert clinician.

Main Outcome: AI algorithms generated by contestants were automatically scored on an independent dataset withheld from contestants, and performance was ranked using quantitative metrics that evaluated the overlap of each algorithm’s automated segmentations with the expert’s segmentations. Performance was further benchmarked against human expert inter-observer and intra-observer variation.
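
The abstract does not name the specific overlap metrics used for scoring. The Dice similarity coefficient is the standard overlap score for comparing an automated segmentation against an expert contour, so the minimal Python sketch below illustrates how such a score could be computed; the function and the toy masks are illustrative assumptions, not the challenge's actual scoring code.

import numpy as np

def dice_coefficient(pred: np.ndarray, expert: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    Both masks must share the same shape (e.g., one CT slice or volume).
    Returns 1.0 for perfect overlap and 0.0 for no overlap.
    """
    pred = pred.astype(bool)
    expert = expert.astype(bool)
    intersection = np.logical_and(pred, expert).sum()
    total = pred.sum() + expert.sum()
    if total == 0:
        # Both masks empty (e.g., a slice with no tumor): count as agreement.
        return 1.0
    return 2.0 * intersection / total

# Toy example: a hypothetical algorithm's mask shifted slightly from the expert's.
expert_mask = np.zeros((64, 64), dtype=bool)
expert_mask[20:40, 20:40] = True
algo_mask = np.zeros((64, 64), dtype=bool)
algo_mask[22:42, 22:42] = True
print(f"Dice: {dice_coefficient(algo_mask, expert_mask):.3f}")

A challenge of this kind would typically rank submissions by the mean overlap score across the held-out scans; complementary distance-based metrics (e.g., Hausdorff distance) are also common in segmentation benchmarking.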

Keywords

General Medicine

Terms of Use

Metadata Only
