
Open Innovation Contestants Build AI-Based Cancer Tool

Radiation oncologists are few in number, especially for patients who live nowhere near a cancer facility. Could artificial intelligence be used to deliver an oncologist's skills for radiation therapy? Karim R. Lakhani discusses a unique open innovation experiment. Eva Guinan, MD, faculty lead for Harvard Catalyst's Translational Innovator program, is the senior author on the related paper.

Photo: iStock

Radiation therapy can be lifesaving for lung cancer patients. The first step, though, is having a trained, skilled oncologist who knows how to best segment or mark off the tumor for radiation. This expertise is vital for targeting the tumor and controlling the radiation’s toxicity. Segmenting is difficult and time-consuming even in countries with sufficient medical resources, let alone in developing countries, where the need is great but fewer personnel have the time and training.

Harvard researchers wondered: Could programmers develop artificial intelligence solutions for segmenting tumors like a trained oncologist? How about setting up an online competition to find out?

An article being published April 18 in JAMA Oncology, a journal of the American Medical Association, describes the crowdsourcing contest and the potential breakthrough for sharing medical knowledge globally. The article, “Use of Crowd Innovation to Develop an Artificial Intelligence-Based Solution for Radiation Therapy Targeting,” is co-authored by Harvard Business School Professor Karim R. Lakhani and seven colleagues with expertise in radiation therapy, oncology, and crowd innovation.

It tells how 34 contestants competing anonymously online over 10 weeks reviewed almost 78,000 images and submitted 45 algorithms. The best five, which shared prizes totaling $55,000, were assessed to perform as well as trained oncologists. Among the study’s conclusions: “A combined crowd innovation and AI approach rapidly produced automated algorithms that replicated the skills of a highly trained physician for a critical task in radiation therapy.”
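
For readers curious about the mechanics of such a comparison: segmentation contests typically score each algorithm's output against expert-drawn contours using an overlap metric. The sketch below is illustrative only, not the study's scoring code; it computes the Dice similarity coefficient, a standard overlap measure in medical image segmentation, for a hypothetical submitted mask and a hypothetical expert mask (the function name and toy data are ours, not from the paper).

import numpy as np

def dice_coefficient(submitted_mask: np.ndarray, expert_mask: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks (1 = tumor voxel)."""
    submitted = submitted_mask.astype(bool)
    expert = expert_mask.astype(bool)
    intersection = np.logical_and(submitted, expert).sum()
    total = submitted.sum() + expert.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 2D slice: the hypothetical submission overlaps most of the expert contour.
expert = np.zeros((8, 8), dtype=int)
expert[2:6, 2:6] = 1        # expert-drawn tumor region
submitted = np.zeros((8, 8), dtype=int)
submitted[3:7, 2:6] = 1     # algorithm's prediction, shifted down by one row
print(f"Dice = {dice_coefficient(submitted, expert):.3f}")  # prints Dice = 0.750

A Dice score of 1.0 means perfect agreement with the expert contour, while values near 0 mean little overlap; a contest leaderboard can rank submissions by such scores averaged over a held-out set of cases.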

In an email interview, we asked Lakhani, the Charles Edward Wilson Professor of Business Administration, and colleagues Dr. Eva Guinan and Jin Paik more about the study and results. They are referred to as the LISH team below.

Martha Lagace: How did you learn about the problem and connect with like-minded medical colleagues?

LISH team: The Laboratory for Innovation Science at Harvard (LISH) has a long history of working together with Harvard Catalyst at Harvard Medical School to identify interesting innovation and process problems in translational biomedical areas. We have conducted many research projects together and published a good number of papers in peer-reviewed and practitioner journals. Dr. Guinan is one of our co-directors at LISH and she is a professor in radiation oncology at HMS and the Dana-Farber Cancer Institute.

Part of the lab’s mission is to address problems with real-world applications. This project was a natural outgrowth of that relationship and rapidly incorporated the expertise and perspectives of Dr. Raymond Mak, a radiation oncologist who became an important member of the team for this study.

In the process of identifying problems in healthcare, which we do frequently at LISH, we found that exploration in this area had the potential for multiple breakthroughs related to radiation oncologists' specific tumor segmentation task, while also addressing technological issues (e.g., reframing the problem and conducting sequential competition phases).

Lagace: What made this problem in cancer treatment suitable for crowd innovation?

LISH team: LISH has a longstanding interest in and experience with solving problems in computer vision, image analysis, and advanced analytics through crowdsourcing. We have extended our work and research into the field of medical image analysis. Rather than focus on diagnostic improvements, where significant AI and algorithmic work has already been published and implemented, we wanted to direct our interests to a therapeutic problem.

The highly technical, image-related work of radiation oncology planning was well suited to crowd innovation. Dr. Mak, a thoracic radiation oncology expert in the Brigham and Women’s Hospital/Dana-Farber Cancer Institute radiation oncology program, was a motivated and creative collaborator in exploring, implementing, and evaluating this initiative. While other medical image analysis contests have been conducted, such as the Dream Challenge (which involves experts as solvers), we believe this work is unique in that it specifically recruited non-domain experts via the Topcoder platform.

Lagace: How did you design and carry out the study? Why Topcoder?

LISH team: Topcoder has a community of more than 1 million members, many of whom regularly compete on imaging problems from all fields and across all industries.

Through our connection with Harvard Catalyst and the Dana-Farber Cancer Institute, we identified the potential problem and then solidified the relationship with Dr. Mak. We then identified and addressed regulatory and practical issues like consent, access to images, and annotation and development of the video teaching tool.

LISH researchers specialize in atomizing complex problems into granular tasks and finding the appropriate platforms for execution. In that sense, the tasks were defined for data scientists and algorithm developers who do not necessarily have the domain knowledge but possess the skills necessary for algorithm development.

We met regularly during the design phase to define and redefine outcomes. The challenge was divided into three phases to provide flexibility to adjust based on the outcomes of each phase. We altered our strategy as needed to move between competition and collaboration, which helped remedy design inefficiencies.

Lagace: As you write in JAMA Oncology, the study’s results could help wherever in the world there is a shortage of time and skilled personnel able to do accurate planning for radiation therapy. How would skill transfer work for developing countries?

LISH team: Assistance with radiation planning is only relevant at sites where a radiation machine is present. This selection factor rules out places where resource constraints (e.g., a lack of reliable electricity) would be limiting. Where sufficient infrastructure exists, and it does in many otherwise quite limited settings, the transfer of this AI tool could facilitate the efforts of professionals there by decreasing the time required per case. It provides a good solution where expertise is limited, for example, if no one on staff has significant thoracic experience. It also creates the potential for institutional interaction, and it has a potential impact on quality assurance, training, and improved performance, interpretation, and correlation of clinical trials.

Lagace: Why do contestants get involved?

LISH team: Contestants on platforms like Topcoder and Kaggle often choose to remain anonymous and are known publicly only by their handles. In these contests, the problem owner (in this case, LISH) pays only for winning solutions, with the platform acting as the broker. In simple terms, the intellectual property is transferred from the winner(s) to the platform, and then on to the end user or problem owner seeking solutions.

The main motivation is financial, but other factors contribute. Some contestants like belonging to a community. Others like recognition via ranking on the platform or beating “the best of the best.” Recall that most contestants lose any given contest, yet they continue to compete.

Lagace: What’s next for crowdsourcing contests in addressing biomedical problems?

LISH team: There is an endless array of potential tools for better diagnosis by image analysis. However, there are reasons why such tools have been relatively slow to develop and be implemented. The impediments to progress include obtaining sufficient well-annotated images for learning or competition, regulatory issues related to image and data de-identification and access, and the development of contest designs that motivate participation and provide information structures that support creative re-imagining of the problem and its solution.

Perhaps the biggest difficulty resides in the availability of gold standards. Competitions are based on comparison of a submission to some objective correct answer. In many medical settings, the relationship between an image and an accurate diagnosis can be difficult to ascertain. For example, many pathologic diagnoses lack a sine qua non but instead rely on a synthesis of many findings into a consistent, but not definitive, diagnosis.

Lagace: How do crowdsourcing contests fit in or not with the usual pace of innovation in medicine or oncology in particular?

LISH team: The pace is quite fast relative to what has been reported in the literature by Google and others. AI is already being used as an adjunct to diagnosis in some settings. However, the bar for US Food & Drug Administration approval of a diagnostic test is appropriately high, and “products” will need to meet this bar whether they are based on image analysis or on analytics from electronic health records.

Lagace: Looking to the future, what other challenges, medical or otherwise, could open source innovation help with?

LISH team: LISH has collaborated with elite scientific institutions for over a decade on bringing open innovation approaches to their internal problems. We have been fortunate to have NASA, the Broad Institute, the Scripps Research Institute, Harvard Medical School, and various federal agencies as partners in our work to solve tough technical challenges while simultaneously studying the best design of crowd programs. Some of our solutions are even working on the International Space Station! We have learned that almost any technical challenge can benefit from an open innovation approach. The key is to develop a problem statement that is accessible and comprehensible to individuals outside the scientific domain of the problem. In addition, the sponsors need to develop an a priori view of how solutions will be evaluated.

Both of these activities are non-trivial, and even the best scientific minds struggle with clearly articulating problems and defining solution criteria. However, our work has shown that this is possible and teachable. Even if a viable solution is not developed through a challenge, the fact that multiple parallel solving attempts occurred informs the sponsor about feasible paths to a potential solution, or prompts a complete reframing or reconsideration of the problem. Thus we encourage many more organizations to seriously include open innovation in the portfolio of approaches they use to solve technical challenges.

Lagace: For business leaders in our audience, what can they learn from this oncology project about problem-solving?

LISH team: Open innovation is really about separating problem definition and solution assessment from the solving phase. In most cases, innovators engage simultaneously in the definition, solution, and assessment stages and iteratively define the problem and its solution. Open innovation breaks this vertical integration and forces problem holders to think critically about the problem definition. We have found that the problem definition phase, although tough and unnatural to most innovators, brings significant clarity to the problem holders and can assist even in their own solution-finding approaches.

Lagace: What’s next for you?

LISH team: LISH would like to launch more crowdsourcing contests with real impact in healthcare. We have two programs underway with the Connectivity MAP at the Broad Institute for accelerating drug discovery, and one program with Massachusetts General Hospital on genomic sequencing. Our goal is to tackle projects that will scale and have impact similar to our work at NASA’s Center of Excellence for Collaborative Innovation, which launches contests for NASA and the federal government at scale. NASA has completed over 350 crowdsourcing projects, many with the assistance of our lab.

Originally published on Harvard Business School’s Working Knowledge.
