The Bonnie J. Addario Lung Cancer Foundation recently brought together more than 650 data scientists, engineers and designers from 68 countries to build open source tools to fight the world’s deadliest cancer.
During the Concept to Clinic Challenge, contributors built state-of-the-art algorithms for detecting and assessing lung nodules in CT scans, with the goal of bringing advances in machine learning into medical clinics. The foundation put up $100,000 in prizes for top contributors.
Willi Gierke, a student pursuing his master's degree in IT/systems engineering at Hasso Plattner Institute in Potsdam, Germany, was the top prizewinner, taking home more than $30,000.
The code developed during this challenge is openly available for anyone to learn from and use. As an open source project, the impact from the contributions made during this challenge should extend well beyond the boundaries of one prototype or repository. Lung cancer kills more people than breast, colon and prostate cancers combined.
"In previous coursework, I automatically segmented brain tumors in MRI scans using deep learning," Gierke explained. "I was amazed by how simple it was to achieve reasonable results without needing any domain experience. I read about the foundation’s challenge about extending and porting developed algorithms to a project that can be used by clinicians to detect lung cancer earlier, which saves lives."
"Deep learning models learn important features of the given data by themselves such that they can predict the output they were trained on."
Willi Gierke, Hasso Plattner Institute
Gierke said he could relate to the goal of this challenge: Even if you build algorithms that achieve human-level performance, they have a very small impact if no one puts them into practice. He also hoped to learn from the other contributors with different backgrounds from all over the world.
Beginning his effort, Gierke first read the technical reports, documentation and code bases of the top 10 algorithms from the 2017 Data Science Bowl, in which competitors built algorithms on patient lung scans provided by the National Cancer Institute. He then summarized how each approach works and estimated how easily the existing code could be incorporated into the project.
"This way, we were able to make educated decisions about what preprocessing techniques to use and which model ideas to extend," he said. "We used the publicly available database provided by the Lung Image Database Consortium and Image Database Resource Initiative. The database contains 1,018 thoracic CT scans. Four radiologists independently annotated nodules in each scan."
Apart from a pixel-perfect segmentation, each annotation also contains metadata such as the nodule’s estimated malignancy. Using data augmentation such as randomly zooming, shifting or rotating a scan by small amounts, Gierke explained, he was able to artificially expand the dataset so the models could learn from even more examples.
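Gierke's actual augmentation pipeline isn't shown in the article, but the transformations he describes can be sketched with `scipy.ndimage`. The rotation angles, shift ranges and zoom factors below are illustrative assumptions, not his settings:

```python
import numpy as np
from scipy.ndimage import rotate, shift, zoom

def augment_slice(img, rng):
    """Randomly rotate, shift and zoom a square 2-D CT slice by small amounts.

    Ranges below are illustrative assumptions, not the challenge settings.
    """
    # small random rotation; reshape=False keeps the original shape
    out = rotate(img, rng.uniform(-10, 10), reshape=False, order=1, mode="nearest")
    # small random shift, in pixels
    out = shift(out, rng.uniform(-4, 4, size=2), order=1, mode="nearest")
    # small random zoom, then crop or pad back to the input shape
    z = zoom(out, rng.uniform(0.9, 1.1), order=1)
    h, w = img.shape
    if z.shape[0] >= h:                       # zoomed in: center-crop
        top, left = (z.shape[0] - h) // 2, (z.shape[1] - w) // 2
        out = z[top:top + h, left:left + w]
    else:                                     # zoomed out: center-pad
        out = np.zeros_like(img)
        top, left = (h - z.shape[0]) // 2, (w - z.shape[1]) // 2
        out[top:top + z.shape[0], left:left + z.shape[1]] = z
    return out

rng = np.random.default_rng(0)
slice_ = np.random.rand(64, 64).astype(np.float32)
aug = augment_slice(slice_, rng)              # same shape, slightly transformed
```

Because each augmented copy is a plausible variation of a real scan, the model sees "more data" without any new patients being scanned.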
"Each of the top 10 algorithms used deep learning, which reveals great advances over traditional machine learning approaches in numerous fields of research," he said. "Instead of having to explicitly formulate features that characterize cancerous nodules – shape, density, location, margin – deep learning models learn important features of the given data by themselves such that they can predict the output they were trained on."
A deep learning model essentially applies a nonlinear function to the given input in order to generate the output, Gierke explained. In the crucial step, the error between the predicted output and the desired output is calculated, and the function is adapted so that the error decreases a little, he added.
"This is done hundreds of times for all scans until the error no longer decreases and the model learned its internal state and the important features," he said.
"The big disadvantage of deep learning is that it is data hungry and computationally expensive," he added. "Another challenge especially relevant for the medical domain is that it is very hard to interpret why a model predicted a certain outcome. Therefore, we can only evaluate the model by how well it performs on data it has never seen before."
Given a scan, the models Gierke built predict the centroids of nodules and the probability that each is cancerous; they also predict a complete Boolean segmentation mask of the scan, indicating whether each pixel represents tissue of a malignant nodule.
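One common way to obtain such a Boolean mask and a centroid from a model's per-pixel probabilities is simple thresholding. This is an assumed illustration, not necessarily Gierke's implementation, and the probability map below is synthetic:

```python
import numpy as np

# Hypothetical per-pixel probability map from a segmentation model.
prob = np.zeros((8, 8))
prob[3:6, 3:6] = 0.9              # a 3x3 "nodule" the model is confident about

mask = prob > 0.5                 # Boolean segmentation mask
ys, xs = np.nonzero(mask)
centroid = (float(ys.mean()), float(xs.mean()))   # centroid in pixel coordinates
print(int(mask.sum()), centroid)  # → 9 (4.0, 4.0)
```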
By the highly technical measures used to judge the success of machine learning and deep learning algorithms, the judges of the Bonnie J. Addario Lung Cancer Foundation Concept to Clinic Challenge declared Gierke’s algorithms a big success.
"It is hardly possible to interpret what a model learned and why it generates certain decisions," he said. "Therefore, we are supposed to infer the quality of the models from their performances on data they have not seen so far. For nodule segmentation, one metric we use is the Intersection over Union. It measures how much the predicted and the true nodule overlap, normalized by the union of both areas."
If this metric is close to one for many predicted nodules in unseen scans, it can be inferred that the model generalizes well, he explained. This and other metrics, he added, show satisfactory performance, comparable with the results documented in the technical reports of the Data Science Bowl participants.
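The Intersection over Union metric Gierke describes is straightforward to compute on Boolean masks; the sketch below uses illustrative toy masks rather than real segmentation output:

```python
import numpy as np

def iou(pred, true):
    """Intersection over Union of two Boolean segmentation masks."""
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0                # both masks empty: perfect agreement
    return np.logical_and(pred, true).sum() / union

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True   # 16-pixel "true" nodule
b = np.zeros((8, 8), bool); b[4:8, 4:8] = True   # prediction, 4 pixels overlap
print(round(float(iou(a, b)), 4))                # → 0.1429  (4 / 28)
```

An IoU of 1.0 means the predicted and true nodules overlap exactly; values near zero mean the prediction barely touches the real nodule.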
"However, we are certain that more data and more computation power can improve the algorithms even further," he said. "For this reason, it is possible for clinicians to manually label annotations in new lung scans and to specify the probability that an annotation is maliciously using the software. These new labels can then be used to further train the models such that they become even better and learn from mistakes since they are taught by a teacher."