Research featured in the special focus issue on Imaging in Diagnosis and Treatment of Lung Cancer outlines standardized approaches to measuring and comparing tumor size, as well as to validating the accuracy of such measurements under defined settings. Proper validation of these tools is a critical new area of research, as important new applications for them are being explored in pharmaceutical drug development.
Lung cancer is the leading cause of cancer death globally, according to the World Health Organization. Every year 1.3 million people die from the disease, and someone in the United States diagnosed with lung cancer today will typically have only a 15 percent chance of being alive five years from now.
Many lives can be saved thanks to modern medicine, but one of the critical issues for effective treatment is the ability of doctors to accurately image tumors in the lung. Before, during, and after treatment, radiologists scan the lungs and, depending on what these scans show, they diagnose the cancer and shape the treatment accordingly. The four research papers in the current issue of Optics Express, which are highlighted below, address issues that relate to the development of this new field of quantitative imaging within the challenging clinical problem of lung cancer therapeutics.
The use of precise quantitation tools may allow for more rapid evaluation of the success or failure of drug candidates in clinical trials. Regulatory agencies such as the U.S. Food and Drug Administration (FDA) require consistent, objective performance for measurement tools. The open-source measurement tools described in Optics Express can be applied to analyze the growing number of new public data sets and develop computer algorithms that can automatically calculate the change in the 3-D volume of a tumor.
The research takes advantage of Interactive Science Publishing, a new paradigm for the publication of scientific images developed by OSA. This initiative, developed in partnership with the National Library of Medicine at the National Institutes of Health, the U.S. Air Force Office of Scientific Research, and Kitware Inc., allows readers to view and interact with underlying 2-D and 3-D source data, such as CT scans. It expands upon traditional research by allowing scientists to objectively compare the performance of different technologies. To achieve this, ISP provides readers with a free 3-D visualization application that obtains images from a Web-based image archive called MIDAS (http://www.midas.osa.org).
The Optics Express focus issue on lung cancer imaging is part of an ongoing collaboration between OSA and the Prevent Cancer Foundation that has convened a series of workshops to accelerate progress in this area. The issue editors are James L. Mulshine, vice president for research at Rush University; Thomas M. Baer, executive director at the Stanford Photonics Research Center; and Rick Avila, senior director of healthcare solutions at Kitware, Inc. The special issue builds on background information on the application of image processing approaches in lung cancer drug development published in a previous OSA monograph called Quantitative Imaging Tools for Lung Cancer Drug Assessment.
RESEARCH PAPER HIGHLIGHTS:
Volumetrics Detects Tiny Tumor Size Differences
The current procedure radiologists use to measure changes in tumor size, called Response Evaluation Criteria in Solid Tumors (RECIST), can reliably detect only changes in volume greater than approximately 70 percent. To track how a tumor responds to treatment, doctors compare CT scans taken months apart and look to see whether the maximum diameter of its cross-section has shrunk.
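The roughly 70 percent figure follows from simple geometry: because volume scales with the cube of diameter, a modest change in a nodule's maximum diameter implies a much larger change in its volume. A minimal sketch, assuming an idealized spherical nodule (real lesions are irregular, which is part of the motivation for volumetric methods):

```python
import math

def sphere_volume(diameter):
    """Volume of a sphere given its diameter: V = (pi/6) * d^3."""
    return math.pi / 6.0 * diameter ** 3

def volume_change_from_diameter_change(diameter_ratio):
    """Fractional volume change implied by a fractional diameter change.

    E.g. a diameter_ratio of 1.20 (a 20% increase in maximum diameter)
    yields 1.20**3 - 1 = 0.728, i.e. about a 73% increase in volume.
    """
    return diameter_ratio ** 3 - 1.0

# A 20% diameter increase corresponds to roughly a 73% volume increase:
print(round(volume_change_from_diameter_change(1.20), 3))
```

This illustrates why a diameter-based criterion is coarse: large swings in volume can hide behind small changes in a single cross-sectional measurement.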
Recent studies have explored a more sensitive image measurement technique that could shorten this interval by providing more detail: volumetric CT scanning, which images the tumor at high resolution so that its 3-D volume can be calculated automatically. Emerging evidence suggests that "volumetric" measurements of lung nodule size could provide a better way to assess a tumor by accounting for its shape and factoring out irregularities in its growth.
Zachary Levine of the National Institute of Standards and Technology (NIST) in Gaithersburg, Md., and his colleagues believe that 3-D volumetric approaches can detect smaller changes in tumor size, and they report the data to back up this claim in their Optics Express paper.
Levine and his team fabricated, and then scanned, 314 round objects with systematic variations in size and shape, creating 3-D images built of "voxels" that were used to calculate each object's volume. Each object's surface was also measured mechanically on a Coordinate Measuring Machine in NIST's Precision Engineering Division. Their results suggest that the volumetric approach could reliably detect a 5 percent difference in volume—at least in this idealized setup.
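The voxel-based volume calculation at the heart of this approach can be sketched very simply: count the voxels a segmentation marks as inside the object and multiply by the physical volume of one voxel. The names and the toy mask below are illustrative, not taken from the NIST paper:

```python
def voxel_volume_estimate(mask, voxel_spacing_mm):
    """Estimate object volume (mm^3) from a binary segmentation mask.

    mask is a nested list [z][y][x] of 0/1 values (1 = inside the object);
    voxel_spacing_mm is the isotropic edge length of one voxel.
    """
    voxel_volume = voxel_spacing_mm ** 3
    count = sum(voxel for slab in mask for row in slab for voxel in row)
    return count * voxel_volume

# Toy 3x3x3 mask containing a 2x2x2 "object" of ones, at 0.5 mm spacing:
mask = [[[1 if x < 2 and y < 2 and z < 2 else 0
          for x in range(3)] for y in range(3)] for z in range(3)]
# 8 voxels * (0.5 mm)^3 = 1.0 mm^3
print(voxel_volume_estimate(mask, 0.5))
```

In practice the hard part is producing the mask: segmentation errors at the object's boundary, not the counting step, dominate the measurement variability.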
"On simple objects, we're showing that there is about a factor of 10 less variability associated with the measurement process," says Levine. Though he has no clinical data, Levine suspects that size changes of 20 percent could be realistically detected in actual lung nodules in the body.
This sensitivity, he hopes, could potentially speed up the drug trials on cancer treatments and the processes of patient diagnosis and treatment.
Levine's paper is accompanied by interactive 3-D visualizations of the object scans made possible by OSA's ISP technology.
New Image Processing Algorithms and Public Data Sets
The path to developing new approved imaging biomarkers for lung cancer will require greater cooperation between the research, regulatory, and pharmaceutical communities. That's the perspective of the Quantitative Imaging Biomarkers Alliance (QIBA), an initiative of the Radiological Society of North America (RSNA) to improve the practicality of imaging biomarkers by reducing the variability of measurements.
New public data sets and algorithms that use this shared data are the subject of the Optics Express paper by Andrew Buckler of Buckler Biomedical LLC and a team of investigators from QIBA.
From providing a common framework for researchers to compare their algorithms to giving researchers access to larger data sets, "the increasing existence of data from cooperative efforts that can be evaluated by multiple parties enables a new generation of developments that were not possible when individuals maintained their own data sets," says Buckler.
The paper reviews early programs that collect and use these data sets, which range from scans of mock tumors in artificial torsos to clinical data from lung cancer patients. For example, QIBA serves as a steward for the NCI-funded Reference Image Database to Evaluate Response to Therapy (RIDER), a recently-developed library of clinical scans from patients with lung cancer. In one study, 32 patients were scanned twice within 15 minutes—allowing researchers to test the minimum detectable changes in tumor size. Other RIDER images show patient lesions scanned repeatedly after longer time periods. These clinical scans are available for viewing and analysis in the paper's accompanying ISP data sets.
A series of research projects based on these and other clinical data have already shown that computer algorithms can reach conclusions similar to those of trained radiologists. They have also revealed that the thickness of CT scan slices used to piece together a 3-D lesion image can have a large impact on the accuracy of lesion volume calculations.
Multiple studies—some planned, some underway, and some completed—have recruited both radiologists and software developers to analyze a series of thinly-sliced 3-D images and test the efficacy of new computer algorithms.
Sharing Phantom Data
Researchers at the FDA Center for Device and Radiological Health (CDRH) in Silver Spring, Md., have developed a publicly available data set currently containing 480 CT scans (with a total of more than 5,400 scans expected in the next year) of synthetic lung nodules—small objects of various sizes and shapes embedded in a urethane and epoxy resin "phantom" containing mock human anatomy, including lungs. The phantom and the synthetic nodules are built to have radiographic properties similar to tissues in the human body.
This project is one effort to create public data sets to support the efforts of QIBA.
Phantom setups like this one lack the true complexity of the actual human body, but they provide what the authors of the Optics Express paper call a "stepping stone" for testing image-processing computer algorithms.
"If an algorithm doesn't do well on this data set, it likely won't do well on clinical data," says CDRH scientist Marios Gavrielides.
An advantage of phantom studies is that—unlike images of actual tumors in human bodies—researchers know the true sizes of the objects being scanned, allowing for the error of a given technique to be measured.
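Because the true volume of each synthetic nodule is known by construction, measurement error reduces to a direct comparison. A minimal sketch with invented numbers (not values from the CDRH study):

```python
def percent_error(measured, true):
    """Signed percent error of a measured volume against the known truth."""
    return 100.0 * (measured - true) / true

true_volume_mm3 = 500.0      # known by construction for a phantom nodule
measured_volume_mm3 = 535.0  # hypothetical output of a sizing algorithm

# (535 - 500) / 500 = 7.0 percent overestimate
print(round(percent_error(measured_volume_mm3, true_volume_mm3), 1))
```

With clinical images no such ground truth exists, which is why phantom data sets are valuable for separating algorithm error from biological variability.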
The library of images provides a common ground for researchers to test the efficacy of different image processing approaches on an apples-to-apples basis. It can be viewed interactively as a series of 2-D scans or a 3-D composite using OSA's ISP technology.
Gavrielides also hopes that the data will help scientists begin to tease out and compare all of the factors that can affect the accuracy and precision of volumetric CT scanning—from the reliability of the scanner to characteristics of the nodule itself.
Since the data set was made available in February, it has been downloaded hundreds of times and used for algorithm competitions sponsored by Cornell University and NIST.
The article, "A resource for the development of methodologies for lung nodule size estimation: database of thoracic CT scans of an anthropomorphic phantom" by Marios Gavrielides et al. can be accessed at: http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-18-14-15244
Going Open Source Against Cancer
Ricardo Avila and colleagues at Kitware, Inc., in Clifton Park, N.Y., have created a software package for analyzing medical scans called the Lesion Sizing Toolkit and a new algorithm specifically for lung lesion sizing. This algorithm has been tested on the FDA phantom scans and RIDER clinical data sets.
The volume-calculating algorithm dissects the biological features shown in CT scans, detecting lung walls, blood vessels, background tissue, and the edges of a lesion to measure tumor size.
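A greatly simplified stand-in for one step of such a pipeline is seeded region growing: threshold the CT intensities and keep only the voxels connected to a seed placed inside the lesion. The actual Lesion Sizing Toolkit algorithm is far more sophisticated (vessel and lung-wall detection, edge refinement); this sketch, with an invented toy image, only illustrates the connectivity idea:

```python
from collections import deque

def grow_region(image, seed, threshold):
    """Return the set of voxels >= threshold that are 6-connected to seed."""
    depth, height, width = len(image), len(image[0]), len(image[0][0])
    region, queue = set(), deque([seed])
    while queue:
        z, y, x = queue.popleft()
        if (z, y, x) in region:
            continue  # already visited
        if not (0 <= z < depth and 0 <= y < height and 0 <= x < width):
            continue  # outside the image
        if image[z][y][x] < threshold:
            continue  # background voxel, not part of the lesion
        region.add((z, y, x))
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            queue.append((z + dz, y + dy, x + dx))
    return region

# Toy 4x4x4 "scan": a bright 2x2x2 lesion (value 100) on dark background (0),
# plus one bright voxel elsewhere that is not connected to the seed.
image = [[[0] * 4 for _ in range(4)] for _ in range(4)]
for z in range(2):
    for y in range(2):
        for x in range(2):
            image[z][y][x] = 100
image[3][3][3] = 100  # disconnected bright voxel, excluded from the region

lesion = grow_region(image, seed=(0, 0, 0), threshold=50)
print(len(lesion))  # 8 connected voxels; multiply by voxel volume for size
```

The disconnected bright voxel is the toy analogue of a nearby blood vessel or chest wall: structures with lesion-like intensity that a sizing algorithm must exclude to avoid overestimating volume.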
Avila's algorithm is meant to provide a reference standard for the community—a yardstick against which other groups can measure the effectiveness of their own approaches.
"If someone else comes along and says 'I've got an algorithm that gets a certain level of performance,' the person could compare it to an algorithm that everybody knows that is completely transparent," says Avila.
The algorithm's error rate—almost 35 percent when thick-slice CT data is studied, falling to 14 percent when thin-slice CT is used—has already provided some lessons about the difficulties of dealing with CT data and the importance of using very thin CT scan slices.
But the most important aspect of this project, the researchers emphasize, is that—unlike most algorithms, which are kept secret and proprietary—its code is open source.
"At a scientific conference, we often wind up just talking about the results, not the science," says Avila. "One solution to this is to work in a more open-source domain."
Built on top of the National Library of Medicine's Insight Toolkit (http://www.itk.org/), a popular open-source platform for image processing, the Lesion Sizing Toolkit and the lung-specific algorithm are meant to be taken up by other groups, modified, and improved.
The interactive ISP technology included with this research paper allows the reader to run the interactive 3-D lesion sizing algorithm and test its performance on provided data sets.