Computer-Aided Evaluation of Tumor Response Could Benefit RCC Patients

Computer-assisted tumor objective response evaluations for patients with metastatic renal cell carcinoma (RCC) can save time and reduce errors, according to a retrospective study (abstract 432) presented at the 2017 American Society of Clinical Oncology (ASCO) Genitourinary Cancers Symposium, held February 16–18 in Orlando, Florida.

“Computer-assisted response evaluation reduced errors and time of evaluation, indicating better overall effectiveness than manual tumor response evaluation methods that are the current standard of care,” said Brian Allen, MD, of Duke University Medical Center in Durham, North Carolina.

Objective tumor response assessment on CT images is used to gauge treatment outcomes in metastatic disease; reducing errors and ensuring timelier assessments could benefit patients. The RECIST v1.1 criteria are the most widely used approach to assessing tumor response; they rely on standard-of-care visual assessment of tumor length on two-dimensional CT images. However, unidimensional measurements fail to capture important biologic information about metastatic RCC tumors that is available in CT scan datasets, such as vasculature.

Other criteria are also used to assess objective tumor response. The Choi criteria use percent change in tumor length alongside CT density (x-ray attenuation) measures. The Morphology, Attenuation, Size and Structure (MASS) criteria capture information about tumor size and necrosis. However, these visual assessment–based criteria involve numerous manual steps that can yield slow turnaround times. For example, of the 15 steps in a RECIST v1.1 assessment, only archiving of annotated images is currently automated.
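The underlying arithmetic is simple but repeated across many lesions and time points, which is where manual transcription errors creep in. As a rough illustration only, here is a minimal sketch in Python: the thresholds follow the published RECIST v1.1 criteria, but the function names and the simplified handling of complete response are assumptions, not the CARE software.

```python
# Minimal sketch of the percent-change arithmetic behind a RECIST v1.1
# evaluation. Illustrative only: thresholds follow the published criteria,
# but this is not the CARE software, and complete response is simplified
# (RECIST v1.1 also requires nodal short axes to normalize to <10 mm).

def percent_change(current_mm: float, reference_mm: float) -> float:
    """Percent change in summed target-lesion diameters vs a reference."""
    return 100.0 * (current_mm - reference_mm) / reference_mm

def recist_response(baseline_mm: float, nadir_mm: float, current_mm: float) -> str:
    """Classify objective response from summed target-lesion diameters."""
    if current_mm == 0:
        return "CR"  # complete response (simplified; see note above)
    # Progression: >=20% increase over the nadir AND >=5 mm absolute increase
    if percent_change(current_mm, nadir_mm) >= 20 and (current_mm - nadir_mm) >= 5:
        return "PD"
    # Partial response: >=30% decrease from baseline
    if percent_change(current_mm, baseline_mm) <= -30:
        return "PR"
    return "SD"  # stable disease otherwise

# Example: baseline 82 mm, nadir 55 mm, current 60 mm -> stable disease
print(recist_response(82.0, 55.0, 60.0))  # SD (+9% vs nadir, -27% vs baseline)
```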

“Tumor size changes are often delayed and underestimate tumor response in patients with metastatic RCC treated with antiangiogenic targeted therapy,” Allen noted.

The researchers evaluated and compared manual tumor response evaluation methods (RECIST v1.1, Choi, and MASS) vs the computer-assisted response evaluation (CARE) system, an interactive software program that guides users and detects errors. CARE automates seven steps of a RECIST v1.1 evaluation: data transfer and recording; archiving of annotated images; calculation of total tumor burden; calculation of percent change vs baseline or nadir; determination of objective response; generation of tables and graphs; and archiving of tables, graphs, and key images to electronic medical records. For the remaining eight steps, CARE offers computer assistance and error detection.

“Errors are identified in real time and the user is prompted to correct them before moving on to the next step,” explained Allen. “A wide variety of tumor metrics are also automatically extracted from free-form regions of interest. All data are automatically archived and all calculations are automated.”

Readers review an instantly generated CARE summary report and can make changes to it.
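The error classes reported below, such as target lymph nodes selected despite a short axis under 1.5 cm, lend themselves to simple rule-based checks of this kind. The sketch below is hypothetical; the lesion record, rule set, and messages are illustrative assumptions rather than the actual CARE validations.

```python
# Hypothetical sketch of real-time, rule-based validation of the kind the
# CARE workflow performs; the data model and rules here are assumptions.

from dataclasses import dataclass

@dataclass
class Lesion:
    label: str
    is_lymph_node: bool
    long_axis_mm: float
    short_axis_mm: float

def validate_target_lesion(lesion: Lesion) -> list[str]:
    """Return errors to surface to the reader before the next step."""
    errors = []
    # RECIST v1.1 measures lymph nodes on the short axis, and target
    # nodes must be at least 15 mm in short axis at baseline.
    if lesion.is_lymph_node and lesion.short_axis_mm < 15:
        errors.append(f"{lesion.label}: short axis <1.5 cm; not a valid target node")
    if lesion.short_axis_mm > lesion.long_axis_mm:
        errors.append(f"{lesion.label}: short axis exceeds long axis; axes may be swapped")
    return errors

# Example: a 12-mm short-axis node chosen as a target is flagged immediately.
for message in validate_target_lesion(Lesion("LN-1", True, 22.0, 12.0)):
    print(message)
```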

Using digital CT images from 20 randomly selected patients who had received sunitinib in a phase III prospective trial, the research team retrospectively examined the effectiveness of manual and CARE RECIST v1.1 evaluations. Eleven trained academic radiologists from 10 different institutions served as CT image readers. Their error rates and interpretation times were compared for the manual and CARE evaluation methodologies.

Measurement errors were more common with manual evaluation than with CARE-guided evaluation, with statistically significant differences in detection and reader correction of lymph node long-axis measurements (P = .035) and in identification and selection of lymph nodes smaller than 1.5 cm in the short axis (P = .043). CARE also offered significant improvements over manual evaluation in data transfer and calculation steps, including errors in calculating percent change in tumor size (P = .043) and percent change in CT attenuation (P < .001).

With manual evaluation, 30.5% of patients had at least one error, said Allen; CARE evaluations were associated with no errors. CARE also improved efficiency through automation of steps in the evaluation process.

Mean per-patient evaluation time with CARE was about half that of the standard-of-care method (6.4 minutes vs 13.1 minutes; P < .001), noted Allen.
