A tumor response assessment core (TRAC) was created by the University of Michigan (U-M) Rogel Cancer Center to improve the data quality of imaging assessments and to reduce the investigator and coordinator/regulatory workload for clinical trials in oncology.1
The approach, which was first implemented at U-M in 2016, was created to limit distortions due to individual variability and unintentional bias on the part of the physicians interpreting the scans.2
“We care about our patients. We want them to do well. We want to keep them on trials. We want to believe our care is helping them,” study senior author Vaibhav Sahai, MBBS, medical oncologist at Michigan Medicine, said in a press release. “So, it can be sometimes hard to do an unbiased assessment – which is what the clinical trial and the patient both deserve.”
The workflow includes an image analyst who pre-reviews scans ahead of a joint review with a board-certified radiologist, and who then manually uploads the annotated data to the proprietary TRAC web portal.
In an assessment of TRAC, published in the Journal of the National Comprehensive Cancer Network, 49 participants with lung cancer were enrolled (53% female; median age, 60 years [range, 29-78 years]), and 2 patients were excluded for failing to meet criteria. Primary lung cancer histologies included in the study were non-small cell (n = 31), small cell (n = 8), squamous cell (n = 4), and bronchogenic (n = 4).
A linearly weighted kappa test for concordance between TRAC and the radiologist showed substantial agreement at 0.65 (95% CI, 0.46-0.85; standard error [SE], 0.10). Agreement was moderate at 0.42 (95% CI, 0.20-0.64; SE, 0.11) between TRAC and the oncologists, and only fair at 0.34 (95% CI, 0.12-0.55; SE, 0.11) between the oncologists and the radiologist.
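For readers unfamiliar with the statistic, linearly weighted kappa measures agreement between two raters on an ordinal scale, penalizing each disagreement in proportion to how far apart the two calls are; values of 0.21-0.40 are conventionally labeled fair, 0.41-0.60 moderate, and 0.61-0.80 substantial, matching the labels used above. The sketch below, using entirely hypothetical response calls rather than the study's data, shows how the statistic is computed:

```python
# Illustrative sketch (hypothetical reads, NOT study data): linearly weighted
# Cohen's kappa, the agreement statistic reported for TRAC vs. radiologist.
def linear_weighted_kappa(a, b, n_cat):
    """Linearly weighted Cohen's kappa for two lists of ordinal ratings 0..n_cat-1."""
    n = len(a)
    # Observed weighted disagreement: each pair penalized by ordinal distance.
    obs = sum(abs(x - y) for x, y in zip(a, b))
    # Expected weighted disagreement under chance, from the marginal totals.
    row = [a.count(c) for c in range(n_cat)]
    col = [b.count(c) for c in range(n_cat)]
    exp = sum(abs(i - j) * row[i] * col[j] / n
              for i in range(n_cat) for j in range(n_cat))
    return 1 - obs / exp

# Hypothetical ordinal response calls: 0=CR, 1=PR, 2=SD, 3=PD
trac = [0, 1, 1, 2, 2, 3, 1, 2, 3, 0]
rads = [0, 1, 2, 2, 2, 3, 1, 1, 3, 0]
print(round(linear_weighted_kappa(trac, rads, 4), 2))  # prints 0.82 for this toy set
```

Because the weights grow with ordinal distance, a PR-versus-SD disagreement costs less than a CR-versus-PD disagreement, which is why weighted kappa is preferred over unweighted agreement for ordered response categories.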
Notably, the TRAC approach also improved the efficiency for analysis of cancer clinical trials at U-M, with the turnaround time for tumor measurements decreasing from 33 days to 3 days.
“The mission of TRAC was to create independent, unbiased, and verifiable measurements of our patients’ response during clinical trials, and the results of our study show that this approach lives up to that goal,” Sahai said. “We published a detailed explanation of the workflow and the software we created in hopes of being a model for other cancer centers, and thus to help improve the accuracy of clinical trial results for patients everywhere.”
According to investigators, only a small number of cancer centers across the country have developed similar systems, with dedicated imaging cores and web-based platforms. These data potentially underscore the necessity for improved imaging criteria training for medical oncologists, consideration for radiologist interpretation, and/or development of an imaging core for response assessment.
The lack of uniformity in obtaining quantitative imaging assessments is partially due to budget constraints, a lack of standardized reporting, and, in some cases, inadequately detailed clinical trial protocols. A single image analyst (IA) per trial could serve as a catalyst in the relationship between radiologists, oncologists, and clinical staff, enabling improved reliability, reduced interreader variability, and a quicker turnaround time without experimenter bias.
“More robust data in turn provide greater confidence in determining therapeutic tumor response in clinical trials, which is crucial in this era of precision medicine with smaller cohorts,” the authors wrote.
1. Hersberger KE, Mendiratta-Lala M, Fischer R, et al. Quantitative Imaging Assessment for Clinical Trials in Oncology. Journal of the National Comprehensive Cancer Network. doi:10.6004/jnccn.2019.7331.
2. U-M Approach Could Improve the Accuracy of Cancer Clinical Trials [news release]. Ann Arbor, Michigan. Published December 26, 2019. newswise.com/articles/u-m-approach-could-improve-the-accuracy-of-cancer-clinical-trials?sc=mwhr&xy=10021790. Accessed January 3, 2020.