Defining the Value Research Agenda in Radiation Oncology

Oncology, Volume 31, Issue 4


In his excellent review of the determination of value in radiation oncology, Dr. Konski expertly summarizes the key components of value, as well as their nuances and differing dimensions.[1] Whether one believes in socialized medicine or a completely free-market approach, it is an undeniable fact that resources are becoming progressively limited while healthcare needs are increasing in both quantity and cost. Thus, in order for a society or healthcare delivery system to successfully deliver care to its members, high-value treatments must be emphasized and low-value therapies must be phased out.

Unfortunately, while it is easy to make this simple prognostication, the critical question is where to begin in terms of efforts to identify high-value radiation treatment approaches. On the most basic level, the radiation oncology community must invest in cost assessments in order to truly understand the cost of radiation therapy. We know the fee that a department charges for various treatments and procedures, and we know how much money a payer will reimburse, but what are the actual dollar costs to deliver a course of radiation therapy? Some excellent accounting work using activity-based costing has been done in an effort to answer this question, but cost analysis in the setting of radiation oncology must be broadened to include additional disease sites and treatment modalities (eg, stereotactic radiotherapy, proton therapy). For analyses taking the societal perspective, we need to move past using Medicare reimbursement (although it may be a reasonable proxy in some scenarios) and employ validated accounting methodology to understand costing in the field. A similar knowledge gap exists in the area of patient-borne costs (eg, costs related to loss of productivity, out-of-pocket transportation expenses) over the short and long term; these costs need to be carefully quantified, especially when performing analyses of competing local therapies. Although landmark cooperative group studies are actively being performed, the aforementioned types of patient-borne costs are rarely, if ever, captured. It takes substantial patient engagement (and probably patient incentives) to properly record and analyze these data, but the information generated is crucial to understanding the societal impact of any given intervention.

Similarly, long-term prospective outcome assessments are vital. As someone who is interested in the formal cost-effectiveness space, I believe we should invest our efforts in performing short- and long-term utility assessment. At the end of the day, despite understandable objections, the quality-adjusted life-year (QALY) is the “lingua franca” of cost-effectiveness analysis, and it is a benchmark against which multiple modalities can be compared. It is encouraging that many prospective trials now include a standardized utility instrument such as the EuroQol 5-Dimension questionnaire (EQ-5D). However, prospective patient-reported outcome data using other validated instruments are also vital in considering more qualitative value assessments; in the updated American Society of Clinical Oncology Value Framework, a “toxicity score” is calculated based on the relative difference in toxicities between two competing therapies.[2] The more granular the known data on toxicities, the more useful this value assessment becomes.
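To make the QALY’s role as a benchmark concrete, the standard summary statistic of a cost-effectiveness analysis is the incremental cost-effectiveness ratio, shown here in generic notation (not tied to any specific comparison discussed in this commentary):

```latex
\mathrm{ICER} \;=\; \frac{\Delta C}{\Delta E} \;=\; \frac{C_{\text{new}} - C_{\text{standard}}}{E_{\text{new}} - E_{\text{standard}}}
```

where C denotes the expected cost and E the expected QALYs of each strategy; the resulting dollars-per-QALY figure is then judged against a willingness-to-pay threshold.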

Ultimately, meaningful value assessment is going to require unbiased comparisons between competing strategies. Decision models are extremely useful in this regard and can provide important insight into cost-effectiveness evaluations. In many situations, models can be definitive when they are based on prospective trial data; in other scenarios, models are the only feasible analytic approach to compare treatments that will never be evaluated in a prospective trial. Yet there is still no replacement for the randomized trial, especially since single-arm and retrospective studies are prone to bias; this selection bias is particularly evident in comparisons of different local therapies, such as surgery vs radiotherapy.
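The decision models referred to above are often Markov cohort models. As a minimal sketch of that technique, the toy example below compares a hypothetical “standard” and “new” therapy across three health states; every transition probability, cost, and utility is illustrative and does not come from any actual trial or the article under discussion.

```python
import numpy as np

def run_model(P, costs, utilities, cycles=40, discount=0.03):
    """Accumulate discounted cost and QALYs for a cohort starting progression-free."""
    state = np.array([1.0, 0.0, 0.0])  # entire cohort begins progression-free
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        df = 1.0 / (1.0 + discount) ** t      # per-cycle discount factor
        total_cost += df * (state @ costs)
        total_qaly += df * (state @ utilities)
        state = state @ P                     # advance the cohort one cycle
    return total_cost, total_qaly

# Illustrative transition matrices: rows = from-state, columns = to-state
# (states: progression-free, progressed, dead).
P_standard = np.array([[0.85, 0.10, 0.05],
                       [0.00, 0.80, 0.20],
                       [0.00, 0.00, 1.00]])
P_new = np.array([[0.90, 0.07, 0.03],         # hypothetical: delays progression
                  [0.00, 0.80, 0.20],
                  [0.00, 0.00, 1.00]])

costs_standard = np.array([12_000.0, 20_000.0, 0.0])  # $ per cycle in each state
costs_new = np.array([30_000.0, 20_000.0, 0.0])       # new therapy is costlier
utilities = np.array([0.85, 0.60, 0.0])               # QALYs accrued per cycle

c0, q0 = run_model(P_standard, costs_standard, utilities)
c1, q1 = run_model(P_new, costs_new, utilities)
icer = (c1 - c0) / (q1 - q0)  # incremental cost-effectiveness ratio ($/QALY)
print(f"ICER: ${icer:,.0f} per QALY gained")
```

Real models add many more states, probabilistic sensitivity analysis, and trial- or registry-derived inputs, but the core accounting (discounted costs and QALYs accrued as a cohort moves through health states) is exactly this loop.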

Therefore, Dr. Konski’s call to incorporate value considerations into trials, by recording costs (both to the healthcare system and to patients) and eliciting patient-reported outcomes and utilities, is crucially important. Yes, obtaining this information requires additional expenditures of time and money, and perhaps even the creation of patient incentives. However, when one considers the amount of time and the level of effort required to design clinical trials, and how expensive they are to conduct, it is vital for each study to provide as much information as possible. Incorporating these endpoints into trial design provides another layer of analysis and may powerfully inform clinical decision making, even if the hazard ratios (HRs) hover around 1.

I would further argue that not only should value assessments be included in prospective trials, but some prospective trials should also be designed that are based explicitly on value considerations. Even the “sexiest” clinical study may end up minimally affecting decision making if the statistically “winning” arm is cost-ineffective. Is it worth testing a novel but expensive targeted therapy if, even under the best of circumstances, its cost-to-outcome ratio is still prohibitive? Instead, decision analysts have a methodology, termed value of information (VOI) analysis, that can quantify how much is gained by resolving the uncertainty in a given comparison, such as the HR for recurrence or death. This “information” can incorporate both the cost and the effectiveness of the comparator therapies, such that the yield of (ie, information gained from) different studies can be compared. If a cooperative group can fund only one of two competing trials, and defining the HR in trial A would increase the known health benefits (incorporating cost and effectiveness) much more than defining the HR in trial B, shouldn’t the investment be made in trial A? Because this value-conscious approach to clinical trial design would maximize the long-term cost-effectiveness of the studies themselves, I hope that VOI becomes a recognized (and utilized) tool in the genesis of future multi-institutional and cooperative group trials.
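One common VOI quantity is the expected value of perfect information (EVPI): the gap between deciding under current uncertainty and deciding with the uncertainty resolved. The sketch below estimates EVPI by Monte Carlo for a hypothetical comparison in which the incremental QALY gain of a new therapy (eg, driven by an uncertain HR) is the only uncertain parameter; all numbers are illustrative assumptions, not data from any trial.

```python
import numpy as np

rng = np.random.default_rng(0)

wtp = 100_000.0                   # willingness to pay per QALY (assumed)
n = 100_000                       # Monte Carlo samples of the uncertain parameter
dq = rng.normal(0.20, 0.15, n)    # uncertain incremental QALYs of the new therapy
dc = 15_000.0                     # assumed (known) incremental cost

# Net monetary benefit (NMB) = WTP * QALYs - cost, relative to standard care.
nmb_standard = np.zeros(n)        # standard therapy is the zero reference
nmb_new = wtp * dq - dc

# Decide now, with current information: pick the strategy with the best mean NMB.
ev_current = max(nmb_standard.mean(), nmb_new.mean())
# With perfect information, we could pick the winner in every sampled scenario.
ev_perfect = np.maximum(nmb_standard, nmb_new).mean()

evpi = ev_perfect - ev_current    # upper bound on the value of further research
print(f"EVPI: ${evpi:,.0f} per affected patient")
```

Multiplied by the size of the affected population, a per-patient EVPI like this bounds what a trial resolving the HR could be worth; comparing that bound across candidate trials is the logic behind funding trial A over trial B.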

In conclusion, Dr. Konski concisely and clearly summarizes the state of value assessment in radiation oncology. Fortunately, the importance of value considerations in oncology, as well as the parallel mission of developing a methodology or framework to assess value in straightforward terms, has been elevated into the community consciousness. Yet just talking about cost-effectiveness is not enough. We must formally and aggressively tackle these questions head-on, defining real costs and assessing long-term patient-reported outcomes, while ensuring that comparative trials incorporate value parameters as outcomes, and even (dare I say it?) as primary endpoints.

Financial Disclosure: The author has no significant interest in or other relationship with the manufacturer of any product or provider of any service mentioned in this article.


1. Konski AA. Defining value in radiation oncology: approaches to weighing benefits vs costs. Oncology (Williston Park). 2017;31:248-54.

2. Schnipper LE, Davidson NE, Wollins DS, et al. Updating the American Society of Clinical Oncology Value Framework: revisions and reflections in response to comments received. J Clin Oncol. 2016;34:2925-34.