ARLINGTON, Virginia--Research in complementary and alternative
medicine (CAM) is possible and feasible, said Andrew Vickers, DPhil,
but the key issues are practical and not conceptual. Those who work
in CAM rarely have the research skills, the technicians, or the
access to suitable patients that mainstream institutions or
researchers have, he pointed out.
Dr. Vickers, assistant attending research methodologist, Memorial
Sloan-Kettering Cancer Center, discussed methodologic issues in CAM
research with others at a session of the Comprehensive Cancer Care
2000 conference, jointly sponsored by the National Cancer Institute
and the National Center for Complementary and Alternative Medicine.
Dr. Vickers pointed out that survival is a statistically
complex outcome, and that anecdotal data are difficult to judge without high-quality documentation.
He noted the differences between phase I and II trials of CAM and
conventional therapies. A phase I CAM trial would not likely use a
substance whose toxicity was very high, nor would toxicity be viewed
as a surrogate for efficacy, as it might be for conventional therapies.
In phase II CAM trials, Dr. Vickers said, a tumor response,
especially a rapid one, is unlikely, so alternative endpoints, such as
progression or survival, are needed.
Ultimately, however, some form of randomized (or at least controlled)
experiment is required to establish the validity of the intervention.
If there are criticisms of CAM research, said Carmen Tamayo, MD,
director, Complementary and Alternative Medicine Division, Foresight
Links Corporation, they concern the placebo effect, short-term
trials, and reporting bias. "Doctors and patients have to ask:
Is the outcome applicable elsewhere? Is it reproducible? Have
cultural differences been accounted for?" she said.
Dr. Tamayo believes that objective information can still be
gained from trials that are not randomized and double-blinded.
At the very least, she said, observational studies can influence the
design of clinical trials.
Alejandro Jadad, MD, DPhil, who has written a book on randomized
controlled trials, said that while such trials remain the gold
standard by which to judge the quality of medical research, even they
must be used cautiously.
Dr. Jadad is chief of the Health Information Research Unit, and
director of the McMaster Evidence-Based Practice Center at McMaster University.
He argued that very few trials address issues important to
stakeholders, ie, patients. Too often they are designed to meet
only the needs of researchers or government regulators.
"There is an instinctive selection of the most promising
therapies, which results in excellent phase I/II trials, and phase
III trials that look good vs placebo," he said. Once approval is
granted, marketing begins (more often, directly to consumers), and
then defensive postmarketing research gets underway.
"We should concentrate instead on pragmatic trials to help
patients and purchasers," he said. This would include not only
excellent phase I and II trials but also phase III trials that test
the therapy against both placebo and the best available alternatives.
The "So What?" Test
In any case, he said, every test should have to pass the "So
what?" test: Does it work? Does it work better than other
options? Is it equally effective, but safer? Or as safe but more
effective? Or the same but cheaper? Is the effect worthwhile?
Finally there is the question of bias, in its many forms. Randomized
controlled trials tend to favor interventions, he said. Studies
funded by industry have been shown to be more poorly designed than
federally sponsored trials and to be more likely to favor the sponsor's product.
Negative studies are less likely to be published, and studies
with positive results are more likely to be published in English. One way
to overcome this particular bias, he said, would be to have
compulsory registration of clinical trials, now the case only with
federally sponsored research.
Finally, Dr. Jadad said, the interpretation of trials is vulnerable
to manipulation by readers. Journalists tend to favor positive
results, as do desperate patients. Even scientific readers can
introduce a biased view of trial reports when professional rivalry,
territoriality, the influence of prestigious journals, and the desire
to "just do something" are strong.