Q&A: clinical trials
Bias remorse: are cancer drug trials clear enough on outcomes?
A new analysis published in the British Medical Journal has found that almost half of the pivotal trials forming the basis of EMA approvals of new cancer drugs were at high risk of bias based on their design, conduct or analysis. What exactly are the problems with cancer trial design and administration, and what needs to change? Chris Lo speaks to study co-author Dr Huseyin Naci to find out more.
With cancer cases on the rise globally, partly driven by the world’s growing and ageing population, it’s understandable that drug regulators are keen to speed up approvals of novel cancer therapies. Mechanisms such as accelerated approval pathways are being granted to drug development programmes with increasing frequency.
But does the need for speed come at a cost? A cross-sectional analysis recently published in the British Medical Journal by academics in the UK, US and Canada has reiterated concerns that it does. The study analysed 39 randomised controlled trials that supported new cancer drug approvals by the European Medicines Agency (EMA) between 2014 and 2016.
Of the trials studied, nearly half were found to have a high risk of bias for their primary outcome, with the measurement of outcomes and missing outcome data being the main concerns. Over the last two decades, surrogate measures such as progression-free survival and treatment response rates have become increasingly prominent at the expense of the gold-standard endpoint of overall survival (OS), and this trend appears to be driving a lack of clarity around the actual clinical benefits of new cancer drugs coming on to the market.
Study co-author Dr Huseyin Naci, assistant professor of health policy at the London School of Economics, discusses the study’s findings and the steps that need to be taken to create a clearer picture for patients and their physicians.
Chris Lo: Was your recent cross-sectional analysis of cancer drug trials inspired by existing suspicions about the credibility of evidence around efficacy?
Dr Huseyin Naci: Only partly. We were familiar with the previous body of literature on cancer drug approvals, as we’ve done some research on this ourselves. In our previous work, we showed that the majority of cancer drug approvals were on the basis of randomised controlled trials, which we consider to be the gold standard for evaluating if a treatment works.
In this paper, we were interested in using an established, robust tool to look into the validity of these trials. While we know randomised controlled trials are the gold standard, they can still have problems. We wanted to evaluate those issues.
In your analysis, you found that 19 of the 39 trials you studied had a high risk of bias. Could you elaborate on your concerns around the measurement of outcomes?
In terms of trial outcomes, the best way of measuring the efficacy of a cancer drug would be to look at its effect on overall survival – how long people receiving that drug live, as compared to people not receiving that drug. The majority of drugs in our sample were not evaluated on the basis of overall survival. Instead, they were evaluated on the basis of so-called surrogate measures, such as progression-free survival or response rates, which are less objective.
This means that if the investigators who are measuring those outcomes are aware of which patients are receiving which treatments, this knowledge could influence their assessment of that outcome – knowingly or unknowingly. This has been demonstrated in the empirical literature for decades now.
What were some of the other elements of trial design that you found to be driving risk of bias?
The Cochrane risk-of-bias tool that we used has five domains – the first one is randomisation. If there are any issues with the way that the trial implemented randomisation, or if the randomisation did not seem to work well, then this would put the trial at high risk of bias. There were instances where a trial was labelled as a randomised controlled trial, but it became apparent that it was actually comparing one dose of the treatment to another, so there was no control treatment and therefore no randomised comparison when it came to understanding the efficacy of the treatment.
There were also instances where there were major protocol deviations that could potentially influence the outcomes and the magnitude of the benefit that was observed. Primarily we learned about this from the EMA reports, because the trial publications rarely reported protocol deviations. But the EMA reports usually referred to major protocol deviations and whether these were something to be concerned about.
There was another issue around missing outcome data. In cancer trials, patients could withdraw their consent to continue taking part in a trial. When this happens, it’s unclear if they continue to contribute outcome data to the trial, or if their outcome data is missing. In some cases, the proportion of patients who withdrew their consent differed between the two arms of the trial, which could potentially influence the availability of their outcome data. This, we concluded, could result in a high risk of bias.
Do you think the lack of clarity around outcome data has created a confusing landscape for patients and clinicians who are considering treatment strategies?
Yes, I agree. I think there is huge misunderstanding and confusion about what different outcomes mean, even among experts. This was the focus of a recent paper published in JAMA Oncology by a Canadian team led by Christopher Booth. The team reviewed the literature on studies that asked patients whether they understood the term ‘progression-free survival’ and whether they actually valued this endpoint.
What this paper showed was remarkable: the literature on this topic seems very sparse and heterogeneous. It’s unclear whether patients understand this endpoint, let alone make judgements about how they would benefit from it, or whether a drug that does well on this endpoint would actually benefit them in the long run. I think there is a very urgent need to rethink the endpoints that are being used when measuring the effects of cancer drugs.
Is there a consensus that overall survival is the best way to measure cancer drug outcomes, and if so, why hasn’t there been more movement to make OS a requirement?
Certainly, not using overall survival has benefits. By not measuring overall survival, trials can be shorter and include fewer patients, so there are clear feasibility advantages to using alternative endpoints. Having said that, I think the risks of using other endpoints in cancer drug trials are important to consider. For example, we have no way of knowing whether a treatment that improves progression-free survival will ultimately improve overall survival. We cite several examples in our paper of drugs that appeared effective on progression-free survival but turned out to be ineffective on overall survival.
In one recent case, the BELLINI trial, a drug that performed really well on progression-free survival turned out to be harmful in terms of overall survival. So I think the feasibility advantages of these surrogate measures have to be balanced against the risk of ultimately not knowing whether these treatments will actually benefit patients.
Novel cancer therapies are an extremely profitable asset for pharma companies, while regulators are keen to accelerate the introduction of new treatments as well. To what extent do you think these two things are driving the problem of bias in this field?
It’s really difficult to tease out the primary drivers of these recent trends. In the 1970s and 80s, the majority of cancer drug trials measured overall survival as their primary endpoint. Since the 1990s and 2000s, progression-free survival and response rate have become more and more prevalent, without any real progress in the scientific literature linking progression-free survival or response rate to overall survival.
And when we look at the relationship between surrogate measures and quality of life, again there seems to be no real association. Despite what the empirical literature has shown, there seems to be a very clear preference for using surrogate measures as trial endpoints. I guess this has to be driven by something else, and the issues that you highlight probably play a part.
What are the most important steps that need to be taken to start addressing this issue and create a clearer picture for patients and doctors?
We need to do better at communicating the findings of studies like ours to patients and doctors. After the publication of our paper, we created a publicly available database to help patients understand our findings. On this website, patients can interact with the data themselves, identify drugs they might be familiar with, and examine the risk of bias in the trials that supported the regulatory approval of those drugs. Effective communication of these findings to patients is really key here.
We also wrote a blog post to accompany the publication of our paper. In it, we call for regulators, patients, trialists and academics to sit around the table together and discuss potential solutions. We should distinguish between what is avoidable and what is inevitable in terms of methodological deficiencies in cancer trials.