Clinical trials produce data. Lots and lots, oodles and oodles. More than we can ‘shake a stick at’! It is analysed, reduced and formed into presentations, press releases, charts and peer-reviewed papers.
In the 21st century, to add to our challenge, data is then often transmuted into short sound bites which can themselves become new ‘truths’, often far removed from their original context. These are then commonly shared on the powerful yet brutal opinion powerhouse of our time, social media.
Simultaneously, a trial’s detailed data (known as the point data), which is not really newsworthy or sound-bite attractive, typically goes unseen, except by regulators and possibly peer reviewers. This divergence of data visibility should not be overlooked. Regulators most certainly have more sight of the detail than we, the public, ever will. We should always take a cautionary step back before reacting to a regulator’s comments and consider just why they might have been made.
But what can we do as patients to give us a wider perspective?
How can we both protect ourselves from the sound bites and, more importantly, get behind the data façade to prepare ourselves and others before commenting?
I have written on several aspects of MND/ALS clinical trials in my blog, and in this article (and the following post next week) I go further, with thoughts, or perhaps a survivor’s guide, on asking questions of trial data. As ever, I hope it gives you that desire to dig deep, and also helps those who are newly diagnosed to filter the wheat from the, sadly, ever-present chaff. I will leave a lot open for you to investigate yourself, so please see these two posts more as ‘thoughts’ for chewing on, rather than a full discussion or opinion piece. These questions will most certainly feature on regulators’ agendas, I am sure, and much, much more besides.
To start with, and to get us all on the same page, we should all endeavour to obtain the original and official presentations, reports and statements to review.
To set the scene, and create a bit of intrigue, I have a question for you all.
Do you know which MND/ALS clinical trial these results are from? I have removed any clues. The chart shows the average ALSFRS-R ratings of placebo and treatment patients, along with confidence interval ranges, over a period of 18 months. Looks exciting, doesn’t it?

I will come back to this trial in the second post in a week’s time, so on with those questions.
Did the trial meet its objectives or goals?

This is a very simple question, but still perhaps the most important. It has a simple yes or no answer.
If no, then further investigation will almost certainly be required, or the treatment might be abandoned as a failed trial. By ‘further investigation’ I mean that additional controlled trial(s) will be required, unless significant, compelling and, above all, widespread consensus-gaining conclusions can be drawn from the available data.
So, what if a trial was declared as a failure?
Were any subgroups of patients identified that ‘might’ have benefited despite the trial failing overall?

What we need to look for here is how such a group (or groups) was identified. Was it a pre-specified subgroup (within the published trial protocol), with enough patients to show significance? Or was it post analysis? Post (hoc) analysis is a thorny and complex subject fraught with errors. You can ‘prove’ almost anything with the technique, so much so that it is often known as torturing data!
Take a read of the astrological star sign example in this paper on post analysis. It should be a salutary lesson to us all!
If post analysis is used, the default way forward is always ‘further investigation’, as post analysis can only ever be hypothesis-generating at best.
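For those who like to tinker, here is a minimal simulation sketch in Python. The numbers are entirely made up and have nothing to do with any real trial; it simply generates placebo and treatment data with no true effect at all, splits patients into twelve ‘star sign’ subgroups, and counts how often at least one subgroup looks ‘significant’ purely by chance.

```python
# Illustrative sketch only: simulated data with NO real treatment effect,
# showing how unplanned subgroup hunting can 'find' significance by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_arm, n_subgroups, n_trials = 100, 12, 2000  # assumed, illustrative values

false_hits = 0
for _ in range(n_trials):
    # ALSFRS-R style change scores; both arms drawn from the same distribution
    placebo = rng.normal(-10, 6, n_per_arm)
    treated = rng.normal(-10, 6, n_per_arm)
    # Randomly assign each patient a 'star sign' subgroup
    signs_p = rng.integers(0, n_subgroups, n_per_arm)
    signs_t = rng.integers(0, n_subgroups, n_per_arm)

    pvals = []
    for s in range(n_subgroups):
        a, b = placebo[signs_p == s], treated[signs_t == s]
        if len(a) >= 3 and len(b) >= 3:          # skip tiny subgroups
            pvals.append(stats.ttest_ind(a, b).pvalue)
    if pvals and min(pvals) < 0.05:
        false_hits += 1

print(f"Simulated no-effect trials with a 'significant' subgroup: "
      f"{100 * false_hits / n_trials:.0f}%")
```

With a conventional 5% significance threshold and a dozen unplanned looks at the data, a ‘positive’ subgroup turns up in roughly 40 to 50% of these no-effect trials. That is the torture in action.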
What was the size of the trial, ie how many people?

This is a hugely important factor, but it must be considered alongside other attributes. Typically, but not always, a good-sized MND/ALS trial at phase 3, with current methodologies, capabilities and the state of evolution of biomarker technology, will need about 200 or more patients across the treatment and placebo arms.
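As a rough illustration of where a figure like 200 comes from, here is a standard power calculation sketch. The effect size and power target are my own illustrative assumptions, not figures from any particular trial.

```python
# A rough, illustrative power calculation (assumed effect size and power,
# not taken from any specific MND/ALS trial).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Assume we hope to detect a modest between-arm difference equivalent to a
# standardised effect size (Cohen's d) of about 0.4, at 80% power.
n_per_arm = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.8,
                                 alternative='two-sided')
print(f"Patients needed per arm: ~{n_per_arm:.0f}, total: ~{2 * n_per_arm:.0f}")
# Roughly 100 per arm, so about 200 in total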
How long was the trial?

As with patient numbers, the length of a trial is very important but must be taken into account alongside other factors. A ‘good’ phase 3 (with or without an OLE) will need to run for about 18 months, fully time-contiguous, with an appropriately sized patient cohort.
A 6 month trial with only 50 patients in total will just not cut the mustard, despite what you might hear otherwise. It is simply not sufficient to establish whether a hint is worth investigating or, quite possibly worse, it risks the discarding of a valid prospect. There is a fine line between false positives and false negatives. We all hope that, as time moves forward, biomarker solidification will help to ease these limitations.
How contiguous is the data?
Contiguous data, ie data collected over a continuous timeframe, is critical. What is the longest period of uninterrupted data for the majority of patients? Has there been a gap of months or even years between treatments for patients in a trial process? Always look for the longest period of reporting, especially if patient numbers drop.
Did the trial have an OLE or EAP?
The more data a trial can produce, over a greater period of time, the better. You may stumble across a mention of an OLE and/or an EAP in your readings of trial reports.
An OLE (open label extension) is an addition to a trial that is operated under full trial conditions, safety controls and protocol, and is registered/defined before trial initiation. At the end of the placebo-controlled section of a trial, the extension is offered to all participants. All subjects then receive the treatment, with no one being administered a placebo. A suitable phase 3 length might be 12 months with an additional 12 or 24 months of OLE. A potentially acceptable length would be 6 months plus 12 or more months of OLE with an appropriately sized patient cohort. An OLE is a powerful, ethical and effective data-enhancing add-on to a core trial.
An OLE is a fundamentally different concept from an EAP, which you may hear of in different countries. An EAP (expanded, or compassionate, access program) sits outside the trial protocol and is not registered at the start of a trial; in its simplest form it is a method for a patient (or patients) to gain access to a drug outside of a trial. But the simplicity ends there.
Globally, EAPs are defined quite differently and have typically been focused on compassionate, generally single-person use. There are, however, significant challenges with EAPs in interpreting any data that might be gathered, all of which stem from being outside ‘strict’ trial control. A good summary of the subject can be read in this excellent paper. What can at first appear powerful throws up many tough challenges.
An EAP might provide useful data, but it is not considered trial quality, especially if it covers only a very limited number of patients. If limited, there could be questions, for example, of how patients were chosen for the program, especially if there are any results to be interpreted alongside trial data. An EAP could also produce different or conflicting results versus a parallel trial run under stricter conditions, which could in turn have compromising implications for regulation/assessment/approval.
There is currently a lot of discussion on the principles of EAPs, both within and outside of the MND community, and I suspect there will be moves towards refining what ‘they are’, what ‘they are not’ and how they might fit within/alongside trial protocols.
In part 2 next week, I will discuss disease measures, patient uniformity, side effects and burden. I will also reveal what that trial was in the chart earlier. Have you any idea yet?
Back in a week, readers, with more questions in part 2.
