You are totally free to choose your own favourite food commodity for what appears, at first, an odd question from yours truly. I have selected dairy goods, and in particular cheese.

Whatever you decide, this phrase, which some might consider slightly sarcastic, is used the world over as a riposte, often in a ‘robust’ discussion. I use it to sum up the biggest challenge that all scientists face, i.e. is an observation real, or is it just perhaps what we or others ‘want to see’?

Specifically, I attempt to apply this rather blunt analogy to summarise the current status of the search for MND/ALS biomarkers, which is an exciting, vital and vibrant area of research.

Biomarkers are going to be big news for MND/ALS. For a disease with no real diagnostic or disease-measuring tools other than the incredibly experienced eye of the time-weary and battered neurologist, solid objective measures of disease progression, e.g. similar to cancer tumour size or HIV viral load, are needed to accelerate truly effective treatments.

As my readers will be aware, I have been heavily influenced in my articles by the misinformation that swirls about on the internet, and beyond, concerning MND/ALS research and trials. However, such misinformation has given me the opportunity to explore, and perhaps help explain to our vulnerable community, the scientific challenges of our disease.

Yes, this is another ‘Devil is in the detail’ post.

Before starting, you may be interested in the history and a comprehensive list of the current investigative biomarkers for our disease. You could do no better than have a thorough read of the paper written by Nick Verber et al in 2019, which represents an excellent summary of the current landscape.

I am going to approach this post in my typical manner, and I will start by asking you not to envisage the term biomarker initially, but rather ‘biomeasure’. Why?

One source of ‘light’ misinformation is the use of language, or wordage. When the very word ‘biomarker’ appears in press releases, or even some scientific papers, there is perhaps an implication that these ‘biomeasures’ are indeed actual validated and verified disease ‘markers’ ready for full clinical use.

This is far from the scientific reality, and decades of research across all diseases bear witness to the fact.

Let’s talk about biomeasures

Blood cholesterol and blood sugar levels are just two measures we are all too familiar with. Both have been deeply researched over many decades, both can point to serious health issues, and both are well understood, with ‘normal’ and ‘concerning’ ranges documented.

Our genetic code, the wonderful human genome, whose DNA structure was discovered in 1953 but which only started to reveal its secrets in the 1990s, is a rich treasure trove of biomeasures. For example, a person who carries one of several recognised SOD1 gene variants has a much higher risk (if not an almost guaranteed one) of developing MND. Such ‘causative’ gene variants are indeed validated biomarkers for identifying persons at risk who might possibly benefit from future targeted treatments. So that’s a tick for the good! We have some real, genuine biomarkers for identifying some of those at risk.

However, significant challenges face researchers in the search for biomarkers for objectively measuring disease progression.

These challenges can, perhaps, be more readily understood if we ask ourselves two simple questions:

  1. What is the purpose or use that needs a marker, e.g. diagnosis, measuring disease progression, etc.?
  2. Can tests be developed that are fit for that purpose?

Vital answers to many, many more questions must then be fully distilled before ‘fit for purpose’ can be hoped for, including those below (a small sketch illustrating the first couple follows the list)…

  1. Is a measure correlated with the proposed purpose, and how?
  2. Are the tests for a measure, whether available now or still to be developed, precise, accurate, repeatable, reliable and, above all, interpretable for that purpose?
  3. What constitutes a clinically meaningful difference in readings for that purpose?
  4. What readings are normal, what is the range of normal, do readings fluctuate (for an individual and/or population)?
  5. Is a measure a direct indication of the disease or possibly just a consequence?
  6. Does altering a level through a treatment change the disease?
    e.g. Treatment X might lower a measure simply by breaking it down, but that doesn’t mean the measure has anything to do with the disease.
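To make a couple of these questions more concrete, here is a minimal, purely illustrative sketch in Python. The numbers are made up, not real patient data or a validated method; it simply shows the sort of checks behind questions 1, 2 and 4: does a hypothetical biomeasure track a clinical progression rate, and do repeat tests on the same samples agree well enough to be interpretable?

```python
# Illustrative only: made-up numbers, not real patient data or a validated method.
import numpy as np

# Hypothetical biomeasure readings (arbitrary units) and clinical progression
# rates (e.g. points lost per month on a functional scale) for ten imaginary patients.
biomeasure = np.array([12.0, 18.5, 25.0, 31.2, 40.1, 44.8, 52.3, 60.0, 71.5, 80.2])
progression = np.array([0.3, 0.5, 0.6, 0.9, 1.1, 1.0, 1.4, 1.6, 1.9, 2.1])

# Question 1: does the measure correlate with the proposed purpose (here, progression)?
r = np.corrcoef(biomeasure, progression)[0, 1]
print(f"Correlation with progression rate: r = {r:.2f}")

# Questions 2 and 4: are repeat tests on the same samples consistent enough?
# Simulate a second run of the same assay with some measurement noise.
rng = np.random.default_rng(seed=1)
repeat_run = biomeasure + rng.normal(0.0, 3.0, size=biomeasure.size)
mean_retest_diff = np.mean(np.abs(repeat_run - biomeasure) / biomeasure) * 100
print(f"Mean test-retest difference: {mean_retest_diff:.1f}% of the reading")
```

Even a convincing correlation in a sketch like this would say nothing about questions 5 and 6, i.e. whether the measure drives the disease or merely tracks it.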

Yes, this is one huge challenge!

So what is the story now in late 2024 with Motor Neurone Disease (ALS)?
What does the cheese smell like?!

Currently, there is only one biomeasure that can be considered as approaching being a bona fide biomarker, and that is neurofilament light chain (NfL). NfL levels are a measure of axonal breakdown, i.e. degeneration. It can now be measured via a blood test (as well as in cerebrospinal fluid (CSF)), although the tests still need some improvement and there can be a lot of “noise” in many readings.

There are also still major caveats in a reading’s interpretation, as Michael Benatar, Joanne Wuu and Martin Turner wrote in their excellent and very readable 2023 paper. The trio comprehensively summarised the state of the art on the subject, along with some ‘insightful’ observations of the global research environment.

Neurofilament light chain (NfL) has emerged as a leading candidate with enormous potential to aid ALS therapy development; it is, however, also profoundly misunderstood.

Michael Benatar, Joanne Wuu and Martin Turner – 2023

Here’s my quick, visual précis of the ‘state of the art’ with NfL. I will shamelessly plagiarise some of the excellent content from Michael, Joanne and Martin’s paper.

NfL levels appear to be fit for purpose for prognostic outlook.

There is ‘good’ evidence and compelling global scientific consensus that measured NfL levels are a strong indicator of the ‘speed’ of disease progression (higher levels typically meaning faster progression), but with some reservations, as detailed in a couple of excellent papers, Thompson et al and Benatar et al.

Given that one major problem ALS/MND trials suffer from is assessing patients’ prognostic outlook in a trial setting, NfL levels feel like an instinctive screening/analysis value.

I personally would go as far as saying that trial inclusion should perhaps include a prescribed ‘range’ of readings that allows a sufficient sample size to increase the chance of a conclusion. Put simply, NfL levels should be used to ensure that as appropriate a group of patients as possible is enrolled in a trial.
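As a purely hypothetical illustration of that idea (the NfL values and enrolment window below are invented for the sketch, not clinical guidance), an enrolment screen might simply keep candidates whose baseline reading falls inside a prescribed range:

```python
# Hypothetical illustration only: values and thresholds are invented, not clinical guidance.
candidates = {
    "patient_A": 35.0,   # baseline serum NfL in pg/ml (made-up values)
    "patient_B": 95.0,
    "patient_C": 160.0,
    "patient_D": 280.0,
}

# A prescribed enrolment window: exclude extreme readings at both ends so the
# trial cohort is more homogeneous and the sample size works harder.
NFL_MIN, NFL_MAX = 50.0, 200.0  # purely illustrative bounds

enrolled = {pid: nfl for pid, nfl in candidates.items() if NFL_MIN <= nfl <= NFL_MAX}
print("Enrolled:", enrolled)  # patient_B and patient_C in this toy example
```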

Should this detail be put in the hands of, or in front of, patients? Such a use of the information, in my opinion, just feels inappropriate at best.

NfL levels might be fit for purpose as a drug/treatment efficacy indicator

There is evidence, although only one trial has produced solid supportive results to date (tofersen for SOD1-MND), that lowering NfL might reasonably predict a change of disease course for the better. However, this could be very dependent on the stage of disease. In addition to this emerging evidence for our disease, there are well-accepted ‘effective’ treatments for both HIV and MS that have shown significant NfL reductions. So the cheese smells quite good!

We are certainly at a stage of research, in my opinion, where any trial would be almost negligent if NfL levels were not used either in selecting patients or in analysis and ongoing trial measures.

NOTE OF CAUTION

One thing to be very cautious with is any report that uses percentages to describe a change in NfL readings but doesn’t go into detail and has perhaps not been peer reviewed. NfL concentrations are measured in pg/ml (picograms per millilitre). A picogram is one-trillionth of a gram! These are tiny amounts, and a small absolute change could still be represented as a large, perhaps very misleading, percentage.
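A quick worked example, with invented numbers, shows how easily this can mislead: a drop of just 2 pg/ml, which may well sit within assay noise, still reads as a headline-friendly 20% reduction.

```python
# Invented numbers, for illustration only.
baseline_pg_ml = 10.0   # baseline NfL reading, pg/ml
follow_up_pg_ml = 8.0   # follow-up reading, pg/ml

absolute_change = baseline_pg_ml - follow_up_pg_ml        # 2.0 pg/ml
percent_change = 100 * absolute_change / baseline_pg_ml   # 20.0%

print(f"Absolute change: {absolute_change:.1f} pg/ml")
print(f"Percentage change: {percent_change:.0f}%")
# A 20% headline figure, yet the absolute difference is tiny and, for a small
# group of patients, could sit comfortably within test-to-test 'noise'.
```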

NfL levels do NOT appear to be fit for purpose for diagnosis (certainly a grey light area!)

Perhaps counterintuitively, measured NfL levels do not appear to add much to a neurologist’s kit bag during diagnosis. Levels can tell us that something is going wrong with our neurons, which could someday expedite referral to a neurologist or accompany specific biomarkers to aid in diagnosis. However, its non-specificity for MND means it isn’t very helpful on its own. Diagnosis is still largely clinical (and may always be), and the general current consensus is that testing NfL will not speed up actual diagnosis. Davies et al carried out a real-world prospective study and documented the results in their paper Limited value of serum neurofilament light chain in diagnosing amyotrophic lateral sclerosis. It’s well worth a read.

NfL is not new to neuroscience; it was discovered over 30 years ago, but until the last few years the tests were far from perfect, and this has been a major inhibitor of its experimental use. It’s only now, with the latest test technology, that results can be regarded as moving towards fit for purpose. However, it is still complex enough that it can only realistically be considered a research test. I hear that in the USA many patients are currently offered NfL testing. This is very troublesome given our knowledge of the measure.

Could NfL levels help not only in measuring effectiveness but also in selecting drugs for those expensive and long human trials?

A potentially very exciting use of measuring NfL levels is the soon-to-start drug screening platform EXPERTS-ALS, which aims to litmus test whether drugs are effectively ‘worthwhile’ taking forward to those hugely expensive pivotal efficacy trials. We could regard it as almost pre-clinical validation in humans. Currently, the go/no-go decision for a pivotal human trial (a P3) following a P1/P2 is often made on little more than opinion based on a series of unreliable and usually underpowered clinical measures like ALSFRS-R. Note: it is also possible to measure NfL changes in mice, for example, yet another tool in researchers’ kit bags?

But isn’t there controversy surrounding NfL? 

Yes, there does appear to be some, but, I would contend, is it really that controversial?

The recent history of prominent trials is, indeed, ‘chequered’. As mentioned earlier in this post, the only trial that has actually shown a significant NfL reduction along with a corresponding and correlating clinical disease change is tofersen for SOD1-MND. 

And when the earlier Amylyx phase 2 led to the ‘rather premature’ approval of AMX0035 (eventually disproven by the phase 3 in early 2024), two eminent neurologists backing the approval at the time were, in their supporting editorial, perhaps a bit dismissive of the idea that changing NfL levels could affect or indicate a change of disease course. Given that, at the time this was written, there was already emerging solid evidence from the tofersen trial and its long open-label extension (OLE), I found the editorial a bit odd, to be honest. It should also be noted that no evidence of NfL reduction was detected in the Amylyx trials, and the Amylyx drug was eventually shown to be ineffective for ALS/MND.

There have been observations that some fast-progressing and slow-progressing patients have contradictory NfL readings, e.g. low when high is expected, and vice versa.

Our community has to brush such apparent divisions in the scientific space aside, given our predicament, and encourage our researchers to investigate and hope scientific consensus will be reached.

And could some of the discordant cases just be a case of ‘exceptions prove the rule’?

Part of the uncertainty that appears to exist within some scientific circles regarding NfL revolves around patients whose NfL levels appear low but who die rapidly or prematurely, and the opposite: patients with high levels who live beyond expectations.

Isn’t it easy to postulate, for example, that some of the higher-reading patients who survived longer were limb onset and the disease took longer to reach their lungs? Maybe some patients had environmental reasons for anomalous readings, or something else that keeps the science intact, i.e. could research into the exceptions help prove the general rule? In their 2024 paper, Meyer T, Dreger M, Grehl T et al examined nearly 2,000 patients’ NfL levels, disease onset, presentation and survival. Their work hints strongly at such a hypothesis and at how ‘variable’ NfL differences might arise. This paper is worth even a cursory read! Just look at the percentage differences across the board!

This one paper alone shows just how large a pinch of salt you might need when reading some drug treatment candidate ‘claims’ of a percentage reduction in the new NfL biomarker based on only a small number of patients and with no consideration of such factors.

Concluding…..

NfL levels are now widely viewed as potentially the first truly useful biomarker for measuring the disease (subject to the fit-for-purpose conditions described earlier and to a corresponding positive clinical change being observed). Does this mean that if NfL is not lowered, a treatment is ultimately ineffective? No, not necessarily, but at the same time it is perhaps looking increasingly untenable that a treatment candidate that doesn’t lower NfL in the long term is going to be truly effective.

That’s all for now folks, and I leave you with this message.

I am confident that the price of cheese will be much better understood in the future as we start to smell the aromas more and more!