We drafted the article below in early July, shortly after Robert F Kennedy Jnr’s new vaccine panel (ACIP) voted to recommend a new monoclonal antibody treatment for RSV in infants. We had been given access to mortality data from four randomised controlled trials, performed by the manufacturers, that had been undertaken to evaluate the new RSV treatment. In each of the trials the mortality rate in the treatment arm was higher than in the control arm, and, intuitively, this should therefore have raised serious safety concerns and stopped the approval of the novel treatment. However, because the overall number of fatalities was low, classical statistical tests did not show a statistically significant difference in the mortality rates of treatment versus control.
This appears to have influenced the panel decision.
We performed a Bayesian analysis, which enables us to draw meaningful and useful probabilistic conclusions without using statistical significance tests and which is more appropriate when there is a paucity of data.
We concluded that there was a 75% probability that the treatment would cause increased mortality. However, we decided to postpone publishing the paper until we had more detailed information about the randomised controlled trials, including about non-fatal reactions, the efficacy of the treatment in reducing RSV, and the deadliness of RSV generally in infants.
Last week Maryanne Demasi PhD did an extensive analysis of the trials data, having access to far more detailed information than we had when we drafted our article. She reported that there was a statistically significant signal — nearly a 4-fold higher risk of seizures shortly after the injection.
Dr Demasi’s analysis is excellent and, of course, supports our own conclusion that the treatment should not have been recommended. But, as it did not contain any Bayesian analysis, we now feel that it is worth publishing our article even given the extremely limited (and possibly inaccurate) data we had about the trials.
Here is the article we drafted:
The decision
On 26 June 2025 Robert F Kennedy Jnr’s new vaccine panel (ACIP) voted to recommend a new monoclonal antibody treatment for RSV in infants. The decision was controversial for several reasons, not least because the seven members of ACIP who were tasked with the decision were all newly appointed by RFK Jnr and are all especially concerned about the over-medicalisation of American children. Only two of the seven members objected to the recommendation. One of these was Professor Retsef Levi, who was concerned about the lack of evidence that the new treatment was safe.
In fact the five members of the committee voting in favour were apparently satisfied that there was no persuasive evidence that the treatment was unsafe, since very few all-cause deaths were recorded in the relevant trials data.
It is true that, using classical statistical significance testing, the mortality rate for the treatment groups is not ‘significantly’ higher than for the control groups. However, using a Bayesian approach, we infer from the trials data that, even under conservative assumptions, the probability that the mortality rate is higher in the treatment group is 75%. We can also infer that (although there is a very wide uncertainty range) there is a median of 24 additional all-cause deaths per 10,000 children receiving the treatment.
The trials data
Here are the raw data from the four relevant trials of the new RSV treatment in infants:
Before summarising the Bayesian approach and results, note the following:
- In each of the four trials the ‘raw’ mortality rate is higher in the treatment group than in the control group and, except for Merck 1, the comparative numbers are not even close. So, intuitively, this should have raised obvious safety concerns.
- The very different raw mortality rates across the four trials are indicative of the cohorts in these trials being materially different in one or more undeclared ways. Indeed, the raw Medley rates are higher than the raw Melody rates because the former participants were considered ‘high risk’ while the latter were considered ‘healthy’.
- The numbers of deaths are low, and this is why, using classical statistical significance testing, we do not conclude that the mortality rate in the treatment group is higher than the mortality rate in the control group (e.g. at the conventional 5% significance level); a sketch of such a test is shown after this list. Further, the Melody trial has zero deaths in the control group, which would make classical testing meaningless.
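To illustrate the point, such a classical test takes only a couple of lines. The sketch below uses Fisher’s exact test (an illustrative choice; the specific test relied on is not stated here) applied to the pooled totals quoted later: 25 deaths among 5,647 treated participants versus 8 among 2,958 controls. The resulting p-value is above the conventional 5% threshold.

```python
from scipy import stats

# Pooled all-cause deaths from the four trials (totals quoted below):
# treatment: 25 deaths out of 5,647; control: 8 deaths out of 2,958
table = [[25, 5647 - 25],   # treatment: [deaths, survivors]
         [8, 2958 - 8]]     # control:   [deaths, survivors]

# One-sided test of whether deaths are more likely in the treatment arm
odds_ratio, p_value = stats.fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p-value = {p_value:.3f}")
```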
The Bayesian analysis
One of the great benefits of Bayesian analysis compared to classical statistical testing is that we can draw meaningful conclusions from very limited amounts of data (such as in this case the limited number of trials and small numbers of, or even zero, deaths). Unlike the classical approach we can infer a probability of the truth of a hypothesis; in this case, it means we can infer a probability that the mortality rate for those receiving the new treatment is higher than that of those not receiving the treatment.
Our analysis computed the Bayesian mortality rates (and other associated statistics) in two ways:
- Using a naïve approach where all the studies are pooled, as if they were from one single ‘big’ study
- By applying a meta-analysis to take account of variation between studies (in Bayesian parlance this is called a hierarchical model)
Pooled approach
The pooled approach assumes we observe a total of 5,647 treatment participants, of whom 25 died, and 2,958 control participants, of whom 8 died. We then ‘learn’ the respective population mortality rate distributions for the treatment and control groups. Assuming uniform priors for the mortality rates and a Binomial likelihood for the observed deaths, from the results of Bayesian inference we learn:
- the mortality rate, p, per 10,000 treatment participants is a distribution with median value 47 and a 95% confidence interval of [31, 67]
- the mortality rate, q, per 10,000 control participants is a distribution with median value 29 and a 95% confidence interval of [14, 53]
This results in a ‘risk ratio’ (the former divided by the latter, p/q) with a median value of 1.6 and a 95% confidence interval of [0.77, 3.66]. Because the lower bound of the confidence interval is below one, the difference in the mortality rates is not considered to be ‘significant’; but from our analysis we infer that there is an 89.4% probability that the treatment mortality rate, p, is higher than the control mortality rate, q.
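By way of illustration, the pooled calculation can be reproduced approximately with a few lines of Python, assuming uniform Beta(1, 1) priors and the conjugate Beta-Binomial update described above (the exact tool used for the figures quoted here is not specified, and small differences can arise from the choice of priors and summary statistics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pooled counts from the four trials (as given above)
deaths_t, n_t = 25, 5647   # treatment arm
deaths_c, n_c = 8, 2958    # control arm

# With a uniform Beta(1, 1) prior and a Binomial likelihood, the posterior
# for each mortality rate is Beta(1 + deaths, 1 + survivors)
p = rng.beta(1 + deaths_t, 1 + n_t - deaths_t, size=1_000_000)
q = rng.beta(1 + deaths_c, 1 + n_c - deaths_c, size=1_000_000)

print("median p per 10,000:", np.median(p) * 10_000)
print("median q per 10,000:", np.median(q) * 10_000)
print("median risk ratio p/q:", np.median(p / q))
print("95% interval for p/q:", np.percentile(p / q, [2.5, 97.5]))
print("P(p > q):", np.mean(p > q))
```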
However, this pooled approach is only meaningful if we can assume that the four trials involve participants sampled from identical populations. As we have previously implied, this is unreasonable because in each case the cohorts were very different.
Meta-analysis approach
This means we have to take a much more conservative meta-analysis approach that recognises the differences between the cohorts and involves learning additional ‘hyper-parameters’, to account for this extra uncertainty, rather than just the single mortality rate parameters. Specifically, we assume that the learnt mortality rate distributions for each study are each Beta distributions, whose two parameters are themselves unknown (but can be estimated). With this approach we still use Binomial assumptions to learn, ‘locally’ for each study, the probability that the treatment group mortality rate is higher than the control group mortality rate. But the meta-analysis approach has the advantage that we can also estimate ‘global’ mortality rates, for treatment and control, that explain the mortality rates across all of the studies (plus what we might expect to see from an unreported, hypothetical or future study).
This meta-analytic approach to Bayesian inference enables us to learn the relevant hyper-parameters from the variation within and between the individual trial mortality rate probability distributions. However, there is a price to pay: there is much more uncertainty in the final results, because they include the uncertainty arising from the differences in the populations across the trials. Hence we regard this approach as conservative.
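By way of illustration, a hierarchical Beta-Binomial model of this kind can be set up in Python with PyMC roughly as follows. The exact priors and software behind the figures reported here are not specified in this article; the function name, the half-normal hyper-priors and the per-trial count arrays below are illustrative assumptions only.

```python
import pymc as pm

def hierarchical_mortality_model(deaths_t, n_t, deaths_c, n_c):
    """Hierarchical (meta-analysis) Beta-Binomial model.

    deaths_t, n_t: per-trial deaths and participants in the treatment arms
    deaths_c, n_c: per-trial deaths and participants in the control arms
    (taken from the trials table above).
    """
    with pm.Model() as model:
        # Hyper-parameters of the Beta distributions from which each
        # trial's mortality rate is assumed to be drawn (illustrative,
        # weakly informative hyper-priors)
        alpha_t = pm.HalfNormal("alpha_t", sigma=10)
        beta_t = pm.HalfNormal("beta_t", sigma=2000)
        alpha_c = pm.HalfNormal("alpha_c", sigma=10)
        beta_c = pm.HalfNormal("beta_c", sigma=2000)

        # 'Local' per-trial mortality rates
        p = pm.Beta("p", alpha=alpha_t, beta=beta_t, shape=len(n_t))
        q = pm.Beta("q", alpha=alpha_c, beta=beta_c, shape=len(n_c))

        # Binomial likelihood for the observed deaths in each arm
        pm.Binomial("obs_t", n=n_t, p=p, observed=deaths_t)
        pm.Binomial("obs_c", n=n_c, p=q, observed=deaths_c)

        # 'Global' rates: what we would expect to see in an unreported,
        # hypothetical or future trial drawn from the same hyper-priors
        p_new = pm.Beta("p_new", alpha=alpha_t, beta=beta_t)
        q_new = pm.Beta("q_new", alpha=alpha_c, beta=beta_c)
        pm.Deterministic("risk_ratio", p_new / q_new)

    return model
```

Sampling from a model like this yields posterior distributions for the ‘local’ rates p and q in each trial and for the ‘global’ rates p_new and q_new, which are the kinds of quantities summarised below.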
Under this meta-analysis model the probabilities that the individual trial treatment group mortality rates are greater than the control group mortality rates are respectively:
- Melody: 89%
- Medley: 84%
- Merck 1: 72%
- Merck 2: 85%
From the meta-analysis the overall probability that the mortality rate is higher in the treatment group than the control group, for any trial, is 75%. This is because the overall learnt probability distribution for mortality rates is highly skewed and uncertain, specifically:
- the mortality rate, p, per 10,000 treatment participants is a distribution with median value 54 with 95% confidence interval [13, 150]
- the mortality rate, q, per 10,000 control participants is a distribution with median value 29 with 95% confidence interval [0, 111]
- This results in a ‘risk ratio’ (p/q) with median value of 1.86 and 95% confidence interval [0.26, 30].
From this meta-analysis model we can also infer that there is a median of 24 additional deaths per 10,000 infants receiving the treatment compared with those not receiving it (although there is great uncertainty around this number, as the 95% confidence interval is [0, 124]).
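By way of illustration only (this continues the hierarchical sketch given earlier, and the per-trial counts would come from the trials table, which is not reproduced in this excerpt), the excess-deaths figure corresponds to the posterior distribution of the difference between the ‘global’ rates:

```python
import numpy as np
import pymc as pm

# deaths_t, n_t, deaths_c, n_c are per-trial arrays from the trials table
model = hierarchical_mortality_model(deaths_t, n_t, deaths_c, n_c)

with model:
    idata = pm.sample(draws=2000, tune=2000, target_accept=0.95, random_seed=1)

# Additional all-cause deaths per 10,000 treated infants
excess = (idata.posterior["p_new"] - idata.posterior["q_new"]) * 10_000
print("median excess deaths per 10,000:", float(excess.median()))
print("95% interval:", np.percentile(excess.values, [2.5, 97.5]))
print("P(p_new > q_new):", float((excess > 0).mean()))
```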
This chart compares the risk ratio distributions (the ratio p/q) for each of the trials and for the meta-analysis:
The green line shows the value “1”: this is the risk ratio you would expect to see if the treatment and control mortality rates were equal (i.e. p = q, hence p/q = 1). The greater the area under a given curve to the right of this ‘safety threshold’, the higher the treatment mortality rate is compared to the control, and the greater the signal that the treatment is unsafe.
Note that this conservative meta-analysis model results in a confidence interval for the risk ratio that is wider than that of the pooled estimate, but despite this the probability that p is greater than q is still 75%.
Our finding that the treatment causes higher mortality is surely a very worrying safety signal.
Conclusion
Assuming the data are accurate, an analysis of the data from the four relevant trials of the new RSV treatment clearly shows a mortality rate that is higher in those receiving the treatment than in those who do not. So, even on intuitive grounds, the ACIP decision to consider the new treatment safe for infants seems strange. However, because the total number of deaths in each study is small, classical statistical significance tests fail to show that the mortality rate in those treated is significantly higher than in those not treated.
Using Bayesian analysis rather than classical statistical testing, which enables us to infer meaningful probability conclusions from the limited data, the decision seems incomprehensible, if not untenable. We have shown that, even with the most conservative assumptions, there is a 75% probability that the mortality rate in the treatment group is higher than in the control group, and that taking the treatment would result in a median of 24 additional deaths per 10,000 infants (although we acknowledge there is great uncertainty around this number).
Whichever way we analyse these data, there is certainly no evidence that the treatment is safe. Given this, we are very surprised it has been approved.
Author: Norman Fenton