Editor’s note: Once again this article is too long for most email systems so please click on the headline to read the full piece on the Substack site.
Introduction
In my last article, I presented ten practical and material criticisms of Evidence-Based Medicine (EBM). But there are even larger metaphysical, ontological, and epistemological problems with EBM. Numerous authors make the case that EBM and evidence hierarchies elide important debates in the philosophy of medicine. In this article I will review seven philosophical debates in connection with EBM and evidence hierarchies, including:
1. Hierarchies are not how causation in science is usually constructed;
2. Evidence and interpretation are two different things;
3. The inferential gap may be unbridgeable;
4. Bayesian statistics has long since proven superior to the frequentist statistics relied on by RCTs;
5. Science can never prove hypotheses, only refute them;
6. Actual medical practice is necessarily pragmatic and different from the objectivism of EBM; and
7. Medicine is a practice not a science per se.
1. Hierarchies are not how causation in science is constructed
Several authors have noted that EBM tends to overlook the contributions of basic science (also called “bench” or “fundamental” science; I will use these three terms synonymously in this section). Bench research is defined as “any research done in a controlled laboratory setting using nonhuman subjects. The focus is on understanding cellular and molecular mechanisms that underlie a disease or disease process” (“Bench research”, n.d.). Merriam-Webster’s dictionary defines basic science as “any one of the sciences (such as anatomy, physiology, bacteriology, pathology, or biochemistry) fundamental to the study of medicine” (“basic science”, n.d.).
The CEBM evidence hierarchy lists basic science as the fifth level of evidence, below the threshold suggested by Strauss et al. (2005) and others as even worth reading. To be clear, the CEBM and other evidence hierarchies are not excluding bench science entirely from the study of medicine — they are proscribing the consideration of bench science by doctors when they make clinical decisions (presumably others, namely pharmaceutical companies and academic researchers would be free to continue with a more comprehensive approach). Excluding basic science in this way is an odd choice because basic science has always been an essential component of establishing causation.
Bluhm (2005) writes,
[B]ench [laboratory] research and clinical (epidemiological) research are intimately related. The history of epidemiology shows that advances in one of these aspects of biomedical research often depends on advances in the other; this point is particularly clear in the case of infectious diseases but is equally important for understanding chronic disease (p. 538).
Bluhm (2005) argues that EBM should move from hierarchies of evidence to “networks of evidence” in which both epidemiology and lab-based biochemistry work hand in hand (p. 535). It is a fine point as far as it goes, but it strikes me that one could push this idea of networks of evidence even further — to include the subjective wisdom of both doctors and patients as well. I will elaborate on this point later in the article.
Rawlins (2008) writes:
Hierarchies attempt to replace judgment with an over-simplistic, pseudo-quantitative, assessment of the quality of the available evidence…. Hierarchies of evidence should be replaced by accepting — indeed embracing — a diversity of approaches (p. 586).
Goldenberg (2009) argues that the degradation of pathophysiology in evidence hierarchies is unwarranted “as pathophysiology often provides more fundamental understanding of causation and is in no way scientifically inferior” (p. 180).
La Caze (2011) voices alarm that evidence hierarchies overlook the contributions of basic science. As pointed out above, basic science is usually assigned to the lower tiers of evidence hierarchies. While the assignments to the different tiers are rationalized by reference to “quality,” in fact, “proponents of EBM provide little justification for placing basic science so low in EBM’s hierarchy” (La Caze, 2011, p. 96). “Proponents of EBM urge clinicians to base decisions on the outcomes of large randomised studies rather than the mechanistic understanding of pharmacology and physiology provided by basic science” (La Caze, 2011, p. 83).
While it is true that leaders of the EBM movement such as Sackett et al. (1996) mentioned integrating the totality of evidence in early statements on EBM, in practice, evidence hierarchies have become a sorting mechanism for which studies to read (RCTs) and which evidence to ignore (everything else). Contrary to holistic approaches to medicine that recommend evaluating the totality of evidence, in EBM, “evidence from randomised studies is taken to trump evidence from lower down the hierarchy, including evidence from basic medical science” (La Caze, 2011, p. 84).
La Caze (2011), in his defense of basic science, draws attention to an issue that will be explored in greater depth below — the problem of inference from a sample population to a particular patient (La Caze refers to this as the problem of “external validity” and Upshur (2005) below refers to it as the “inferential gap”):
EBM’s account of medical evidence fails to recognise how interrelated the mechanisms of basic science and applied clinical research are. EBM is wrong to treat the different kinds of medical evidence as discrete. The problems of EBM’s account of medical evidence become especially clear when judging the relevance of clinical studies for individual patients. This is the problem of ‘external validity’. External validity is the extent to which the results of a study can be generalised to patients outside of the study, it is usefully contrasted with internal validity. Despite being well recognised, there is precious little in the EBM literature on how the problem of external validity might be overcome. This is because any reply to the problem of external validity relies on interpreting clinical research in light of basic science. EBM is left short because it lacks an account of the relation between basic science and clinical research (La Caze, 2011, p. 89).
The various branches of science and medicine usually work together as an interwoven system so it is strange for EBM to privilege one strand of the system over all others.
Theory, experiment and data are all linked; the basic science that specified, and helps to assess, the models of the experiment are part of the results of applied clinical research (La Caze, 2011, p. 94)…. Clinical research may refute basic science, but more often it refines and improves the understanding of how the mechanisms described in basic science are realised in clinical care. Just as basic science alone fails to predict patient outcomes, the statistical findings of clinical research alone fails to give direction on how the results can be applied appropriately. Rather than view basic science and the statistical findings of applied clinical research separately, considerably more progress can be made by recognizing the connections between these sources of evidence (p. 96).
The denigration of bench science is yet another example of how EBM overlooks systems while privileging certain parts and certain actors.
2. Evidence and interpretation are two different things
Upshur and Tracy (2004) point out that evidence itself does not indicate what should be done; the interpretation of evidence is key. But EBM elides this distinction between evidence and interpretation and implies that proper evidence (in their view, RCTs) is dispositive. In the process they smuggle in, without debate, a deterministic philosophy, which runs counter to actual medical practice. Upshur and Tracy (2004) take pains to correct the deterministic view of evidence that has emerged via EBM:
Evidence has distinct properties which are important to note. Evidence derived from clinical studies is provisional, defeasible, emergent, incomplete, constrained (by ethical, economic, and computational forces), collective in nature, asymmetrically distributed across health disciplines, historically limited, and influenced by markets (Upshur, 2000) (Upshur and Tracy, 2004, p. 201).
But actually applying the evidence requires a different set of skills:
The art of the practice of medicine is to be learned only by experience. It is not an inheritance. It cannot be revealed. Learn to see, learn to hear, learn to feel, learn to smell, and know that by practice alone you can become expert (Osler, 1968, cited in Upshur and Tracy, 2004, p. 202).
Upshur’s views on evidence also show up in this correspondence with Gupta (2003):
…Upshur (personal communication) states that evidence itself does not constitute truth; rather, evidence plays a role in determining what is believed to be true. He points to the legal notion of evidence as a comparison. In a court case, ‘evidence’ is used to support various theories of what actually happened during a crime. One of these theories is ‘discovered’, on the basis of the available evidence, to be most likely to be true. The selection of evidence to support conclusions is negotiated and debated and is affected by social and other forces such as power, coercion, and self-interest of one negotiator, or group of negotiators, vis-à-vis another. These forces may have an impact on which conclusions or theories are ultimately selected as most likely to be true (Gupta, 2003, p. 116).
Gupta (2003) continues:
[E]vidence is a status conferred upon a fact, reflecting, at least in part, a subjective and social judgment that the fact increases the likelihood of a given conclusion being true. For any given set of phenomena, there may be many available facts that could count as evidence for more than one conclusion or theory. However, only some facts will be deemed as evidence for one successful conclusion or theory, which itself is chosen from among several options. Thus, evidence is not, as EBM implies, simply research data or facts but series of interpretations that serve a variety of social and philosophical agendas (p. 116).
If one is looking to solve complex problems in medicine, the relationship between evidence and interpretation matters enormously. If one is just looking to sell profitable drugs, that relationship is not as important. The fact that EBM has not adequately addressed the fundamental distinction between evidence and interpretation is troubling indeed.
3. The inferential gap may be unbridgeable
Upshur (2005) points out that the sample population used in trials is often quite different from the actual population that uses a particular treatment. Doctors are expected to extrapolate from a sample population to their particular patient — but Upshur (2005) argues that such inference (sometimes also referred to as extrapolation) is more problematic than it would appear. He writes:
Clinical research evidence in the form of RCTs and meta-analyses provides at best a provisional warrant — that is, drug X may work, not drug X will work. The probability of successful treatment with the assortment of agents available varies dramatically; there is a wide range of ways of framing these benefits, but there is no such thing as a treatment that works every time. Consequently, there is nothing in any way directive about such evidence and nothing inevitable about a p value or confidence interval: the evidence does not tell a physician or a patient what to do and has no compelling epistemic or moral force (p. 483).
Upshur (2005) shows that medicine faces an irreducible problem in that average outcomes in RCTs do not indicate what treatment is appropriate for the individual patient. But EBM as currently constructed ignores this “inferential gap.” He writes:
[M]eta-analysis and RCTs use statistical techniques to calculate outcome measures that are average values distributed across populations. There seems to be an irreducible problem around the heterogeneity of treatment effects (Kravitz, Duan, and Braslow 2004)…. [H]aving an average-value outcome in no way directs what one ought to do for any individual patient. Knowing that, on average, drug X is better than placebo for condition Y, does not tell you that drug X is going to work for a particular patient with condition Y. This still leaves a possibility for misapplication. One could believe that drug X is the appropriate manoeuvre for patient Z for condition Y, only to find that, in actuality, it has either no effect, or even harmful effects (p. 488).
This inferential gap is unlikely to ever be bridged because there is infinite variety in the human population so responses to medical interventions will always vary as well. Bayesian statistics might help narrow the gap a bit (see next section) as it allows one to continually refine estimates as new evidence becomes available (conditional probabilities that affect the prior probability of the hypothesis). But even with Bayesian statistics, the best one can come up with are probabilities, not the deterministic thinking of EBM. These are extraordinary debates at the core of the philosophy of medicine — and it is exactly these sorts of debates that EBM proponents circumvent in making RCTs the sole tool for clinical decisions.
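To make the Bayesian point concrete, here is a minimal sketch in Python, with invented numbers, of how a conditional update works under an assumed Beta-Binomial model: a trial-derived prior estimate of a drug’s success rate is revised as outcomes from a different patient population arrive. Nothing here refers to any real drug or trial; the result is still only a probability, never a guarantee for the individual patient.

```python
# Hypothetical illustration: updating belief about a drug's success rate
# as individual patient outcomes arrive (Beta-Binomial conjugate model).
# All numbers are invented for demonstration.

def beta_update(alpha, beta, successes, failures):
    """Return the posterior Beta parameters after observing new outcomes."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    """Posterior mean estimate of the success probability."""
    return alpha / (alpha + beta)

# Start from a trial-derived prior: e.g. 60 responders out of 100 patients.
alpha, beta = 60, 40
print(f"prior estimate: {beta_mean(alpha, beta):.2f}")    # 0.60

# A particular clinic then observes 2 successes and 6 failures in its
# own (different) patient population; the estimate shifts accordingly.
alpha, beta = beta_update(alpha, beta, successes=2, failures=6)
print(f"updated estimate: {beta_mean(alpha, beta):.2f}")  # 0.57
```

Even after the update, the output is a revised probability conditioned on local evidence, not a prediction of what will happen to the next patient.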
4. Bayesian statistics has long since proven superior to the frequentist approach relied upon by RCTs
Worrall (2002) is withering in his critique of the over-reliance on RCTs within evidence-based medicine. The picture he paints is a battle between frequentist statisticians and Bayesian statisticians (and philosophers) over the epistemological basis of EBM. He notes that frequentist statisticians have elevated RCTs to the top of the evidence hierarchy as a sort of panacea for overcoming research bias but that such a ranking is not warranted by the evidence.
Worrall (2002) writes that three arguments have traditionally been used in favor of randomization:
- “The Fisherian Argument from the Logic of Significance testing” — namely that frequentist tests of significance, by definition, require randomization in order to be valid, “so that any given individual in the trial had the same probability of landing in either group” (p. 321);
- The belief that “randomization controls for all variables known and unknown” (p. 321); and
- Randomization controls for selection bias (p. 324).
Worrall (2002) makes the case that none of these arguments withstands close examination.
Ronald Fisher was an English statistician whose insights helped to create modern statistical science (Hald, 1998). Fisher argued that randomisation was the only means by which “the validity of the test of significance can be guaranteed” (1947, in Worrall, 2002, p. 321).
Worrall (2002) responds to this line of reasoning by writing:

- “[F]irst… it is not in fact clear that the argument is convincing even on its own terms [citing Lindley, 1982, and Howson and Urbach, 1993];”
- “secondly… there are, of course, many — not all of them convinced Bayesians — who regard the whole of classical significance-testing as having no epistemic validity, and hence who would not be persuaded of the need for randomisation even if it had been convincingly shown that the justification for a significance test presupposes randomization” (p. 321).
- He continues (citing Dennis Lindley, 1982) that “there are indefinitely many possible confounding factors” so it is, in fact, not possible to control for all variables known and unknown as frequentists claim (p. 324).
- Furthermore, while blinding of the clinician via randomization does help control for selection bias, it is simply one of many ways to achieve this methodological goal (p. 325).
In 2008, Michael Rawlins gave the annual Harveian Oration at the Royal College of Physicians of London, where he challenged many tenets of EBM. He stated his view that, “Decisions about the use of therapeutic interventions, whether for individuals or entire healthcare systems, should be based on the totality of available evidence. The notion that evidence can be reliably or usefully placed in ‘hierarchies’ is illusory” (Rawlins, 2008, p. 579). It was a direct challenge to a healthcare system increasingly designed around the use of EBM and evidence hierarchies. As part of his address, he also politely sided with the Bayesians over the frequentists:
A growing number of statisticians (Ashby, 2006) believe that the solution to many of the difficulties inherent in the frequentist approach to the design, analysis and interpretation of RCTs is the greater use of Bayesian statistics. This notion of probability — subjective or inverse probability — is the likelihood of a hypothesis given some data. Thus, while the frequentist approach is about the probability of some data conditional on a specific hypothesis (usually the null hypothesis), the Bayesian approach is the reverse (i.e. the probability of a hypothesis conditional on the data) (p. 581).
But he notes that “regulatory authorities have sometimes been hesitant to concede that Bayesian approaches may have advantages” (Berry et al. 2005, in Rawlins, 2008, p. 582).
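Rawlins’s contrast between the two notions of probability can be shown with a toy calculation. The sketch below uses hypothetical numbers, not drawn from any real trial, and inverts a likelihood into a posterior via Bayes’ theorem, which is exactly the “reverse” direction he describes.

```python
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """P(hypothesis | data) via Bayes' theorem."""
    numerator = p_data_given_h * prior
    evidence = numerator + p_data_given_not_h * (1.0 - prior)
    return numerator / evidence

# Frequentist quantity: P(data | hypothesis), e.g. the observed trial data
# are four times as likely if the drug works (0.8) than if it does not (0.2).
# Bayesian quantity: P(hypothesis | data), given a prior belief of 0.30.
print(round(posterior(prior=0.30, p_data_given_h=0.8, p_data_given_not_h=0.2), 2))  # 0.63
```

The frequentist machinery stops at the likelihood; the Bayesian calculation carries it through to what clinicians actually want, a probability attached to the hypothesis itself.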
5. “Science can never prove hypotheses, only refute them…” (The Popperians vs. EBM)
Eyal Shahar (1997) similarly challenges the epistemic basis of EBM — but from a Popperian perspective. Shahar, a doctor and epidemiologist, sees EBM as an end run around complicated epistemological issues that some scientists would rather not discuss. He writes: “‘evidence-based medicine’ is at best a meaningless substitute for ‘medicine’ and, at worst, a disguise for a new version of authoritarianism in medical practice” (Shahar, 1997, p. 110).
He continues:
[I]t is quite simple to argue logically against the use of the term ‘evidence-based medicine’, if evidence means biomedical science. At least one school of logical thought submits that scientific work can never prove or even ‘nearly prove’ scientific hypotheses but can only, in principle, falsify them (Popper 1968; Agassi 1975; Miller 1982). Scientific hypotheses — and medical hypotheses are no exception — are forever conjectures about the truth. They might be conjectures that have survived many tests and have attracted a large crowd of believers, but that does not change their permanent conjectural status (Shahar, 1997, p. 110).
For Popperians, the problem with EBM goes beyond the general and technical problems noted by others. Rather the problem is that the inductive method relied upon by EBM (inferring from a clinical trial to a particular patient) is not a valid methodology. Shahar (1997) writes,
Inductive procedures — that is, inferring from the observed to the unobserved — are always illogical (Popper 1968; Popper & Miller 1987) and they are just as illogical in the case of a clinical trial…. (1) no number of successful trials provides logical support to the theory that treatment A is always superior to placebo and (2) no number of ‘negative’ trials provides logical support to the theory that treatment A is never superior to placebo. As Popper and others have relentlessly argued: inductive logic does not exist. It is impossible to construct a system of inductive logic (Miller 1982) (p. 111).
While Upshur (2005) was troubled by EBM’s leaps across the inferential gap, Shahar (1997) goes further by arguing that this gap can never be closed completely. Popperians similarly reject the frequentist assumptions that underlie RCTs:
Statistical hypothesis testing, and especially the concept of ‘statistical significance’, have been subject to devastating criticism with almost no rebuttal (Rothman 1986a; Gardner & Altman 1986; Poole 1987; Goodman & Royall 1988; Oakes 1990; Schervish 1996). If anyone insists on rules for statistical interpretation of the results of a test (e.g. P < 0.05), he [sic] should be reminded that there are no such logical rules — neither in physics nor in clinical research (Shahar, 1997, p. 111-112).
Shahar (1997) argues that given a heterogeneous population, personalized medicine is the only logically justified approach to evidence:
The best empirical experience, at least in the case of chronic stable medical conditions, should be provided by a randomized, double-blind, cross-over trial that includes only one patient: the patient in question (Guyatt et al. 1986). No literature search for evidence is superior to an ‘n-of-1 trial’ whenever feasible. Interestingly, evidence-based medicine is a poor product of several of the scientists that should be praised for introducing the n-of-1 trial methodology into clinical practice in the 1980s [most notably Guyatt]. Why contributors to both themes failed to realize that literature-based evidence and the n-of-1 trial methodology do not thrive together is an enigma to me (p. 114).
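For readers unfamiliar with the design, the n-of-1 cross-over trial Shahar endorses can be sketched in a few lines. This is a hedged illustration with simulated data and an assumed treatment effect, not Guyatt’s actual protocol: one patient alternates between randomly ordered treatment and placebo periods, and the within-patient differences are what count as evidence.

```python
import random

# Hypothetical sketch of an n-of-1 cross-over trial: one patient alternates
# randomly between treatment and placebo periods, and we compare symptom
# scores within that single patient. All data here are simulated.

random.seed(0)

def simulate_period(on_treatment):
    """Simulated symptom score (lower is better) for one period."""
    base = 6.0
    effect = -2.0 if on_treatment else 0.0  # assumed true effect for THIS patient
    return base + effect + random.gauss(0, 0.5)

# Randomize the order within each of 5 treatment/placebo pairs.
pairs = []
for _ in range(5):
    order = random.sample(["treatment", "placebo"], 2)
    scores = {arm: simulate_period(arm == "treatment") for arm in order}
    pairs.append(scores["treatment"] - scores["placebo"])

mean_diff = sum(pairs) / len(pairs)
print(f"mean within-patient difference: {mean_diff:.2f}")  # negative => benefit
```

The evidence produced concerns the patient in question and no one else, which is precisely why Shahar regards it as the only logically defensible design.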
For Shahar (1997) EBM is an illegitimate attempt to elide the uncomfortable realities of the uncertainty that comes with medical practice:
One may ask how doctors are to make medical decisions in the presence of permanent uncertainty. The answer is very simple: on the basis of some interpretation of empirical experience — a subjective exercise with no universally accepted logical rules (p. 115).
Shahar (1997) concludes by writing:
Whenever someone waves the flag of evidence-based medicine in your face, demand a straightforward answer to the following question: whose evidence is the evidence in evidence-based medicine? (p. 116)
Based on the evidence presented in Chapter 5 of my doctoral thesis we already have the answer to Shahar’s question. Practically speaking, the evidence generated by pharmaceutical companies through their contracts with overseas (usually Chinese) CROs, written up by ghost writers employed by pharmaceutical companies, and published in scientific journals that often have their own financial conflicts of interest, is the evidence that EBM tells doctors to rely upon. EBM is a corporate takeover of medicine by stealth with only a handful of critics raising questions about its troublesome context and implications.
6. The Pragmatists vs. EBM
The pragmatic school in the philosophy of medicine also takes exception to what they call the objectivist ontology of EBM. Pragmatism is defined as “a philosophy emphasizing practical applications and consequences of beliefs and theories, that the meaning of ideas or things is determined by the testability of the idea in real life” (pragmatism, n.d.). Objectivism is defined as “one of several doctrines holding that all reality is objective and external to the mind and that knowledge is reliably based on observed objects and events” (objectivism, n.d.).
The irony is that EBM sees itself as a pragmatic movement. But Goldenberg (2009) argues that EBM is actually an objectivist philosophy.
As DeVries and Lemmens (2006) argue, “evidence” is a social product, influenced by the variable power and authority held by different stakeholders (patients, medical researchers, hospital administrators, clinicians, policy makers, etc.) in producing and determining the parameters for what counts as evidence. The displacement of these normative considerations in favor of technical and methodological considerations like the criteria of best evidence or scientific rigor is regarded as ethically suspect (Goldenberg 2005). While evidence-based approaches are concerned with finding the best evidence (according to their predefined standards) to answer research and treatment questions, the critics ask the challenging question: whose evidence is setting the standard of best practice (Harari 2001; Shahar 1997; Stewart 2001; Walsh 1996; Witkin and Harrison 2001)? (Goldenberg, 2009, p. 170).
Goldenberg (2009) argues that EBM’s objectivist tendencies make it ill-suited to the demands of day-to-day medicine.
[T]he term objectivity carries considerable epistemic weight in science and other knowledge-pursuing activities. It has been described allegorically as a “figure cast in stone, standing in our cultural pantheon among symbols of divine knowledge” (Burnett 2008). Objectivity’s typical association with such equally powerful concepts as reality, truth, and reliability further emphasizes its cognitive might. Yet this objectivist ontology, where the evidence “speaks” and reliable knowledge follows, presents an occupational hazard to (actual) medical practice. Subjective content muddies up even the most rigorous evidence-based practice by the inescapable layers of interpretation and sociocultural influence that enter in the setting of research agendas (including what projects gets funded and why), the production of evidence in primary research, and the selection of which evidence is chosen to inform policy and practice (Goldenberg, 2009, p. 170).
Goldenberg describes a certain paradox to EBM — on the one hand its proponents appeal to a certain pure standard of objectivity (via RCTs) while on the other hand ignoring evidence that RCTs are not as objective as they seem.
EBM’s rigid and rule-based hierarchy of evidence stands in contrast to the open-ended and ad hoc style of pragmatic scientific inquiry. The hierarchy’s ranking has been explained by the EBM originators as being based on levels of certainty (Sackett et al. 1991). It stands as EBM’s point of departure from pragmatic science to a more objectivist epistemology, as the RCT’s gold standard status will be shown to be problematically upheld by various abstract commitments to the universal rigor and applicability of randomized trial methods that are not substantiated in the actual practices of health research. Instead, different health research studies call for different designs and so there is no gold standard methodology (Goldenberg, 2009, p. 174).
One of the things I find so troublesome about EBM is not its positivism or objectivism per se, but rather that it displays a certain corporate positivism and corporate objectivism. What I mean is that in EBM, (mostly) corporate-derived data is granted exclusive privileges for decision making in spite of evidence suggesting it is of low quality, while other valid but often non-corporate methods, such as observational studies or registries, are dismissed outright.
Goldenberg argues that the “absolutist search for certainty can explain the appeal and rapid uptake of EBM” (p. 181).
Paul Feyerabend (1978) has described science as being obsessed with its own mythology of objectivity and universality, while in medicine, Kathryn Montgomery (2006) has argued that medicine mis-specifies itself as a science, with an image of science that is antiquated and that does justice to neither medicine nor science. Science has also been described, again by Feyerabend among others, as a repressing ideology that started as a liberating movement. EBM reinforces these images, to a certain extent, with its objectivist account of scientific medicine and rigid hierarchy of evidence. If the hierarchy of evidence was put in place to refute skepticism and ensure certainty, it stands as an example of what Feyerabend abhorred: science making claims to truth well beyond its actual capacity. Science, the critics insist, cannot fulfill this epistemic quest for certainty. Science is at best — and is at its best — when it is recognized to be democratic, ad hoc, and fallible (Goldenberg, 2009, p. 182).
It is not that science and medicine could never be tools for liberation. It is that actually existing science and medicine in the U.S., for the most part, are monopoly capitalist science and medicine that elevate profits over the well-being of people, which puts them in conflict with their own stated methods and principles.
7. Medicine as science vs. medicine as practice
In How Doctors Think, Kathryn Montgomery (2006) argues that medicine is neither a science nor an art but a social science — specifically the development of practical reasoning which Aristotle called phronesis. She writes,
The assumption that medicine is a science — a positivist what-you-see-is-what-there-is representation of the physical world — passes almost unexamined by physicians, patients, and society as a whole. The costs are great. It has led to a harsh, often brutal education, unnecessarily impersonal clinical practice, dissatisfied patients, and disheartened physicians (citing Engel 1977, in Montgomery, 2006, p. 6).
But medicine as science just does not fit the evidence of how doctors actually do their work according to Montgomery (2006).
No matter how solid the science or how precise the technology that physicians use, clinical medicine remains an interpretive practice. Medicine’s success relies on the physicians’ capacity for clinical judgment. It is neither a science nor a technical skill (although it puts both to use) but the ability to work out how general rules — scientific principles, clinical guidelines — apply to one particular patient. This is — to use Aristotle’s word — phronesis, or practical reasoning (Montgomery, 2006, p. 5).
Montgomery (2006) calls medicine a “practice” and proceeds to reintroduce ancient wisdom into the conversation. She writes,
What is neglected by the science-art duality is medicine’s character as a practice…. Aristotle describes phronesis in the Nicomachean Ethics as the intellectual capacity or virtue that belongs to practical endeavors rather than to science. Although twenty-first-century beneficiaries of science are not much used to thinking of different kinds of rationality, phronesis or practical reasoning is nevertheless a valuable, even a familiar concept. As an interpretive, making-sense-of-things way of knowing, practical rationality takes account of context, unpredicted but potentially significant variables, and, especially, the process of change over time (p. 33).
If medicine at its best is properly thought of as a social and philosophical practice, EBM as currently taught interrupts that practice. EBM fixes medical practice to a frequentist, corporate ontology. By design, EBM quite explicitly turns off the multiple, conflicting, ever-changing ways of knowing and replaces them with a single corporatized channel, RCTs.
Conclusion
As with any successful marketing program, the words themselves are unobjectionable and pleasing: evidence-based medicine. But the actual program behind Evidence-Based Medicine™ as practiced throughout the developed world over the last thirty years is corporate and captured, runs roughshod over essential debates in the philosophy of medicine, and promotes deadly junk science as the gold standard of science. Think about this: over the last three decades, the biggest blockbuster drugs include vaccines, SSRIs and other psychopharmaceuticals, and statins. All of them were licensed using EBM rubrics. And yet none of them have been shown to have more benefits than harms in the real world. EBM has turned allopathic medicine into a Potemkin Village — a pretty façade with almost nothing of substance behind it.
The character arc of EBM is like reading a Greek tragedy or the Old Testament. A bunch of smart, seemingly well-intentioned people organized themselves to take over the practice of medicine. They wanted to make it better. It went well for a while but then hubris, greed, power, and corruption took over. Epidemiologists became a new priestly class and replaced science with dogmatism. Once unleashed, EBM became a runaway freight train. Now it is actively harming patients and destroying allopathic medicine in the name of saving it.
We need not sacrifice our dignity, common sense, and rational faculties on the altar of EBM as Guyatt and others have done. Rigged RCTs are not evidence. Corporate science is not science. We need to return to the old ways. We must let doctors be doctors again, relying on evidence, experience, and intuition — phronesis as Aristotle taught us (and Kathryn Montgomery reminds us). And we must let parents be parents again. Personal sovereignty and responsibility are the foundation of medicine and society. No financially conflicted epidemiologist in an ivory tower thousands of miles away (or heaven help us, Washington D.C.) knows what’s best for a person. The era of corporate EBM is over and the future of medicine is decentralized, N-of-1, non-corporate, non-government, person-to-person, direct primary care, based on the totality of evidence, decency, life experience, and personal values.
REFERENCES
basic science. (n.d.). https://www.merriam-webster.com/dictionary/basic science
bench research. (n.d.). https://web.archive.org/web/20181209194911/http://medical-dictionary.thefreedictionary.com/bench+research
Bluhm, R. (2005). From Hierarchy to Network: a richer view of evidence for evidence-based medicine. Perspectives in Biology and Medicine, 48(4), 535-547. https://sci-hub.se/10.1353/pbm.2005.0082
Goldenberg, M. J. (2009, Spring). Iconoclast or Creed?: Objectivism, Pragmatism, and the Hierarchy of Evidence. Perspectives in Biology and Medicine, 52(2). https://sci-hub.se/10.1353/pbm.0.0080
Gupta, M. (2003). A critical appraisal of evidence‐based medicine: some ethical considerations. Journal of Evaluation in Clinical Practice, 9(2), 111–121. https://sci-hub.se/https://doi.org/10.1046/j.1365-2753.2003.00382.x
La Caze, A. (2011). The role of basic science in evidence-based medicine. Biology and Philosophy, 26(1), 81-98. https://sci-hub.se/https://link.springer.com/article/10.1007/s10539-010-9231-5
Montgomery, K. (2006). How doctors think: Clinical judgment and the practice of medicine. Oxford University Press. https://global.oup.com/academic/product/how-doctors-think-9780195187120
objectivism. (n.d.). Farlex Partner Medical Dictionary. (2012). https://www.thefreedictionary.com/objectivism
pragmatism. (n.d.). Farlex Partner Medical Dictionary. (2012). http://medical-dictionary.thefreedictionary.com/pragmatism
Rawlins, M. (2008, December). De Testimonio: on the evidence for decisions about the use of therapeutic interventions. Clinical Medicine, 8(6). http://doi.org/10.7861/clinmedicine.8-6-579
Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996, January 13). Evidence based medicine: What it is and what it isn’t. British Medical Journal, 312(7023), 71–72. https://doi.org/10.1136/bmj.312.7023.71
Shahar, E. (1997). A Popperian perspective of the term ‘evidence-based medicine’. Journal of Evaluation in Clinical Practice, 3, 109-116. https://sci-hub.se/10.1046/j.1365-2753.1997.00092.x
Upshur, R. E. G. and Tracy, C. S. (2004, Fall). Legitimacy, Authority, and Hierarchy: Critical Challenges for Evidence-Based Medicine. Brief Treatment and Crisis Intervention, 4(3), 197-204. http://doi.org/10.1093/brief-treatment/mhh018
Upshur, R. E. G. (2005, Autumn). Looking for rules in a world of exceptions: reflections on evidence-based practice. Perspectives in Biology and Medicine, 48(4), 477-489. https://sci-hub.se/10.1353/pbm.2005.0098
Worrall, J. (2002, September). What Evidence in Evidence-Based Medicine? Philosophy of Science, 69(S3), S316-S330. https://sci-hub.se/http://doi.org/10.1086/341855
Blessings to the warriors. 🙌
Prayers for everyone fighting to stop the iatrogenocide. 🙏
Huzzah for everyone building the parallel society our hearts know is possible. ✊
In the comments, please let me know what’s on your mind.
As always, I welcome any corrections.
Author: Toby Rogers