In his recent book “The Drunkard's Walk: How Randomness Rules Our Lives,”1 Leonard Mlodinow describes how humans are notoriously bad at, and often even averse to, the straightforward use of data and probability in making daily judgments. This characteristic is not restricted to certain educational levels, sexes, or professions. Despite its image of being scientifically based, the actual application of evidence in medicine is, like a drunkard's walk, quite haphazard and inconsistent. Social scientists have long documented that new medical products and practices disseminate into health care more because of power and money than scientific evidence.2 In the more than 3 decades since the formal development and teaching of “evidence-based medicine” (EBM), the amount of evidence routinely incorporated into various practice types (complementary or conventional) and settings has varied wildly. In 1991, it was estimated that approximately 15% of medical interventions were supported by solid scientific evidence.3 A more recent summary of the percentage of decisions in various medical specialties that follow the rules of EBM ranges from 11% to 70%.4 These are hardly ringing endorsements of medicine as science. Subspecialists and inpatient practices tend to be better grounded in evidence, possibly because they have a narrower focus and because the “best” evidence comes from patients with more homogeneous problems entered into randomized controlled trials (RCTs), rather than from the more complex patients seen in general practice.5
Jonas WB. Scientific Evidence and Medical Practice: The “Drunkard's Walk”. Arch Intern Med. 2009;169(7):649–650. doi:10.1001/archinternmed.2009.4