
Nonsense on Stilts: How to Tell Science from Bunk
Ebook · 504 pages · 9 hours


About this ebook

“This crash course in critical thinking . . . includes handy rules for evaluating the confused public discourse on climate change, evolution, and even UFOs.” —Discover

Recent polls suggest that fewer than forty percent of Americans believe in Darwin’s theory of evolution, despite it being one of science’s best-established findings. Parents still refuse to vaccinate their children for fear it causes autism, though this link has been consistently disproved. And about forty percent of Americans believe that the threat of global warming is exaggerated, including many political leaders.

In this era of fake news and alternative facts, there is more bunk than ever. But why do people believe in it? And what causes them to embrace such pseudoscientific beliefs and practices? In this fully revised second edition, noted skeptic Massimo Pigliucci sets out to separate the fact from the fantasy in an entertaining exploration of the nature of science, the borderlands of fringe science, and—borrowing a famous phrase from philosopher Jeremy Bentham—the nonsense on stilts. Presenting case studies on a number of controversial topics, Pigliucci cuts through the ambiguity surrounding science to look more closely at how science is conducted, how it is disseminated, how it is interpreted, and what it means to our society. The result is in many ways a “taxonomy of bunk” that explores the intersection of science and culture at large.

Broad in scope and implication, Nonsense on Stilts is a captivating guide for the intelligent citizen who wishes to make up her own mind while navigating the perilous debates that will shape the future of our planet.

“Brilliant . . . required reading for, well, everyone.” —New Scientist
Language: English
Release date: Oct 5, 2018
ISBN: 9780226496047
Author

Massimo Pigliucci

Massimo Pigliucci is K. D. Irani Professor of Philosophy at the City College of New York and the author of, among other books, How to Be a Stoic: Using Ancient Philosophy to Live a Modern Life (2017).



Reviews for Nonsense on Stilts

Rating: 3.83 out of 5 stars

27 ratings · 4 reviews


  • Rating: 3 out of 5 stars
    With respect to the subtitle of how to tell science from bunk, Pigliucci sadly fails to make his case. Sure, there are extensive side trips into differentiation between examples of science and pseudoscience (i.e., evolution vs creationism/ID), but only to illustrate a very specific point rather than a general approach, and he really doesn't address the "how" - for the most part he merely contrasts and states one is science and the other is not. Too easy for proponents of bunk to pick apart (and too easy for proponents of intelligent reason and science to pick apart.)

    There is one rambling chapter on logic, philosophy, and the history of science that does nothing to achieve the subtitle. Description of inductive vs deductive schools of thought (a waste of a chapter focusing on philosophical BS) without application is rather silly. This is not to say that Pigliucci is not right, rather, the knowledge conveyed is useless.

    A common theme, he offers no real tools to apply to generic instances. This is a collection of sometimes interesting, if verbose, observations about specific contrasts. And a standard pattern of this book is a disjointed soft segue into something purportedly connected to the preceding chapter, followed by an Emeril-ish BAM! of a not quite sequitur sidebar.

    One of the better questions posed was whether one needed to be an expert in both science and pseudoscience to be able to tell the difference, but as per the previous chapters, Pigliucci quickly diverged to explaining what constituted an "expert" rather than answer the question. While claiming to provide a way to tell science from bunk in the last chapter, he actually is giving a way to tell bunk "experts" from possible scientific experts. That has value. But not how he thinks.

    Mostly unsatisfying, and sadly so. But he gets an extra star for skewering and roasting Deepak Chopra. That was fun. If short.

    2 people found this helpful

  • Rating: 4 out of 5 stars
    As the title indicates, the main focus of this is supposedly on the "demarcation problem," that is, how to draw the line between valid science and pseudoscience. I'm not entirely sure how well it achieves that goal, as a lot of what Pigliucci discusses doesn't seem to me to work towards answering that question in any clear and immediate way, although he does tie it all together reasonably well, if very briefly, in his concluding chapter. I also wouldn't say that this is the book I'd recommend for an in-depth discussion of pseudoscience and its characteristics. Most of his specific coverage of that topic consists of some very cursory case studies of individual pseudoscientific beliefs, although his chapter on the famous Dover case (in which a school board attempted to mandate the teaching of creationism in the guise of "intelligent design") is good. (He also has a decent, if perhaps slightly dated, discussion of global warming, although I'm not entirely sure about his choice to, effectively, present his arguments about the subject in the form of a pair of book reviews.)

    Where this book is really worthwhile, though, is in the chapters on the philosophy of science -- unsurprisingly, perhaps, considering that this is Pigliucci's main field of interest. These chapters include a thought-provoking examination of the difference between "hard" and "soft" sciences, a history of scientific thinking from the time of the ancient Greeks, and a critique of the extreme postmodernist view of science as having nothing to do with objective reality. I found these chapters fascinating, and very much appreciated Pigliucci's nuanced and thoughtful approach in discussing the powers and the limitations of science as a human endeavor.

    Rating: Despite some unevenness, I'm giving this one a 4/5. If this is a topic you're interested in, the worthwhile parts are very worthwhile.
  • Rating: 4 out of 5 stars
    This book is relevant to everything going on in society today, especially as concerns the public perception and interpretation of scientific news and advances. Today you don't have to look hard to find someone peddling some new miracle cure, or claiming that they've discovered some new secret to the universe. And much of this is presented right along with the latest in scientific discoveries on your favorite news channel. But how do you tell the difference between what's really true, and what's utter bunk? That's what this book is all about; it lays out the different criteria one can use to determine what is science, and what is pseudoscience, or even outright nonsense (with or without stilts).

    The book is broken up into several sections, starting with an examination of the differences between science, fringe science, and pseudoscience. It then examines the discussion of science within the media and how the outreach of some scientists to the media has fared. It further explores the history of science and the relationship it has with philosophy, as well as the future need for the philosophy of science. It ends with a look at what criteria we the public can use when trying to determine someone's expertise in a particular area of science, in order to determine who we should trust when evaluating claims.

    In general the book's writing style was engaging and keeps you moving along, as opposed to other similar books that tend to be a bit dry and sometimes difficult to get through; there is even a bit of wit and humor thrown in from time to time to keep things light. The topics are handled without the need for a heavy knowledge of science and I found it to be highly accessible for the general public. If you find yourself struggling to determine which claims are correct, and which are not, among the many news venues available, this book is a good remedy for that.
  • Rating: 5 out of 5 stars
    With such a great title, I sort of expected this book to disappoint. It didn't. The author delivers a solid critique of pseudoscience and irrational thinking, explaining in the process how science is done and giving a good argument for philosophy and history, as well. There are a couple of weak spots, particularly in the chapter on scientism where he makes some pronouncements without any real evidence, but for the most part, it is well thought out, well researched, and well written. It should be a vital part of the library of every thinking person (and is needed even more in the libraries of those persons who tend not to think too much).

Book preview

Nonsense on Stilts - Massimo Pigliucci

Nonsense on Stilts

How to Tell Science from Bunk

Second Edition

MASSIMO PIGLIUCCI

THE UNIVERSITY OF CHICAGO PRESS

CHICAGO AND LONDON

The University of Chicago Press, Chicago 60637

The University of Chicago Press, Ltd., London

© 2010, 2018 by The University of Chicago

All rights reserved. Originally published 2010

Second Edition 2018

Printed in the United States of America

27 26 25 24 23 22 21 20 19 18    1 2 3 4 5 6

ISBN-13: 978-0-226-49599-6 (paper)

ISBN-13: 978-0-226-49604-7 (e-book)

DOI: https://doi.org/10.7208/chicago/9780226496047.001.0001

Library of Congress Cataloging-in-Publication Data

Names: Pigliucci, Massimo, 1964– author.

Title: Nonsense on stilts : how to tell science from bunk / Massimo Pigliucci.

Description: Second edition. | Chicago : The University of Chicago Press, 2018. | Includes bibliographical references and index.

Identifiers: LCCN 2018014122 | ISBN 9780226495996 (pbk. : alk. paper) | ISBN 9780226496047 (e-book)

Subjects: LCSH: Pseudoscience—History. | Science—History.

Classification: LCC Q172.5.P77 P54 2018 | DDC 001.9—dc23

LC record available at https://lccn.loc.gov/2018014122

This paper meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).

Contents

Introduction   Science versus Pseudoscience and the Demarcation Problem

CHAPTER 1   Frustrating Conversations

CHAPTER 2   Hard Science, Soft Science

CHAPTER 3   Almost Science

CHAPTER 4   Pseudoscience

CHAPTER 5   Blame the Media?

CHAPTER 6   Debates on Science: The Rise of Think Tanks and the Decline of Public Intellectuals

CHAPTER 7   From Superstition to Natural Philosophy

CHAPTER 8   From Natural Philosophy to Modern Science

CHAPTER 9   The Science Wars I: Do We Trust Science Too Much?

CHAPTER 10   The Science Wars II: Do We Trust Science Too Little?

CHAPTER 11   The Problem and the (Possible) Cure: Scientism and Virtue Epistemology

CHAPTER 12   Who’s Your Expert?

Conclusion   So, What Is Science after All?

Notes

Index

INTRODUCTION

Science versus Pseudoscience and the Demarcation Problem

The foundation of morality is to . . . give up pretending to believe that for which there is no evidence, and repeating unintelligible propositions about this beyond the possibilities of knowledge. So wrote Thomas Henry Huxley, who thought—in the tradition of writers and philosophers like David Hume and Thomas Paine—that we have a moral duty to distinguish sense from nonsense. That is also why I wrote this book. Accepting pseudoscientific untruths, or conversely, rejecting scientific truths, has consequences for all of us, psychologically, financially, and in terms of quality of life. Indeed, as we shall see, pseudoscience can literally kill people.

The unstated assumption behind Huxley’s position is that we can tell the difference between sense and nonsense, and in the specific case that concerns us here, between good science and pseudoscience. As it turns out, this is not an easy task. Doing so requires an understanding of the nature and limits of science, of logical fallacies, of the psychology of belief, and even of politics and sociology. A search for such understanding constitutes the bulk of this book, but the journey should not end with its last page. Rather, I hope readers will use the chapters that follow as a springboard toward even more readings and discussions, to form a habit of questioning with an open mind and constantly demanding evidence for whatever assertion is being made by whatever self-professed authority—including, of course, yours truly.

The starting point for our quest is what mid-twentieth-century philosopher of science Karl Popper famously called the demarcation problem. Popper wanted to know what distinguishes science from nonscience, including, but not limited to, pseudoscience. He sought to arrive at the essence of what it means to do science by contrasting it with activities that do not belong there. Popper rightly believed that members of the public—not just scientists or philosophers—need to understand and appreciate that distinction, because science is too powerful and important, and pseudoscience too common and damaging, for an open society to afford ignorance on the matter.

Like Popper, I believe some insight into the problem might be gleaned by considering the differences between fields that are clearly scientific and others that just as clearly are not. We all know of perfectly good examples of science, say, physics or chemistry, and Popper identified some exemplary instances of pseudoscience with which to compare them. Two that he considered in some detail are Marxist theories of history and Freudian psychoanalysis. The former is based on the idea that a key to understanding history is the ongoing economic struggle between classes, while the latter is built around the claim that unconscious sex drives generate psychological syndromes in adult human beings.

The problem is, according to Popper, that those two ideas, rather paradoxically, explain things a little too well. Try as you may, there is no historical event that cannot be recast as the result of economic class struggle, and there is no psychological observation that cannot be interpreted as being caused by an unconscious obsession with sex. The two theories, in other words, are too broad, too flexible with respect to observations, to tell us anything interesting. If a theory purports to explain everything, then it is likely not explaining much at all. Popper claimed that theories like Freudianism and Marxism are unscientific because they are unfalsifiable. A theory that is falsifiable can be proven wrong by some conceivable observation or experiment. For instance, if I said that adult dogs are quadrupedal (four-legged) animals, one could evaluate my theory by observing dogs and taking notes on whether they are quadrupeds or bipeds (two-legged). The observations would settle the matter empirically. Either my statement is true or it isn’t, but it is a scientific statement—according to Popper—because it has the potential to be proven false by observations.

Conversely, and perhaps counterintuitively, Popper thought that scientific theories can never be conclusively proven because there is always the possibility that new observations—hitherto unknown data—will falsify them. For instance, I could observe thousands of four-legged dogs and grow increasingly confident that my theory is right. But then I could turn a corner and see an adult two-legged dog: there goes the theory, falsified by one negative result, regardless of how many positive confirmations I have recorded on my notepad. In this view of the difference between science and pseudoscience, then, science makes progress not by proving its theories right—because that’s impossible—but by eliminating an increasing number of wrong theories. Pseudoscience, however, does not make progress because its theories are so flexible. Pseudoscientific theories can accommodate any observation whatsoever, which means they do not actually have any explanatory teeth.

Most philosophers today would still agree with Popper that we can learn about what science is by contrasting it with pseudoscience (and nonscience), but his notion that falsificationism is the definitive way to separate the two has been severely criticized. A moment of reflection reveals that Popper’s solution is a bit too neat to work in the messiness of the real world. Consider again my example of the quadrupedal dogs. Suppose we do find a dog with only two legs, perhaps because of a congenital defect. Maybe our bipedal dog has even learned to move around by hopping on its two hind legs, like kangaroos do (in fact, such abnormal behavior has occasionally been observed in dogs and other normally quadrupedal mammals; just google bipedal dog to see some amusing examples). According to Popper, we would have to reject the original statement—that dogs are quadrupeds—as false. But as any biologist, and simple common sense, would tell you, we would be foolish to do so. Dogs are quadrupeds, exceptions notwithstanding. The new observation requires, not that we reject the theory, but that we slightly modify our original statement and say that dogs are normally quadrupedal animals, though occasionally one can find specimens with developmental abnormalities that make them look and behave like bipeds.

Similarly, philosophers agree that scientists do not (and should not) reject theories just because they initially fail to account for some observation. Rather, they keep working at them to see why the data do not fit and how the theory might be modified in a sensible manner. Perhaps the new observations are exceptional and do not really challenge the core of a theory, or maybe the instruments used by scientists (whether sophisticated telescopes or simple gauges) are giving incorrect readings and need to be fixed—many things may give rise to discrepancies. This idea of modifying statements to accord with new data sounds reasonable on the face of it—until we recognize the risk of slipping back into precisely the sort of nonscience Popper was worried about: if we can tweak our theories at will as new observations come in, we end up with something not really that different from Freudianism or Marxism.

So, on the one hand, a strict reliance on falsificationism entails the risk of throwing away the baby with the bathwater: too much good science would fail the falsification test. On the other hand, if we allow ourselves the luxury of changing theories to accommodate whatever new observations come our way, we are not doing science at all but, rather, engaging in a sterile exercise of rationalization. What are we to do? In this book, we will search for a sensible conception of science that will still allow us to tell the difference between it and bunk. In the process, we will see that not all sciences are cast from the same mold: some rely heavily on experiments, while others are more historical in nature and require the attitude and methods of a forensic detective. We will examine areas of inquiry in the borderlands between science and nonscience, then jump across that fuzzy boundary and look at claims the philosopher Jeremy Bentham would have labeled, in his famous phrase, nonsense on stilts, meaning a really, really tall order of nonsense.

Our voyage to seek that truth will lead us not just from one science to another but to broader discussions of how science is presented and misrepresented in the media, where science and society interact with profound effects on each other. We will briefly look at the tortuous intellectual history of science since its dim beginnings in ancient Greece, and we will examine issues surrounding the modern practice of science and the widespread acceptance of pseudoscience: what it means to be a scientist and intellectual in contemporary society; what critics of science have to say about its limits, and why they matter; and even the very idea of experts, in science or otherwise. This exploration will expose us to basic ideas in philosophy of science and to the tools of critical thinking, tools that will be useful in a variety of instances, safeguarding us against being duped by scientists, pseudoscientists, journalists, or politicians. We will see that there is no simple answer to Popper’s demarcation problem, but that there are answers nonetheless. Given the power and influence that science increasingly has in our daily lives, it is important that we as citizens of an open and democratic society learn to separate good science from bunk. This is not just a matter of intellectual curiosity, as it affects where large portions of our tax money go—and in some cases whether people’s lives are lost as a result of nonsense.

Before the fun begins, however, I need to thank many people without whom this book would have taken a lot longer to write and would not have been as good as I hope it is. Josh Banta, Oliver Bossdorf, Michael De Dora, Gillian Dunn, Mark Jonas, Phil Pollack, and Raphael Scholl graciously commented on drafts of several chapters, indubitably increasing their clarity and readability. My editor for the first edition, Christie Henry, was enthusiastic and highly supportive of the project from the beginning. A number of people inspired me, either through direct discussions or because their own writings have been influential on my thinking. In particular, Skeptical Inquirer editor Ken Frazier has stirred me to think like a positive skeptic, and my philosopher colleague Jonathan Kaplan has enormously augmented my understanding of issues in philosophy of science. It was reading astronomer Carl Sagan when I was a young student that instilled in me a passion both for science and for writing about it. And while we are talking about people I never met (and never will), the eighteenth-century philosopher David Hume is one of my role models, not just for the clarity and audacity of his thinking, but because he appreciated the good things in life well enough to deserve the nickname le Bon David from French members of the Enlightenment. Countless, and often nameless, readers of my blog (platofootnote.org) have challenged both my ideas and the form in which I express them, which has been very useful indeed in writing this book. I am deeply thankful to them all.

CHAPTER ONE

Frustrating Conversations

Aristotle, so far as I know, was the first man to proclaim explicitly that man is a rational animal. His reason for this view was one which does not now seem very impressive; it was, that some people can do sums.

—Bertrand Russell, A History of Western Philosophy

The greatness of Reason is not measured by length or height, but by the resolves of the mind. Place then thy happiness in that wherein thou art equal to the Gods.

—Epictetus, Discourses

I am a professional scientist (an evolutionary biologist) as well as a philosopher (of science). You could say that I value rational discourse and believe it to be the only way forward open to humanity. That’s why I wrote this book. But I don’t want readers to be under the mistaken impression that one can simply explain, say, the difference between astronomy (a science) and astrology (a pseudoscience), pepper it with a few references to Karl Popper and the demarcation problem (see Introduction), and be done with it. Human beings may be rational animals, as Aristotle famously said, but often enough our rationality goes straight out the window, especially when it comes into conflict with cherished or psychologically comforting beliefs. Before we get into the nuts and bolts of science and pseudoscience, therefore, I’d like to recount a couple of conversations that will serve as cautionary tales. They may help us calibrate our expectations, dispelling the illusion that we can quickly and effectively dispatch nonsense and make the world a more reasonable place. That is the ultimate goal, but it is harder than one might think.

The first example recounts a somewhat surreal discussion I had with one of my relatives—let’s call him Ostinato—about pseudoscience (specifically, the nonexistent connection between vaccines and autism), conspiracy theories (about the 9/11 attacks on New York’s Twin Towers), politics, and much, much more. Of course, I should have known better than to start the discussion in the first place, especially with a relative who I knew subscribed to such notions. Blame it on the nice bottle of Aglianico wine we had been sharing during the evening.

The pattern of Ostinato’s arguments is all too familiar to me: he denied relevant expertise (you know, scientists often get it wrong!), while at the same time vigorously—and apparently oblivious to the patent contradiction—invoking someone else’s doubtful expertise (the guy is an engineer!). He continually side-tracked the conversation, bringing up irrelevant or unconnected points (an informal logical fallacy known as a red herring) and insisting we should look beyond logic, whatever that means. The usual fun. I was getting more and more frustrated, the wine was running out, and neither I nor Ostinato had learned anything or even hinted at changing our mind. Why was I not persuading him? There must have been something I was missing.

It was at that point that another of my relatives, observing the discussion and very much amused by it, hit the nail right on the head. He invited me to consider whether Ostinato was simply confusing probability with possibility. I stopped dead in my tracks, pondered the suggestion, and had a Eureka! moment. That was exactly what was happening. Pretty much all of Ostinato’s arguments were along the lines of you say so, but isn’t it possible that . . . or but you can’t exclude the possibility that . . . And of course he was right. It is possible (though very, very unlikely) that the 9/11 attacks were an inside job. And no, I cannot categorically state that vaccines never, ever cause autism. But so what?

I changed strategy and explained to Ostinato that he was racking up a number of rhetorical victories, nothing of substance. Yes, I conceded, it is true that for most things (in fact, for any statement that is not mathematical or purely logical) there is always the possibility one is wrong. But usually we don’t make decisions based on possibility; instead, we use the much more refined tool of probability (estimated to the best of our abilities).

I tried to make the point by drawing two diagrams, like this:

[Two diagrams, not reproduced here: a flat probability distribution and a sharply peaked one.]

The graphs illustrate two hypothetical probability distributions for a set of events, with the probability estimate on the vertical axis and the type of event on the horizontal one. The top diagram represents my relative’s view of the world: he is acting as if all events have equal probability. Not literally—he does understand that some outcomes are more likely than others—but in practice, since he considers mere logical possibilities, however improbable in reality, to be worthy of the same attention as outcomes that are much more likely. It would be as if you asked someone to join you for dinner, and she replied, in all seriousness, I’d love to, assuming the earth doesn’t fall to an alien attack before then. The lower diagram, by contrast, shows how the world actually behaves. Some outcomes have much higher probabilities than others, and the resulting distribution (which doesn’t have to take the shape I drew, obviously) is far from flat. Aliens may attack earth before dinner time, but the possibility is remote, far too slight to preclude your making firm plans.

I therefore resumed my discussion with Ostinato by mentioning Enlightenment philosopher David Hume’s famous statement, to the effect that a reasonable person proportions her beliefs to the evidence,¹ a principle restated two centuries later by astronomer Carl Sagan, in the context of discussions of pseudoscience: extraordinary claims require extraordinary evidence.

A modern version of this principle is what is known as Bayes’s theorem, which we will consider in more detail in chapter 10. For now, suffice it to say that the theorem proves (mathematically) that the probability of a theory T, given the available evidence E, is proportional to two factors: the probability of observing evidence E if theory T were true, multiplied by the probability that T is true based on initial considerations (the priors).

The beauty of Bayes’s theorem is that it updates its results in a recursive fashion, as new evidence becomes available. The result one gets each time one applies the theorem is called the posterior probability and is obtained—conceptually speaking—by updating the priors in proportion to the newly available evidence. Not only that, people have proven that no matter what the initial priors are (i.e., your initial assessment of the likelihood that theory T is right), after a sufficient number of iterations the posteriors converge toward the true value of T. This makes Bayes’s theorem a formidable tool for practical decision making and, indeed, for the rational assessment of pretty much everything. As metaphor, it serves as a good guide for assessing beliefs—which, as Hume advises, should stand in proportion to the (ever changing) evidence.

I concluded my explanation to Ostinato—inspired by Bayes’s theorem and probability theory more generally—by suggesting that when we make an assessment of any given notion we are basically placing a bet. Given the best understanding I have of the vaccine-autism controversy, for instance, I bet (heavily) that vaccines do not actually cause autism. Do I know this for certain? No, because it isn’t an a priori truth of mathematics or logic. Is it possible that vaccines do cause autism? Yes, that scenario does not involve a logical contradiction, so it is possible. But those are the wrong questions. The right question is, is it likely, on the basis of the available evidence? If you had to bet (with money, or with the health of your kids), which way should you bet? I’m not sure I made a dent in the convictions of my relative, but I did my best.

As Fate wished, I had a second chance to observe how people fail to think properly a few months later, in the course of another conversation about science and pseudoscience. This exchange lasted days, on and off on social media, interacting with someone I’ve never met and likely never will meet. The range of topics this time was much narrower than with Ostinato, and far closer to my own areas of expertise: evolutionary biology and philosophy of science. I felt, therefore, that I really knew what I was talking about, providing not just a reasonably intelligent and somewhat informed opinion but an expert one, based on more than three decades of studying the subject matter at a professional level.

Predictably, it didn’t help. Not in the least. My interlocutor—let’s call her Curiosa—is an intelligent woman who has read a lot of stuff on evolution in particular and science more generally. She has also read several of my blog posts, watched some of my debates, and even bought one of my books on evolution. She discovered me by way of reading creationist Michel Denton’s Evolution: A Theory in Crisis, which cites me several times as a reluctant critic of evolutionary theory—one of those people who know that there is something seriously wrong with Darwinism, yet somehow can’t let go of the orthodoxy and embrace the revolution.

My actual position on the topic is easy to check online, in several places, and it boils down to this: evolutionary theory has evolved by way of several episodes, from 1859 (original Darwinism) to the 1930s and ’40s (the so-called Evolutionary Synthesis) through current times (what is known as the Extended Synthesis), and it will likely continue to do so. There is nothing wrong with Darwin’s original twin ideas of natural selection and common descent, but in the subsequent century and a half we have added a number of other areas of inquiry, explanatory concepts, and of course empirical results. End of story.

Not according to Curiosa. She explained to me that Darwinism is a reductionist theory, apparently meaning something really bad by that term. I explained that reductionism is a successful strategy throughout the sciences and that when it is properly done, it is pretty much the only game in town to advance our knowledge of the world. It really amounts to saying that the best way to tackle big problems is by dividing them into smaller chunks and addressing one chunk at a time, properly aligning small pieces of the puzzle until the full picture comes back into view.

But, countered Curiosa, how do you then explain the bacterial flagellum? This was a reference to Darwin’s Black Box, a notion advanced by intelligent design creationist Michael Behe. You know, Behe is a scientist! With a PhD!! Working at a legitimate university!!! How do you explain that, Professor Pigliucci?

Well, I said. If you wish I can walk you through several papers that have proposed likely, empirically based scenarios for the evolution of the bacterial flagellum. As for Behe himself, you will always find legitimate academics who position themselves outside of the mainstream. It’s a healthy aspect of the social enterprise we call science, as we will see later in this book. Occasionally, some of these people range far from consensus opinion, into territory that is highly questionable, or even downright pseudoscientific. Some consider themselves rebels or mavericks. Some tend to put their ideology (usually religious, but sometimes political) ahead of reason and evidence. The latter is the case for Behe, a fervent Catholic who simply can’t wrap his mind around the conclusion that life originated and differentiated through purely natural means, no gods required.

Ah!, continued Curiosa, if that’s the case, how come there is so much disagreement among scientists about evolution, and even the origin of life? Well, I replied, let’s begin by distinguishing those two issues. First, there is not widespread disagreement about Darwinism among evolutionary biologists. Pretty much all professionals I know accept the idea. There is disagreement, but it is over the shape of the current theory, just as in other disciplines. Physicists, too, disagree on cutting-edge questions—but not about Newton or even Einstein.

Second, the reason there are indeed many theories about the origin of life, and truly no consensus, is that the information available is not sufficient for us to zero in on one or a small subset of hypotheses. (We’ll talk about this in chapter 2.) We don’t have, and likely never will have, fossils documenting what happened at the onset of life. The historical traces are, unfortunately, forever erased, which means that our ideas about those events will remain speculative. Even if we are one day able to recreate life from scratch in a laboratory, we will have no guarantee that the path we followed under controlled conditions was the one historically taken by nature on our planet. But so what? Science never promised to answer every question, only to do its best. Sometimes its best is not good enough, and the wise thing to do is to accept human epistemic limitations and move on.

Not at all satisfied, Curiosa shifted topic again: haven’t you heard about Roger Penrose’s quantum mechanical explanation of consciousness? Doesn’t that imply that consciousness is everywhere, that it is a holistic property of the universe? Hmm, I said, with all due respect to Sir Roger (a top-notch scientist), I doubt physicists have a clue about consciousness, which so far as I can see is a biological phenomenon, whose explanation is hence best left to biologists. Besides, I told her, beware of any explanation that invokes quantum mechanics for anything other than quantum phenomena, even when proffered by a credentialed physicist like Penrose. At any rate, I concluded, even if Penrose is right, what does that have to do with Darwinism and its alleged failures?

I think you get the idea. My exchanges with Curiosa continued, becoming increasingly frustrating and eventually downright useless, until I politely pointed out that we were going in circles and perhaps it was time to call it a day.

What did I learn from this exchange? A number of things, none of them boding well for the advancement of rational discourse and public understanding of science. But we need to face reality for what it is—it is the rational thing to do.

First, let me remind you that Curiosa is a smart, well-read, and genuinely curious person. Second, because she reads widely, she is exposed not only to what I write—and what truly eminent evolutionary biologists like Stephen Jay Gould write—but to fluff put out by the Behes and Dentons of the world. And she has no way to discriminate, since all of these people have PhDs and affiliations with reputable universities. (This is an issue of trust and expertise, the topics of chapter 12.)

Third, while we always assume that knowledge is an unqualified good, it turns out that a bit of knowledge may do more harm than complete ignorance. When someone as intelligent as Curiosa thinks she understands enough to draw conclusions, she will not hesitate to do so, rejecting expert opinion outright in the name of making up her own mind as an independent thinker. When this has to do with the status of evolutionary theory, not much harm is done. But when it has to do with, say, climate change or the safety of vaccines, that’s an altogether different, and far more dire, story.

Fourth, Curiosa has fallen for the well-known technique of spreading doubt about mainstream science, to the extent that people genuinely cannot make up their minds about what is going on. This was the deliberate strategy of the tobacco industry in its absurd (and lethal, for many people) denial of a link between smoking and cancer, well described in the book and documentary Merchants of Doubt. The same approach has been used by other partisans to sow doubts about climate change, vaccines, and so forth. And of course it has also been the main strategy behind the so-called intelligent design movement.²

Fifth, and rather ironically, Curiosa has absorbed and internalized the vocabulary of skeptical (i.e., pro-science) organizations, accusing me and others of perpetrating all sorts of logical fallacies, a convenient shortcut that saves her the trouble of actually engaging with my arguments. For instance, when I pointed out—reasonably, it seemed to me—that Discovery Institute fellow Jonathan Wells is a member of Sun Myung Moon’s Unification Church, and that his antipathy toward evolution is entirely ideological in nature, I was accused of committing an ad hominem attack. When I pointed out plenty of reliable sources on evolutionary theory, I was demonstrating confirmation bias. And so on.

Lastly, Curiosa’s spirited discussion with me was clearly fueled by her pride in taking on Big Science and its Orthodoxy, in favor of open-mindedness and revolution. She saw herself as David, and I was the Goliath to be slain.

I’m afraid there is little I or anyone else can do for the Curiosas of the world. If—and it’s a big if—they ever manage to get their heads clear about what is and is not legitimate science, they will have to do it on their own, painfully and slowly. The resources are readily available, at their disposal (this book being one of them). But they often have no psychological incentive to do so.

What can and ought to be done instead is to act at two levels. To engage in public outreach aimed at those who are not as far gone as Curiosa, hoping to retain them and even strengthen their resolve to support sound science. And to do a far better job than we do now with the next generation. It is children we should target—as our antagonists know well. It is no coincidence that creationists write lots and lots of books for the young. But there is little incentive for scientists and science popularizers to do so, because children’s literature is seen as somehow inferior to that aimed at adults (even though it is arguably harder to pull off), and because we won’t see the results for decades. (The reason I don’t write children’s books is that I’m honestly not good at it. It’s a whole different skill set and much harder than it looks.) Science, and reason in general, thus remains—in the beautiful metaphor proposed by Carl Sagan—like a candle in the dark. Our urgent job is to prevent its being snuffed out by the forces of darkness.

That said, both Ostinato and Curiosa have a point, and it is important to acknowledge it: they intuitively understand the impossibility of arriving at absolutely certain knowledge, something that human beings have been after for quite some time, without success. It is always possible—and even reasonable!—for them to doubt a given scientific conclusion, especially one that they happen to dislike or disagree with, no matter how much evidence supports that conclusion.

One of the earliest demonstrations that certainty isn’t something human beings can reasonably aspire to was given by the ancient skeptics. Sextus Empiricus, for instance, articulated the argument back in the second century CE.³ But there is a more modern version that is easily understood. It relies on three alternative paths to certain knowledge, each of which is shown to be a dead end.⁴ Let’s say someone tells us that something is absolutely true. We are, of course, well within our rights to ask him how he knows that. He can answer in one of three ways:

1. A circular argument, where at some point the theory and the alleged proof support each other, however indirectly.

2. An argument from regression, in which the proof relies on a more basic proof, which in turn relies on an even more basic proof, and so on, in an infinite regress.

3. An axiomatic argument, where the proof stems from a number of axioms or assumptions that are not themselves subject to proof.

It should be clear why none of these options is good enough if one’s objective is to arrive at certainty, because none provides an ultimate justification for the claimed knowledge. And I should immediately add that these are the only three modes available, not just in the case of deductive logic (which means most of mathematics), but also in the case of inductive inference (which means the rest of math and all of scientific as well as common knowledge)—in other words, pretty much all of what we count as human knowledge.

There are, of course, different ways of biting the bullet, and they correspond to some of the major schools of what in philosophy is known as epistemology, the discipline that deals with theories of knowledge.

Say you find the first option (circularity) the most palatable—or the least distasteful. Then you are what is known as a coherentist, arguing that knowledge is like a web of interconnected beliefs, each reinforcing the others. If instead you prefer infinite regression, you are, not surprisingly, an infinitist (which as far as I know is not a popular position among epistemologists). Finally, if your taste agrees more with the idea of unproven axioms, then you are a foundationalist, someone who thinks of knowledge as built, metaphorically, like an edifice, on foundations (which, by definition, cannot be further probed).

What if none of the above does it for you? Then you can go more radical. One approach is to be a fallibilist, that is, someone who accepts that human knowledge cannot achieve certainty but believes we can still discard certain notions because they have been shown to be false—as in Popper’s falsifiability criterion (which we encountered in the Introduction).

Popper himself, who wrote in detail about this stuff, opted for a mixed approach: he thought that a judicious combination of foundationalism (i.e., assuming certain axioms), regress, and perceptual experience is the best we can do, even though it falls short of the chimera of certainty.

The impossibility of certain knowledge does not imply that we cannot make objective statements about the world, nor that we are condemned to hopeless epistemic relativism. The first danger is avoided once we realize that—given certain assumptions about whatever problem we are focusing on—we can say things that are objectively true. Think, for instance, of the game of chess. Its rules (i.e., axioms) are entirely arbitrary, invented by human beings out of whole cloth. But once the rules are agreed upon, chess problems do admit of objectively true solutions (as well as of a large number of objectively false ones). This shows that arbitrariness is not equivalent to a lack of objectivity.

The second danger, relativism, is preempted by the fact that some solutions to any given problem do work better than others (whatever the criterion for working is). It is true that engineers have to make certain assumptions about the laws of nature, as well as accept the properties of the materials they use as raw facts. But it is equally true that bridges built in a particular way stay up and function properly, while bridges built in other ways have a nasty tendency to collapse. Not all bridges are created equal, and not all have the same probability of collapsing.

In sum, it looks like the quest for certainty, which has plagued both philosophy and science since the time of Plato, is doomed to failure. Admitting this honestly could perhaps help us with the Ostinatos and Curiosas of the world. How? Let’s start by taking a fresh look at Aristotle’s writings on rhetoric. Rhetorical is an adjective used, these days, most often to describe empty or deceptive speech. And we will encounter plenty of examples of that type of rhetoric in this book. But according to Aristotle, rhetoric is simply the science of persuading people of the likely truth of reasonable propositions (as opposed to convincing them to do whatever is convenient for you, the goal of advertising or political campaigning).

Aristotle wrote that there are three interconnected aspects of a good rhetorical argument: logos, ethos, and pathos. Logos means getting one’s facts and arguments right, as much as it is possible for a human being to do. After all, we are not trying to sell snake oil. But facts and logic are not enough. To persuade people, one also needs the other two components. Ethos is about establishing one’s credibility with the audience. That’s why so many book authors make sure the reader appreciates that they have a PhD in whatever discipline they are writing about. Or why your doctor prominently displays diplomas from medical schools in her office. Finally, there is pathos, the ability to connect emotionally with the target audience. I must emphasize here that Aristotle is not talking about emotional manipulation, which is obviously unethical. He is simply saying that you have to make people care about what you are saying.

Suppose someone wants to talk to a group of climate change skeptics. To begin with, the speaker must make sure that she has the facts and logic straight, insofar as it is possible (logos). Next, she needs to have credentials that bear on the topic of discussion. Being a generic scientist is not enough; she needs to be an atmospheric physicist, or something close to it (ethos), because climate science is a highly specialized and technical field, like quantum mechanics or evolutionary biology. Lastly, she has to explain that her audience, their children, their towns, and their businesses are all going to be affected by forthcoming changes in the climate (pathos). The issue, in other words, is not just academic. It affects people’s lives. Too many scientists think they only need logos, and even disdain the other two components of a good rhetorical argument, especially pathos. This is a mistake that needs to be corrected. Perhaps, then, we should reintroduce the study of rhetoric as a positive discipline in high schools and colleges. It may make the world a more caring and reasonable place to live in.

CHAPTER TWO

Hard Science, Soft Science

You know my methods. Apply them.

—Sherlock Holmes, The Sign of the Four

You can observe a lot by just watching.

—Yogi Berra

“Scientists these days tend to keep up a polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist’s field and methods of study are as good as every other scientist’s, and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants.” Candid words about the nature of the scientific enterprise as seen from the inside by a participating scientist. What makes these sentences even more remarkable is that they were not uttered behind closed doors in a room full of smoke, but printed in one of the world’s premier scientific magazines, Science.¹ It was 1964, the year I was born, and the author was John R. Platt, a biophysicist at the University of Chicago. The debate between scientists on what constitutes hard (often equated with good, sound) and soft (implicitly considered less good) science has not subsided since, and it provides us with our first glimpse into how difficult—and contentious—it is to characterize science itself.

Platt was frustrated by the fact that some fields of science make clear and rapid progress, while others keep mucking around without seeming to accomplish much. As he put it, "We speak piously of . . . making small studies that will add another
