Good Faith: Beliefs Have Consequences

Ebook · 518 pages · 6 hours


About this ebook

Beliefs have consequences. Our beliefs about life's "big questions"--Who am I? How should I act? What's my purpose for living?--impact our lives and the lives of people around us. Our answers should take into account scientific explanations of our world and our species, but answers to existential questions are matters of values, not empirical facts. Our answers are the lenses through which we observe and make sense of ourselves and our experiences, lenses developed from attitudes and assumptions absorbed from parents, friends, and cultures, and also from religions and secular ideologies. We have choices, and the lenses we choose to wear shape our day-to-day decisions and interactions. Good Faith examines the choices--various answers with their embedded assumptions and values--and assesses the likely results if people lived according to those answers. Flourishing is the criterion. Do our answers enhance or diminish well-being, for ourselves, our communities, and all humanity?
Language: English
Release date: Mar 9, 2023
ISBN: 9781666749045
Author

Roger R. Adams

Roger R. Adams is a clinical psychologist also trained in ministry. His writings have explored issues of science, morality, and faith.


    Book preview

    Good Faith - Roger R. Adams

    Chapter 1

    Religions, Passing or Failing?

    Rev. Goodman sighed as the executive team left his office. The news had not been good: attendance and giving had declined, again. Going to his desk, he was startled to see a manila envelope in the center of the blotter. It hadn’t been there before the meeting. The envelope contained only a piece of heavy paper folded in half. Rev. Goodman pulled it out and saw “Report Card” and “Elm Street Church” printed in fancy script. He smiled: someone had gone to a lot of trouble to play this joke. But then he opened the card and read, “Provides meaning for people’s lives: D.” He scanned more categories—“Shows love for all people: F” and “Reduces divisions among people: F.” Then he skipped to the signature at the bottom. “God” was written there, followed by “We need to talk about this.”

    Elm Street is not the only church due for review. Participation in mainstream religions in the US has been steadily declining and the percentage of people who say they have no religious affiliation has been rising, especially among younger adults.¹ Expressing their opinions with their feet, many people are saying that religions aren’t relevant to their lives.

    Yet basic (mysterious, amazing) facts of human life haven’t changed: we are alive, we are conscious of being alive, and we know that the beginning and end of our lives are mostly outside our control. These facts still expose our mortal limits and still present fundamental questions: Who am I? Why am I here? What should I do? Does death end everything? People still need answers that will give their lives direction and purpose.

    Religions still offer such answers. So why have so many people turned away? Perhaps people have bought into the ideology that the marketplace is the answer and human life is all competition. Perhaps existential questions don’t come to mind at all because the fortunate are too comfortable or self-satisfied and the unfortunate are too busy staying afloat. Perhaps people distrust religions because some versions have covered up sexual abuse, promoted hateful politics, and engaged in terrorism. Perhaps people feel religions are not needed (and are intellectually suspect) because science is the only legitimate source of answers.

    The most strident critics, the new atheists—Sam Harris, Richard Dawkins, and Christopher Hitchens, for example—have made a crusade of discrediting religion. Their list of allegations is long:

    1. Religion is destructive for society.

    • Religion maintains the status quo and contributes to social control by elites.

    • Religion is anti-freedom; it opposes freedom of thought, freedom of behavior, and freedom from bonds of obsolete tradition.

    • Religion creates divisions among people and promotes hatred and violence.

    2. Religion is destructive for individuals.

    • Religion undercuts self-esteem; it induces feelings of guilt and insignificance.

    • Religion plays on fear, particularly fear of death and eternal punishment.

    • Religion encourages dependency, a childish longing for Daddy’s protection and rescue.

    3. Religion is destructive for one’s mind.

    • Religion is anti-reason and anti-science.

    • Religion requires belief in impossible events.

    • Religion is based on superstition and other magical thinking.

    This book will not offer a blanket defense of religions. Perhaps to the dismay of some people of faith, I will grant that the accusations against religions contain kernels of truth—at least for some expressions of religions at some times and places. On the other hand, it is likewise true that some versions of religions have been quite beneficial for people.

    All of this leaves a faith community with a threefold task: rebut the accusations, clarify the value of its message, and demonstrate relevance by connecting with people’s deep longings for loving relationships, worthwhile purpose, and a sense of peace. All three pieces depend significantly on a faith community’s answers to existential questions. These answers—not dogmatic formulations, but answers embodied in what the faith community preaches and practices—can be good for people, or not. In the pages to follow, we will explore various types of answers proposed by religions and also by science and philosophy. We will examine positive and negative effects that different answers have on individuals and societies, and the results of the assessment will identify characteristics that differentiate between helpful and harmful versions of religions.

    Existential questions cluster into three broad topics: the natural universe, the social environment, and the meaning of our individual lives. Here’s a short list of FAQs:

    1. The natural world:

    • How did the universe come to be?

    • What is the nature of the physical world?

    • Why do events happen as they do?

    2. Fellow humans:

    • How did people come to be?

    • What is the nature of human beings?

    • How should we act toward one another?

    3. Oneself:

    • What is the purpose for my being alive?

    • Why do I have to die, and what comes after death?

    At their best, religions offer answers that ground life’s meaning in a relationship to something greater than oneself—and not just anything greater, but the one greatest, the ultimate source of life, which is the only foundation that in the end is not rendered absurd by death. Religions also draw together groups of people who share the quest for meaning, and these groups, hopefully, foster bonds of mutual caring and accountability. Moreover, religions’ potential benefits are not limited to individual congregations and their members; religions can also be helpful for larger communities—cities, regions, nations, and the whole interconnected world. Congregations can nourish social ties that hold societies together, and religions are repositories for values that affect all participants in a society, including people who don’t identify with those religions. A religion that emphasizes prosocial values—nonviolence, fairness, caring for others, and so on—encourages all people to have kinder relations with one another. And such a religion is in a position to call out a society’s failings and exhort communities to operate more humanely.

    Much has been made of the alleged conflict between science and religions, and compatibility between them will be a prominent theme going forward. In our investigation of existential questions, we will see that science can tell us a great deal about how nature works, how people came to be, how people think and feel, and how societies operate. Science is the best system available for explaining empirical facts about the universe in which we live. Besides having confidence in the methods of science, we also accept science’s explanations because they work in the real world, as evidenced by technologies derived from the explanations. We will see that some religious answers to existential questions are indeed compatible with science. Where conflict exists, it is generally due either to a failure to distinguish between empirical facts and metaphysical opinions or to unnecessary claims on each side: scientism claims that the empirical world studied by science is all there is, so that science is the only valid source of knowledge and all questions must be answered by science alone; fundamentalist religions claim that sacred writings are literally true as sources of factual knowledge about the world and history. I say these claims are unnecessary because the value and validity of science are not diminished if there are subjects (such as purpose and values) that are outside science’s purview, and the value and validity of religious faith are not diminished if scriptures are understood metaphorically rather than literalistically. Science and religious faith are not mutually exclusive; a person can choose both. A person can, with integrity and intellectual rigor, fully accept the findings of science and at the same time be committed to a religious faith.

    As we identify characteristics that make a religion helpful or harmful, we will come to an underlying issue: where do the criteria come from? On what basis can anyone say that a particular feature of a religion is good or bad? It would be nice if we could anchor the criteria in absolutely sure, logically necessary, universally accepted moral or metaphysical principles; alas, there are no such certainties. Moreover, religions (and other belief systems) tend to be self-contained. To a group of believers, their god is ultimate and their religion is god-mandated, so there can be no higher standard, no valid outside criterion for judging their religion. Despite the absence of unquestionable universal principles, we do not need to resign ourselves to relativism. We can each take a stand—and then try to persuade others.

    A faith community’s answers to existential questions flow from its understanding of the nature of Ultimate Reality (whether known as God or by another name), but this book does not judge the belief in God; rather, it assesses the answers according to their consequences for people. I hope to persuade you that the effects-on-people standard is reasonable and useful. All of us, both people of faith and nonreligious people, have a stake in the assessment, because we all are members of the same neighborhoods, cities, nations, and world.

    Before proceeding, we need to cover some preliminary points. To begin, there are religions, but not religion. One might think that, since we have a noun, religion, that covers all religions, there must be some defining characteristics that make something a religion rather than, say, a political party or a social club. If some of those essential characteristics were unacceptable, then all religions could be condemned with one stroke. But religion is a category that does not have a sharp boundary. Belief in a supernatural being is often suggested as the essence, but in fact not all religions have a god or gods, and, as we will see, even a theistic religion such as Christianity may not understand God to be a supernatural being. Religion is a category populated through similarities, not rules. The label religion is given to associations or belief systems when they have enough similarities to things already deemed to be religions, not because they pass a litmus test.

    Some scholars dismiss religion, religions, and/or world religions as meaningless terms or as artifacts of Western intellectual imperialism.², ³ However, the lesson to be learned from research into religions’ entanglement with racism or colonial domination (for example) is not that the term religion is irredeemably tainted, but that one should be responsible in using it. When people (including scholars) encounter something new and try to understand it, they refer to their existing sets of mental categories and determine whether the new thing fits or whether one or more categories must be created or adjusted. This is the natural course of learning. When a person encounters unfamiliar beliefs or practices, the same thing happens. The person tries out a category such as religion to see whether the beliefs or practices fit in, and the answer may be “yes,” “yes, but,” or “no.” The danger in “yes” is that the person may not have adequately perceived features of the new beliefs or practices, so that calling them a religion—applying the person’s prior concept of religion—may leave the person with a distorted picture. With “yes, but” the person recognizes that the prior concept is close but doesn’t quite fit, so the prior concept is altered. Perhaps the person concludes that a set of beliefs and practices can be a religion even if, for example, there is no god invoked in them. “No” might seem like the safest answer, but seeing unfamiliar beliefs or practices as needing a brand new category risks missing similarities with prior concepts, and those similarities might contribute to greater understanding. Being responsible in using the label religion requires sincere effort both to be clear about one’s own prior concepts and also to understand unfamiliar beliefs or practices in depth and from the practitioners’ own perspective.

    For our discussions we will need a term to stand roughly for religions and other systems of beliefs and practices that assist people in dealing with existential questions. Generally that term will be “religions.” This usage is not a back door through which I’m sneaking in a definition of religion; rather, it is a way to make our discussions more concise while pointing to this book’s central concerns.

    Furthermore, we will need to keep in mind that even a single religion is not one monolithic thing. Even within one sect of one tradition, there are as many varieties as there are congregations, even as many as there are individual believers. Each person has a unique understanding of the faith, developed from many sources: values taught and modeled by parents, opinions of peers, personal experiences with faith communities, one’s own reading and reflection, and so on. This obviously is the case with believers, but nonbelievers, too, have been exposed to teachings, opinions, and experiences and have arrived at their own images of religions.

    The upshot is that in the debates between religions’ detractors and defenders, all parties have a duty to be clear about their individual ideas of religions. Detractors must be specific about the teachings and practices they are criticizing, and defenders must be specific about how their teachings and practices are different. Being specific helps accusers resist the temptation to set up straw men, and it helps defenders resist glib, global denials. Being specific counteracts the tendency to talk past one another; it clarifies issues and fosters a real exchange of ideas.

    Finally, at many places in this book my point of view will be explicitly Christian, drawn in particular from mainstream Protestant Christianity, which is the religious tradition in which I have spent my life. This limitation comes not from an assumption of exclusive rightness for that tradition, but from lack of sufficient knowledge to speak in detail about other faiths. Likewise, at times I will use God language without adding a disclaimer that any anthropomorphic terms are meant metaphorically and without a note that nontheistic views of the Ultimate are included. I hope my intent will be understood even without repetitive reminders. In any case, I mean no disrespect for others’ faiths, and I urge you to compare my views with perspectives from your own tradition. I look forward to an ongoing dialogue.

    1. Pew Research Center, America’s Changing Religious Landscape.

    2. McCrary, World Religions.

    3. Masuzawa, Invention of World Religions.

    Chapter 2

    A Brief History of Reason and Faith

    People have wrestled with existential questions since long before recorded history. You might ask, how do we know what prehistoric people thought, since there were no written records? Archeological evidence suggests that Stone Age humans in the Upper Paleolithic Era (roughly forty thousand to ten thousand years ago), or possibly even earlier, developed religious practices. For example, discoveries of evidence of intentional, ritual burials—as opposed to mere disposal of corpses—indicate that early humans, as far back as one hundred thousand years ago, thought about death and attributed such significance to it that they treated the bodies of their deceased fellows with special care and customs.⁴, ⁵

    Moreover, for as long as people have been asking these questions, the answers they have sought have required both a “how” and a “why,” that is, both causal mechanisms and purposes. We humans don’t feel we really understand an event, whether birth, death, or all of creation, unless we have explanations of both the processes by which it came about and the reasons for it. Understanding seems incomplete without both. As we delve into existential questions, we will look at both. Generally we will start with “how” and rely on science for that input; then we will look at “why” interpretations suggested by religions and philosophies.

    Scientific knowledge is often taken to be the sole basis of true understanding, yet its domain is only the “how,” the mechanisms by which events happen. Purpose is beyond its purview. The outstanding success of science in describing mechanisms has led some scientists and philosophers to dismiss purpose, to claim that there are no purposes, only mechanisms. In this view, the desire for a purpose is merely a quirk of our cognitive makeup, since purpose is not an intrinsic part of reality. In opposition to this view, others have taken the position that people’s purposeful intentions are real, and if the nature of the universe is to produce beings that have intentions, then there must be some aspect of existence that includes goals and meaning. This conflict over purpose is the deepest issue in the science-religion debates. Other issues—sources of knowledge, claims of truth, and the asserted sole validity of materialism—are intertwined with the question of purpose.

    For our exploration of these issues it will be important to have a picture of how science and religion each operate, including the psychological and sociological dynamics in play for each, and also to have an appreciation for the history of the relationship between reason and faith.

    2.1. The Era of Faith over Reason

    Long before the age of science, the human faculty for conscious, deliberate thought was given preeminence among human abilities. For example, in Aristotle’s view, reason was the purpose of life, and reason was to rule over emotions, needs, and desires.⁶ And for just as long the relative positions of reason and religious faith have been contested. Impiety was among the crimes for which Socrates was executed.

    Human beings are subordinate to God, so, according to theologians over the centuries, reason should be subordinate to faith. Yet reason can be useful to faith. Theologians have often borrowed ideas from the popular contemporary philosophies in order to explain religious teachings—as when the Apologists in the second century defended the church against social and philosophical criticisms, asserted that Christianity was the one true philosophy, and elucidated Christian teachings using concepts from several Greek philosophical schools.⁷ For instance, the Apologists described the relationship between God and Jesus in terms of Logos (Greek for “word” or “reason”), particularly Logos as the Stoics understood it, a divine principle of order that upholds and infuses all of existence. That insight has aided believers and influenced theology ever since.

    Reconciling reason and faith was likewise a concern of Augustine, bishop of Hippo in Roman North Africa. He summed up his conclusions in the phrase “believe that you may understand.”⁸ But the Latin word generally translated as believe, credere, at that time meant to commit oneself to a path or purpose, rather than to give intellectual assent to a proposition. So in Augustine’s view, his faith in God came first, and he then used the power of reason to understand what that commitment meant.⁹ Augustine also cautioned against setting religion against science. He warned that if Christians quoted scripture to dispute established facts about nature, nonbelievers would scorn them and attribute their ignorance to their religion.¹⁰

    Thomas Aquinas reinterpreted Christian beliefs using Aristotelian concepts, and this new approach became the predominant theology of Western Christianity in the Middle Ages. For Thomas, belief was an act of will, but nevertheless a reasoned, reasonable act. Subsequent developers of Thomistic theology elaborated the idea of two sources of knowledge: knowledge could come from either inspired revelation or natural reason, but there could be no conflict between the two because revelation was the higher order of truth.¹¹

    As reason began to be associated less with deductive logic starting from supposedly self-evident principles and more with the observation-based theories of natural science, the Thomists’ two sources of knowledge came to be conceived slightly differently. Divine revelation remained much the same, but the second source of knowledge was not abstract philosophy, but the natural world. Nevertheless, the two sources were still seen as compatible, because God was the ultimate basis of both.

    2.2. The Nature of Science

    Science is a particular type of reasoning. The heart of reason is conscious thought, particularly deliberation leading to a judgment or decision. Roughly speaking, reason has two main tools, deduction and induction. Deduction, or a priori reasoning, is the top-down process of starting with given facts (prior assumptions) and applying rules of logic to reach a conclusion. If the premises are true and the rules of logic are applied correctly, then the conclusion must be true. Mathematics works that way. The mathematician assumes some axioms—either because they seem self-evident or because the mathematician wants to see where they would lead if they were true—and then uses logic to prove further propositions (about numbers, geometry, and so on). Of course, the truth of the conclusions depends on the validity of the assumptions, and therein lies the source of many philosophical disagreements.
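
    The classic syllogism shows the deductive pattern in miniature:

        \[
        \begin{aligned}
        &\text{Premise 1: } \forall x\,\bigl(H(x)\rightarrow M(x)\bigr) && \text{(all humans are mortal)}\\
        &\text{Premise 2: } H(\text{Socrates}) && \text{(Socrates is a human)}\\
        &\text{Conclusion: } M(\text{Socrates}) && \text{(Socrates is mortal)}
        \end{aligned}
        \]

    If the premises are true, the conclusion cannot fail to be true; what deduction alone cannot establish is whether the premises themselves are true.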

    Induction, or a posteriori reasoning, on the other hand, is a bottom-up process. From specific examples (observations, data, events, etc.) one forms general conclusions. Science is often thought to operate in this manner; after all, its conclusions are based on observations. But scientific knowledge does not arise directly through induction. Rather, science uses observations to test hypotheses (predictions) that have been deduced from theories. This point, elaborated below, is essential for understanding the strengths and weaknesses of science.

    Of course, human beings (and other animals) do learn directly from experience. This is evident, for example, when an animal returns to a location where it found food before. And from prehistoric times people knew that dogs and horses (to name just two species) could be taught to do what people wanted them to do. In the early twentieth century Ivan Pavlov demonstrated that when a bell repeatedly preceded dinner, a dog would come to salivate at the sound of the bell even in the absence of food. No conscious process was involved. Salivating in response to food was a reflexive, physiologic response for a dog. What Pavlov showed was that through repetition this reflex could be conditioned to occur in response to an otherwise neutral stimulus (the bell).

    Edward Thorndike originated the study of a different type of learning, called operant conditioning. Whereas Pavlovian or classical conditioning modified the stimuli that could elicit reflexive responses, operant conditioning modified nonreflexive behavior. Thorndike showed that a reward received after a particular behavior—say, a dog’s sitting and lifting a paw to shake hands—would strengthen that behavior (make it occur more frequently) and, likewise, a punishment would weaken the behavior. B. F. Skinner expanded the analysis of reward and punishment and explored how operant conditioning could be used with people to modify problematic behaviors.

    Conditioning, whether operant or classical, uses repeated experiences to instill unconscious learning. But human beings are able to be aware of experiences and can consciously recognize patterns of repetition. With consciousness, human beings add a new dimension to learning and become capable of inductive reasoning.

    Explanation, prediction, and the concept of causation are all derived from recognition of patterns of events: when X happens, Y regularly happens afterward. When I drop something on my foot, pain follows. Identifying such a pattern requires selection of the relevant factors out of innumerable details that could describe a situation. From one occasion to the next, neither the circumstances (X) nor the outcomes (Y) are ever exactly the same. The thing dropped might be a rock one time, a hammer another time, or even a bar of soap in the shower still another time (not to mention all the extraneous aspects of the surrounding context: the water temperature in the shower, the time of day, the noises I’m hearing, my preceding mood, and on and on). Likewise for the outcome: the pain’s location (the particular foot or toe) would vary, as would the type of painful sensations (e.g., the feeling of a cut versus an abrasion versus a bruise) and their intensity and duration, and so on. From the myriad details I identify the common elements, namely the falling and impact of objects with some degree of hardness and weight followed by an extremely unpleasant feeling in my foot. Seeing the pattern and observing that it is reliable—Y always follows X (or nearly always)—allows me to predict a bit of the future with some confidence: if X should happen, then Y will follow. My confidence in this prediction grows if I can explain why Y follows X, that is, if I can describe the causal connection between the dropped object and pain. (The bar of soap acutely compresses the flesh on the top of my toe, and the compression triggers certain nerve cells, which transmit signals to the brain, which organizes the signals into a conscious sensation of pain.)

    People employ this human talent for pattern recognition, causal explanation, and inductive prediction constantly in everyday life. However, there is no solid foundation for the underlying assumption that a pattern will continue in the future. This is the problem of induction. David Hume was not the first philosopher to note the problem, but he authored the most famous analysis of it. Hume showed that inductive assumptions cannot be verified. One might argue that the originally observed pattern occurred because the first event had the power to cause the second event (the lightning had the power to cause the thunder), and because that power is inherent in the first event, it will always have that power in the future. But the attribution of causal power to the first event is itself based on an inductive assumption. Similarly, one might argue that the inductive assumption rests on a principle, namely the principle that nature’s laws don’t change. But that principle, too, is either an unsupported postulate or is once again an assumption that depends on induction. Or one might argue that inductive assumptions are valid because they work; that is, we can trust such assumptions because experience has shown that many different types of patterns do recur as expected. But this pragmatic argument is merely a generalization of the inductive assumption from one pattern to a pattern of patterns. In short, there is no independently justified principle of induction to head off an infinite regress of inductive assumptions.¹²

    If induction cannot be verified, on what can scientific knowledge be based? In his landmark book, The Logic of Scientific Discovery, Karl Popper proposed that science was different from other types of knowledge and study by virtue of its methods, particularly its intentional efforts to falsify its theories. According to Popper, theories do not emerge spontaneously (inductively) from collections of facts or experiences. Rather, theories or hypotheses arise from the mind in the form of guesses, hunches, inspired insights, and the like. Past experiences surely play a part in preparing the mind for such leaps of creativity, but the psychological process is not one of inductive reasoning. However, the sources of theories do not matter, because the acceptance of a theory does not depend on its origin but on its ability to stand up to testing. A theory about nature cannot be verified, that is, proven to be absolutely true, but it can be proven false. It can’t be proven true, because it is impossible to observe or test all of the infinite number of cases to which the theory would apply. On the other hand, observations can contradict a theory and thereby prove it false.¹³

    But there are many theories that have not been thus falsified; are they all equally certain or uncertain? No, Popper says, because a theory can be corroborated. (Some would say “confirmed,” but Popper dislikes that term because it carries potentially misleading connotations similar to “proven.”) Moreover, among competing theories, one may be better corroborated than another. One cannot say that one theory is more probable than another, because the tests that corroborate a theory do not yield a numerical probability that the theory is true. On the other hand, one may demonstrate that a certain theory has successfully survived more rigorous testing than another theory.

    Rigorous testing consists of deliberate, concerted attempts to falsify a theory, in particular by deriving from the theory specific predictions as to what should occur under certain circumstances, and then arranging to observe what in fact happens when those circumstances are brought about. If my biochemical theory predicts that a certain new drug should kill lung cancer cells, then I can test the theory by obtaining such cells (perhaps by culturing them from samples taken in biopsies), exposing them to the drug, and seeing what percentage die, compared with the percentage that die among cells not exposed. If a much greater proportion of the treated cells die, then the observation agrees with the prediction, and the experiment corroborates the theory.
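
    Here is a minimal sketch of that comparison in code (the counts are hypothetical, invented for illustration, and Fisher’s exact test stands in for whatever statistics a real laboratory would use):

        # Hypothetical cell-culture experiment: does the drug kill more cells?
        from scipy.stats import fisher_exact

        treated_dead, treated_alive = 78, 22   # 100 cells exposed to the drug
        control_dead, control_alive = 12, 88   # 100 cells not exposed

        # Fisher's exact test: how likely is so large a difference in death
        # rates if the drug actually has no effect?
        _, p_value = fisher_exact(
            [[treated_dead, treated_alive],
             [control_dead, control_alive]],
            alternative="greater",  # the theory predicts more deaths when treated
        )

        print(f"treated death rate: {treated_dead / 100:.0%}")  # 78%
        print(f"control death rate: {control_dead / 100:.0%}")  # 12%
        print(f"p-value: {p_value:.2g}")

    A tiny p-value means the outcome agrees with the prediction and the experiment corroborates the theory; a death rate no higher among treated cells would falsify the prediction.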

    If two theories vie to explain the same phenomena, the scientist attempts to find circumstances for which the theories make different predictions as to what will occur. Then the scientist conducts an experiment by arranging those circumstances. The outcome, ideally, will then agree with one theory’s prediction and not the other’s, thus corroborating the first theory and falsifying the other. Of course, scientists do not decide for or against a theory on the basis of just one experiment. An essential rule of scientific methodology is that findings must be reproduced by other observers in order to be accepted.

    Popper pointed out that the more universal and specific a theory is (the broader the range of phenomena it explains and the more exact the predictions that can be drawn from it—both desirable qualities), the more content it has. The more content a theory has, the greater are the variety and precision of the predictions that can be deduced from it, and therefore the more opportunities there are for possible falsification. Furthermore, the more opportunities for possible falsification there are, the more rigorous the testing of a theory can be; and the more rigorous the testing is, the stronger the theory is (if it passes the tests). It might seem paradoxical that greater vulnerability to falsification would be a virtue for a theory, but consider the opposite: a theory can be impregnable to falsification only if it says nothing about the world.

    Science’s methods provide strong support for its findings. But science also has limits. The rule of falsifiability itself implies a level of uncertainty: all scientific knowledge must be regarded as tentative and approximate to some degree, because future observations might disprove current explanations. It should be noted, however, that this theoretical uncertainty does not mean that scientific knowledge is shaky or untrustworthy in an everyday sense. While no scientific theory can be proven to be absolutely true, many theories, including the theory of evolution, the theory of relativity, and quantum mechanical theory, have been tested in so many ways and have been able to predict observations so precisely that they are very solid. One can rely on them; it’s safe to live life and make plans on the assumption they are true.

    Theoretical falsifiability is based on the possibility that future observations might disagree with a current theory, but future observations have practical constraints. Observations depend to a considerable extent on resources and technology, that is, on time, money, and better instruments, and the deeper one goes into any area of science, the more expensive those necessities seem to become. Budgets limit knowledge.

    Beyond such mundane issues, however, there are additional reasons why scientific knowledge can never be absolutely complete and certain. First, the speed of light limits how much of the universe we can see. The farthest objects we could conceivably detect are only 13.7 billion light-years away (one light-year is approximately 5.9 trillion miles or 9.5 trillion kilometers), because the universe is approximately 13.7 billion years old according to recent estimates, and nothing can travel faster than light.¹⁴, ¹⁵ Furthermore, however close or distant an object may be, when we observe it, we’re seeing old news. If today we see in our telescopes a star exploding a million light-years away, that means that the explosion happened a million years ago. The light from the explosion had to travel for a million years to bring the report to us. The bottom line is that there is a whole lot going on in the universe that we don’t know about.
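
    The conversion behind that mileage figure is simple arithmetic: light travels about 186,000 miles per second, and a year contains about 31.6 million seconds, so

        \[
        1\ \text{light-year} \approx \bigl(1.86\times 10^{5}\ \text{mi/s}\bigr)\times\bigl(3.16\times 10^{7}\ \text{s}\bigr) \approx 5.9\times 10^{12}\ \text{miles}.
        \]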

    Second, the quantum nature of matter and energy places limits on knowledge. The Heisenberg uncertainty principle codifies the inherent trade-off between measuring position and measuring velocity of subatomic particles: the more precisely one locates a particle, the less precisely one can determine its velocity, and vice versa.¹⁶ Furthermore, in quantum theory, a particle is represented by a wave function (also known as a state vector), a mathematical expression that gives the probabilities that the particle, if measured, would be found to have one location or another, one spin direction or another, and so on for other observable characteristics. The probabilistic nature of the wave function is not an artifact of imprecise instruments; it is a fundamental aspect of quantum theory itself. Since position, momentum, and other characteristics of particles do not have single, definite values (only probabilities of values), the future position and momentum are not entirely predictable; they are not completely determined by the prior state. This nondeterministic picture of the universe was what Albert Einstein was objecting to when he famously wrote in a letter to Max Born, “He [God] does not play dice.”¹⁷ Quantum theory has even more startling implications for how we understand the universe, and we will explore them in the next chapter.
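
    In symbols, the uncertainty principle is usually written

        \[
        \Delta x \,\Delta p \ \ge\ \frac{\hbar}{2},
        \]

    where \(\Delta x\) is the uncertainty in a particle’s position, \(\Delta p\) the uncertainty in its momentum (mass times velocity), and \(\hbar\) the reduced Planck constant: shrinking one uncertainty necessarily inflates the other.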

    A third limit on knowledge is Gödel’s incompleteness theorem. Kurt Gödel, an Austrian mathematician, proved that any version of number theory (that is not based on contradictory assumptions) must include propositions for which one cannot determine whether they are true.¹⁸ Gödel proved his theorem by showing that number theory must contain self-referential paradoxes, a type of undecidable proposition. The proof was possible because he was able to translate such paradoxes from ordinary language into the symbol system of number theory. An example of a self-referential paradox is the liar’s paradox, often expressed as “I am lying” or “This sentence is false.” If one starts by assuming the sentence is true, one ends up with a contradiction, but the assumption that the sentence is false also leads to a contradiction, so the sentence is neither true nor false. It is undecidable within our system of language. Obviously, the fact that the sentence refers to itself is the root of the paradox. Lest you think that something is not quite legitimate about a sentence that talks about itself, the self-reference can be spread out over several statements each of which, taken one at a time, is clearly acceptable. For example, consider the following pair:

    The following sentence is false.

    The preceding sentence is true.¹⁹

    Neither sentence by itself seems to be a problem, yet they are reciprocally referential, and together they again produce the liar’s paradox.

    Gödel’s theorem showed that no formal mathematical system dealing with whole numbers could be both consistent (i.e., free of contradictions) and at the same time complete (able to prove the truth or falsity of all theorems that could be formulated within the system). But there were implications well beyond mathematics. Incompleteness undercut the ancient (and modern) belief that numbers represent properties of the real world. Incompleteness invited playing with axioms, and the results challenged our accustomed ways of thinking about lines, points, quantities, and other basic concepts. In broad strokes, it shook up intuitive ideas about numbers just as quantum physics and relativity shook up intuitive ideas about objects, identity, time, and space.²⁰

    Moreover, Gödel’s work arose from a widespread concern in the fields of mathematics and logic (during the first third of the twentieth century) regarding the nature of proof
