The Tests of Time: Readings in the Development of Physical Theory
Ebook, 1,424 pages (21 hours)

About this ebook

The development of physical theory is one of our greatest intellectual achievements. Its products--the currently prevailing theories of physics, astronomy, and cosmology--have proved themselves to possess intrinsic beauty and to have enormous explanatory and predictive power. This anthology of primary readings chronicles the birth and maturation of five such theories (the heliocentric theory, the electromagnetic field theory, special and general relativity, quantum theory, and the big bang theory) in the words of the scientists who brought them to life. It is the first historical account that captures the rich substance of these theories, each of which represents a fascinating story of the interplay of evidence and insight--and of dialogue among great minds.


Readers sit in with Copernicus, Kepler, and Galileo as they overturn the geocentric universe; observe the genius of Faraday and Maxwell as they "discover" the electromagnetic field; look over Einstein's shoulder as he works out the details of relativity; listen in as Einstein and Bohr argue for the soul of quantum mechanics in the Completeness Debate; and watch as Hubble and others reveal the history of the universe.


The editors' approach highlights the moments of discovery that rise from scientific creativity, and the presentation humanizes the scientific process, revealing the extent to which great scientists were the first to consider the philosophical implications of their work. But, most significantly, the editors offer this as their central thesis: although each was ushered in by a revolution, and each contains counterintuitive elements that delayed its acceptance, these five theories exhibit a continuous rational development that has led them to a permanent place in the worldview of science.


Accessible to the general reader yet sufficiently substantive that working scientists will find value in it, The Tests of Time offers an intimate look into how physical theory has been developed, by the brilliant people who have developed it.

Language: English
Release date: Sep 25, 2017
ISBN: 9781400889167


    Book preview

    The Tests of Time - Lisa M. Dolling

    Statile

    Philosophical Introduction

    PHILOSOPHY OF SCIENCE AND PHYSICAL THEORY

    The scientific literature presented in this book possesses a philosophical richness unparalleled in the history of science. Of the many philosophical issues touched upon in this literature, the following three are among the most significant:

    1.  The Existence of Theoretical Entities

    2.  The Problem of Scientific Discovery

    3.  The Evaluation of Scientific Theories

    Readers will assuredly profit by employing these issues as interpretive categories to guide them. Whether the things referred to by theories are real or not is relevant to both their truth and value. Further, familiarity with the creative approaches great thinkers have taken to overcome existing scientific problems, as well as with the evaluation of theories as they progress toward their final formulation, can be of immense value to both the beginning and professional student of the history and philosophy of science.

    In this introduction the issue of the existence of the entities actually referred to by the five physical theories is considered first. This is followed by a discussion of the problematics and pragmatics of scientific discovery as a creative enterprise. Finally the focus shifts to the explanatory and predictive worth of these theories, as well as to the issue of their progressive character in relation to those theories that they supersede.

    The Existence of Theoretical Entities

    What is the ontological status of the micro-objects, processes, states, fields, and other entities referred to by physical theories? Can they be viewed as in some sense real, as truly describing the world, or are they merely convenient fictions of the theory-making process? These are the kindred questions that concern us here. Furthermore, these questions are perennial. From the Forms of Plato, to the adequatio ad rem vocabulary of medieval scholasticism, to the Enlightenment metaphor of the extent to which scientific theories mirror the underlying nature of physical reality, the issue of the correspondence between theory and fact has long been at the heart of epistemological, metaphysical, and scientific concerns. At stake is the attitude we adopt toward the objects of science. This constitutes the well-known Realism vs. AntiRealism debate.

    The Heliocentric Theory

    The developmental history of the Heliocentric Theory, from the time of the establishment of geocentrism in Greek antiquity to its heliocentric transformation by Copernicus, modification by Kepler, corroboration by Galileo, and ultimate vindication by Newton, provides an ample opportunity for highlighting the issue of realism as regards theoretical entities. Our goal here is not to resolve this difficult issue, but only to examine it in the setting of specific physical theories.

    In his unsigned preface to the De Revolutionibus, Osiander reminds the reader of a lesson usually reserved for logical debates. Both a (True → True) and a (False → True) implication are to be logically construed in the same way, as true statements. That is, even if the Heliocentric Theory of Copernicus is only hypothetical, and possibly quite literally false, it can nevertheless, in principle, still accurately and truly describe astronomical appearances. The instrumental lesson here is that the truth of a theory, and the reality of the theoretical entities that compose it, are not necessary conditions of its theoretical utility. While the tools of discovery can perform a catalytic role in the development of a descriptively valid theory of reality, they themselves are not required to be rooted in the bedrock of actual fact.

    On the other hand, a number of twentieth-century philosophers have argued in favor of realism as regards theoretical entities. Hilary Putnam has stated that only a realist attitude toward theoretical entities keeps the success of science from being a miracle, while J.J.C. Smart has suggested that an antirealist stance toward theoretical entities would amount to an implicit acknowledgment that scientific laws are the result of cosmic coincidence. And yet, despite mountains of evidence in favor of heliocentrism, and its universal acceptance, no person can actually claim to have genuinely observed the entire heliocentric arrangement of the solar system in the same way that one can when it is completely reduced to pictorial form.

    Mathematical devices such as the epicycle, utilized by both Ptolemy and Copernicus, were inventions devised and used with the intent of salvaging circular celestial motions. Until proven false by Kepler and Newton, such celestial motions were deemed to represent the real motions of planets. The crystalline spheres of Aristotle, on the other hand, represented a physical attribute of geocentric theory whose purported existence could not survive precise astronomical observation. The careful scrutiny of stellar and cometary phenomena by Tycho Brahe undermined the status of the spheres as real objects, and acted as a stimulus of discovery for a new post-Aristotelian revolution in physics.

    The main problem for heliocentrism was of course not so much the centrality of the sun but the mobility of the earth. But these articles of heliocentric faith have been found to be in accord with both reason and fact, thus converging upon the reality and seeming irrefutability of the Heliocentric Theory. It should be briefly noted here that within the philosophy of science, it is not correct to speak of an unproblematic demarcation between theoretically derived entities and those that we factually observe. We will not, however, belabor this point. Let us just note that factual and theoretical statements and beliefs are interdependent and intertwined. Establishing the empirical basis of our scientific commitments, especially given the mediation of technologically assisted instruments of observation, such as the telescope and microscope, is a difficult epistemological task.

    The Copernican commitment to a moving earth theory, for example, not only allowed for a better overall account of retrograde planetary motions than did the Ptolemaic theory, but also enabled an estimation of the still problematic distances to the planets. Copernican sore spots, such as those of stellar parallax, and the purported inconsistencies with both Aristotelian physics and Scripture, have all since been eliminated. From the perspective of solar centrality, Kepler’s attempts to delineate a centripetal force emanating from the sun can also be understood as corroborative of his own realistic interpretation of the Heliocentric Theory.

    Johannes Kepler pulled no punches when it came to affirming his commitment to the reality of heliocentrism. In Astronomia Nova (1609), he declared that he had founded astronomy not upon fictive hypotheses, but upon physical causes. It was Kepler who revealed the identity of Osiander, thus indicating that it was not Copernicus who, in a moment of cowardice, had failed to realistically endorse his own heliocentric theory. Ironically, the empirically based epistemology stemming from Newton’s Principia (1687), which secured a kind of secular imprimatur for the views of Kepler and Galileo, would also serve as the cornerstone for the Humean revival of skepticism concerning the reality of causes.

    Electromagnetic Theory

    Any philosophical assessment of the theoretical entities posited by Maxwell’s Electromagnetic Theory will need to focus upon the following two theoretical concerns.

    1.  The existence of electromagnetic fields and waves, which encompass electrical, magnetic, and optical phenomena.

    2.  The existence of an ethereal medium capable of providing a physically viable means of electromagnetic transmission.

    Post-Maxwellian theoretical and experimental work, by Poynting and Hertz among others, would lead to strong support for (1). The negative result of the Michelson-Morley interferometer experiment in 1887 undermined confidence in (2), as did the later Einsteinian redeployment of classical electrodynamics around the postulate that the speed of light (c) remains the same for all inertial systems despite any discrepancies owing to relative motion between a source of illumination and an observer.

    It is not unfair to say that between the time of On Physical Lines of Force (1861)—in which the ethereal model of electromagnetic transmission is described in great detail—and A Dynamical Theory of the Electromagnetic Field (1865)—in which the equations of the electromagnetic field are presented in a way that does not rely upon the existence of an ethereal medium of transmission—Maxwell came to recognize that the existential claim of (1) need not depend upon his own individualistic description of an ethereal model (2). But while subsequent scientific work has provided strong support for the view that a realistic commitment to (2) is not a necessary condition for the theoretical efficacy of field theory (1), this does not mean that Maxwell himself took an overall skeptical position regarding the existence of some ethereal medium as necessary for the transmission of electromagnetic waves. One need only note Maxwell’s affirmation of the reality of some ethereal substance in his contribution on this subject for the ninth edition of the Encyclopedia Britannica. Neither was he alone. Albert Michelson would continue to search for the ether for the remainder of his life. And not only did Einstein not banish the ether from physics, despite its superfluity from the perspective of Special Relativity, he would later argue in favor of its existence in 1920 at the University of Leyden. In the Treatise on Electricity and Magnetism (1873), Maxwell writes: "The problem of determining the mechanism required to establish a given species of connexion between the motions of the parts of a system always admits of an infinite number of solutions."

    In his 1856 memoir regarding Faraday’s so-called lines of force, prior to the working out of his mature position, Maxwell would recommend a flexible methodological attitude as regards electromagnetic commitments. If we adhere only to the mathematical formulation of a theory and bracket its physical implications, then we entirely lose sight of the phenomena to be explained. Conversely, blind commitment to a physical hypothesis can pervert truth. For between any physical analogy and the underlying physical reality it represents, there will always be a degree of conceptual remainder. But although individual analogies were meant by Maxwell to be viewed as illustrative, not as explanatory, he persisted in viewing all energy, literally, and as mechanical in nature.

    André Ampère, whose steady-state law describing the mutual forces of current-carrying wires upon each other was eventually augmented by Maxwell to allow for the moving charges needed for the generation of electromagnetic waves, adopted an instrumentalist attitude toward the existence of underlying electromagnetic causes. Ampère thus was willing to settle for an action-at-a-distance account of electromagnetic phenomena. The problem of theoretically determining whether there exists a medium or mechanical means of continuous transmission for various phenomena had been of concern to electrical and magnetic theory since the time of the Greeks. For Coulomb and others, the inverse square relation governing the gravitational force served as a fruitful analogy for the forces governing electrical phenomena. Additionally, the accompanying Newtonian attitude, concerning the avoidance of unwarranted hypotheses in ascribing physical causes to phenomena such as gravity, provided a sort of theoretical sanction for action-at-a-distance theories. This also would have its impact on succeeding generations of scientists, including Ampère.

    Hans Christian Oersted’s 1819 discovery, however, that a current-carrying wire will cause a magnetic reaction perpendicular to the length of the wire, required a retreat from the action-at-a-distance mentality that had been on the rise since the time of Coulomb’s 1785 demonstration that an inverse square relation described the electrostatic force between separated point charges. In retrospect, with the subsequent theoretical advancement of electromagnetic theory by both Faraday and Maxwell, Ampère’s instrumental confidence in this case turns out to have been premature.

    Unlike in the case of the gravitational force, Michael Faraday argued for the actual physical existence of magnetic lines of force. These lines of force, when varied, induce electric currents. According to Maxwell, Faraday’s conception of physical lines of force provided the key that would unlock the secrets of the science of electricity. Maxwell proceeded to mathematize Faraday’s conception of lines of force into a fully articulated field theory. This theory, as exemplified in the four famous field equations of Maxwell, which encompassed all the previous advances of electrical and magnetic theory, described all electrical and magnetic phenomena in relation to a field-theoretic framework.

    Ampère’s antirealist attitude concerning the possible causal underpinnings of electromagnetic phenomena thus was no longer tenable. For Maxwell, the electromagnetic field could be identified with the dynamic or energetic conditioning of space. All electrical and magnetic phenomena are transmitted as the result of contiguous modifications of the field. In Maxwell’s mature theory, electrical, magnetic, and optical phenomena are unified under the mathematical formalism and interpretation of the field equations. In the ethereal model the field was designed as an elaborate swirling or vortical apparatus that was sensitive to specific engineering considerations, such as compressibility and heat loss, needed for the mechanical propagation of electromagnetic waves.

    At the beginning of the nineteenth century, Thomas Young, Augustin Fresnel, and others had already provided classic experimental evidence in favor of the wave nature of light, as previously staked out by Huygens in his Treatise on Light (1690). When it was shown that the wave theory of light could explain polarization if light waves were transverse, meaning that they vibrated at right angles to their direction of motion, the Newtonian paradigm as established in the Opticks (1704), which was commonly credited as holding that light was particulate in nature, had to be abandoned. Maxwell’s theory predicted that the speed of electromagnetic waves is equal to the square root of the ratio of the constants of proportionality appearing in Coulomb’s law and the Biot-Savart law for the magnetic field, respectively. That the magnitude of this speed was numerically equal to the speed of light in a vacuum was added evidence that the speculative view of the German idealist philosopher F. W. Schelling, that all physical forces were emanations of a single underlying force, was not far-fetched. That these speeds were independently obtained lent further credence to the unification of the sciences of optics and electromagnetism.
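
    The ratio Maxwell computed can be restated in modern SI terms as c = 1/√(ε₀μ₀), where ε₀ is the constant of proportionality in Coulomb’s law and μ₀ the constant in the Biot-Savart law. The following minimal Python check (using present-day values of the constants, not Maxwell’s own figures) illustrates the numerical coincidence that so impressed his contemporaries:

        # Speed of electromagnetic waves from the electric and magnetic constants (SI units).
        import math

        epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m (Coulomb's law)
        mu_0 = 1.25663706212e-6        # vacuum permeability, N/A^2 (Biot-Savart law)

        c = 1.0 / math.sqrt(epsilon_0 * mu_0)
        print(f"predicted wave speed: {c:.3e} m/s")   # ~2.998e8 m/s, the measured speed of light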

    Of significance in the evolution of the conviction that electromagnetic phenomena required something more than an action-at-a-distance approach was Faraday’s discovery of the curvature exhibited by magnetic lines of force. It was difficult to conceive of such curvilinear or nonlinear motion without a notion of force that was tied to some physically causal framework that was essentially mechanical in nature. But the consensus in favor of such a framework was by no means unanimous. Wilhelm Weber’s electromagnetic theory might still be classified in terms of action-at-a-distance, despite the fact that for Weber electromagnetic transmission was not instantaneous. Oersted’s discovery of the perpendicularity of the force exerted by a current-carrying wire upon a magnetic pole had the same implication as Faraday’s later discovery of circular lines of force, for such perpendicularity also requires an explanation of how such a twisting motion relative to the direction of the current can occur.

    Maxwell’s field equations implied the existence of electric and magnetic waves that are orthogonally oriented to each other as well as to their direction of motion, and which propagate through space at the incredibly fast but finite speed of light. Such waves are the net result of interactions between changing electric and magnetic fields. Hence both the oscillation and the acceleration of electric charges are crucial to the generation of electromagnetic waves.

    The year 1887 was a significant one for electromagnetic theory. At approximately the same time that Michelson and Morley were failing to detect the luminiferous ether, Heinrich Hertz was beginning to provide experimental confirmation of the existence of Maxwell’s electromagnetic waves. He determined that certain oscillating currents generate electromagnetic waves. Such waves, furthermore, possessed all the characteristics of light: reflection, refraction, interference, polarization, a finite speed nearly equivalent to that of light, and so on, except for visibility. Moreover, these waves could be distinguished from their electromotive source or cause, for such forces persist in space even after the causes which have given rise to them have disappeared. Indisputably, the experimental results of Hertz inspired a professional confidence toward a realistic interpretation of the electromagnetic waves implied by the field equations of Maxwell. It was only a short time until these Hertzian waves would become better and more popularly known as radio waves.

    The Theory of Relativity

    Maxwell’s theory of electromagnetic phenomena provided a kind of classical closure to the longstanding debate over whether such phenomena were to be viewed as substantial in themselves, as properties of some substance, or as constituents of some kind of energetic process. Einstein’s Special and General Theories of Relativity revisited a metaphysically related problem, that of the well-known Newton-Leibniz debate concerning the substantival reality (Newton) vs. the relationality (Leibniz, Berkeley, Mach) of space and time. It is upon this relativistic theme that we will focus our attention.

    In the theory of Special Relativity (SR), as articulated by Einstein in 1905, time functions as an independent parameter, to be coordinated with space, and not as an integrated feature of space-time. Such is also the case in Newtonian mechanics. Minkowski would later show how SR can be validly reformulated so that space-time is construed as a continuum. Hence the meaning of Minkowski’s declaration that space and time disappear as separate concepts. In Minkowski’s postulation of the Absolute World, space and time can only be conceptually distinguished; in reality, for Minkowski, they are interdependent. Einstein echoes this integrated spatio-temporal perspective in the context of General Relativity (GR) when he states that the objectivity of space and time as independent concepts is stripped of all physical meaning by the principle of general covariance. And yet in extensions of GR, which universalizes the more limited inertial perspective of SR, time and space are sometimes coordinated and sometimes not. Time is linked to spatial hypersurfaces (three-dimensional slices of four-dimensional space-time) in the standard Friedmann models of GR, whereas time is distinguished from space in the description of the space-time metric given by Robertson-Walker. From an epistemological perspective, in which observation, intuition, evidence, measurement, and proof techniques all play an important role in the analysis and evaluation of space and time, both SR and GR can legitimately be said to have radically altered the Newtonian worldview. From a metaphysical perspective, however, there is still no definitive relativistic verdict as to the reality and constitution of space and time.

    Special Relativity resolves conflicts between the principle of relativity, which imposes a physical equivalence upon all inertial frames of reference, and the Maxwell-Lorentz theory of electrodynamics, by drawing out the implications of the postulate that decrees the constancy of the speed of light. While this leads to contractions of length and dilations of time in accordance with the set of Lorentz transformations, this does not mean that space and time do not and cannot really and physically exist. On the other hand, the relativity of simultaneity which arises in SR does at least point to a definite interdependence between space and time, even though, as previously noted, time still functions as an independent parameter in SR.
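
    To give these contractions and dilations a concrete scale, here is a brief numerical sketch (the relative speed chosen below is purely illustrative and is not drawn from the readings):

        # Lorentz factor, length contraction, and time dilation at a chosen relative speed.
        import math

        c = 299_792_458.0          # speed of light, m/s
        v = 0.8 * c                # relative speed of the moving inertial frame (illustrative)

        gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor

        proper_length = 1.0        # metre rod, measured in its own rest frame
        proper_time = 1.0          # second, ticked off by a clock at rest in the moving frame

        print(f"gamma             = {gamma:.3f}")                 # ~1.667 at 0.8c
        print(f"contracted length = {proper_length / gamma:.3f} m")
        print(f"dilated time      = {proper_time * gamma:.3f} s")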

    Special Relativity did not necessarily issue from a metaphysical renunciation of spatial and temporal substantivalism (realism) per se, but did at least in part stem from a critique of the implications of the notions of absolute space and time upon which Newtonian mechanics in particular was based. While SR posits that the concept of a luminiferous ether is superfluous, a 1920 address given at the University of Leyden entitled Ether and Relativity clearly articulates Einstein’s realist post-GR inclinations as to the existence, or at least the possibility, of some nonmechanical kind of ethereal substance. In addition, in this address Einstein actually equates such a substance with the Newtonian conception of space. This standing confusion as to Einstein’s stance in reference to the ether and the null result of the Michelson-Morley experiment (1887) brings up an interesting question. Did Einstein’s views on the issues of substantivalism (realism) vs. relationism (antirealism) as concerns space and time in particular, and as concerns realism vs. antirealism as a metaphysical attitude in general, change in any way during the course of his own development as a scientific thinker?

    Various comments of Einstein throughout his life lead one to believe that he had already abandoned the concept of a substantive ethereal medium either before or while working out the details of SR. Einstein’s intellectual debt to Humean skepticism and Machian positivism is well documented. As A. Fine has noted, this debt literally leaps out of the pages of the 1905 paper on special relativity. In SR simultaneity is operationally defined in good positivist fashion. In the aforementioned GR paper, dating from 1916, Einstein’s comments concerning the integration of space and time, which required the elimination of space and time as independent concepts, are laden with verificationist language. And yet we know that Einstein will later defend the realist thesis against Bohr and the entire quantum community in what will become known as the hidden variables debate. Was this the result of some inexplicable conversion to realism, or was it rather the result of Einstein’s owning up to the realist implications of GR, in much the same way that he would eventually be forced to own up to his mistake about having theoretically tampered with GR in order to avoid the implications of a cosmological expansion? And concerning GR, did not Einstein indeed adopt a realist attitude toward the existence of a four-dimensional spacetime manifold?

    Let us now reflect upon SR. In the historical introduction to Part III the real is identified with the relative. For SR it is really the case that such properties as position, speed, length, time, duration, and simultaneity are relative in that they quantitatively vary when measured from different inertial perspectives. It is equally true, however, to claim that in SR the real is to be identified with what is invariant.

    Given a spatio-temporal location for Event1 of (x,y,z,t) and a spatio-temporal location for Event2 of (x′,y′,z′,t′), with c the speed of light, the space-time separation between these two events, given by {(x-x′)² + (y-y′)² + (z-z′)² − c²(t-t′)²}, is, according to Minkowski, invariant. While such an invariance in no way supports a realist interpretation of space, time, or space-time, it does nevertheless tend to undercut the claim that SR supports antisubstantivalism or classic Leibnizian relationalism. Substantivalists can claim that Newton’s disentanglement of space from time had indeed been erroneous, and, if a substantivalist commitment to space and time were indeed dependent upon this separation, then it too would be suspect. But Einstein and Minkowski have shown that there is another basis upon which to support the substantivalist thesis that there is something substantial underlying our observations of and interactions with the world. As W. H. Newton-Smith points out, SR is neutral on the issue of realism because substantivalists of the Newtonian variety have been shown by Einstein to have been wrong only on the details of their central thesis. They are not necessarily wrong about the thesis itself.
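
    A short numerical check of this invariance, for a boost along the x axis with c set to 1 (the event coordinates and boost speeds below are arbitrary illustrative values):

        # Verify that dx^2 + dy^2 + dz^2 - c^2*dt^2 is unchanged by a Lorentz boost along x (c = 1).
        import math

        def interval(dt, dx, dy, dz):
            return dx**2 + dy**2 + dz**2 - dt**2

        def boost_x(dt, dx, v):
            gamma = 1.0 / math.sqrt(1.0 - v**2)
            return gamma * (dt - v * dx), gamma * (dx - v * dt)   # transformed (dt', dx')

        dt, dx, dy, dz = 5.0, 3.0, 1.0, 2.0
        for v in (0.1, 0.5, 0.9):
            dt_b, dx_b = boost_x(dt, dx, v)
            # the two intervals agree, up to floating-point rounding
            print(v, interval(dt, dx, dy, dz), interval(dt_b, dx_b, dy, dz))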

    Let us now consider GR. Does GR conform to the relationalist or anti-Newtonian principle of Ernst Mach that attributes the inertial forces of bodies to accelerations that are relative to the combined masses of all the stars in the universe? Informed speculation would seem to answer this question in the negative.

    Einstein’s Principle of Equivalence equates acceleration with gravity. The field equations of GR, Einstein’s theory of gravity, still admit of solutions that ascribe an actual structure to space-time in the absence of all existing matter and energy. Massive objects thus modify space-time; they do not define it. The curvature of space-time is the result of mass/energy deformations upon a flat space-time metric. But according to Mach’s principle, which depends upon an existing distribution of matter and energy, such zero matter/energy solutions should be impossible. In 1917 the Dutch scientist Willem de Sitter advanced just such a vacuum solution. In 1949 the logician Kurt Gödel demonstrated a solution of GR in which the universe as a whole undergoes a rotation that can be detected. Such solutions, not all dealing with boundary conditions of zero matter and energy, make it difficult to view GR as in accord with the relationalist principle of Mach. A final verdict on GR in relation to realism has not yet been achieved.

    Quantum Theory

    Quantum considerations have been a part of the theoretical debate over the fundamental character of matter and energy ever since the introduction of the quantum concept by Max Planck in 1900. At the dawn of the Scientific Revolution, in the seventeenth century, the introduction of discrete algebraic techniques to complement the continuous character of geometrical representation, and the introduction of new mathematical insights into the nature of continuity, infinity, and infinitesimals, led to unprecedented advances in physical theory. In the twentieth century the rise of Quantum Theory (QT) shed light upon as well as stimulated skepticism about the physical implications of both discontinuous phenomena and the realm of the infinitesimally small. Whether Bohr’s Principle of Complementarity (1927) signifies something deep about such mutually exclusive but jointly necessary attributes of fundamental reality as particle/wave duality, or whether such exclusivity is primarily a precipitate of notational and conceptual incompatibilities derived from insufficient modes of discourse and description, remains a crucial philosophical problem for QT.

    Quantum Theory represents a greater shock to our classically trained intuitions than even Relativity Theory. It is a common oversimplification in reconstructing the history of QT to pit the realist minority, spearheaded first by Einstein and later by David Bohm, against the instrumentalist majority view represented by the Copenhagen Interpretation espoused by the Bohr contingent. Neither Heisenberg nor Schrödinger, for example, was born to the uncompromising instrumentalism with which QT has since become identified. Heisenberg argued for something like a Craig-theoretic reduction of the theoretical terms of QT. In the abstract to his seminal 1925 paper, Heisenberg boldly states that what he is seeking is a quantum-theoretical mechanics based exclusively on relations between quantities observable in principle. Schrödinger’s route to wave mechanics was fraught with metaphysical ambivalence. Also of note is the instrumentalist apostasy of de Broglie. His slumbering realist commitments, subsequent to his postulation of a pilot wave (1927) that was plainly at odds with Bohr’s conception of complementarity, were reawakened in 1952 by Bohm’s formulation of a hidden variables approach to QT.

    Such wavering of opinion within QT, moreover, was not limited solely to issues at the metaphysical fringe. Not only are there various rival and semantically nonequivalent interpretations of QT; there is also some doubt regarding the syntactic equivalence of its different formalizations. In 1926 Schrödinger offered a proof of the formal equivalence between his own wave mechanics and the recently invented matrix mechanics of Heisenberg (1925). In 1964, however, P.A.M. Dirac would contend that there is no such equivalence. Such formalistic discrepancies are relevant for those who view a connection between theoretical formalization and interpretation.

    It is also a bit ironic that a strong paradigmatic commitment to instrumentalism within QT should be thought to be consistent with the prediction of unknown particles. Among the most successful of such predictions to date have been Dirac’s prediction of the positron (1928), and the Zweig/Gell-Mann prediction of quarks (1963–64). Although the inequalities of Bell and the experimental confirmations of Aspect et al. seem to have finally overcome the persistent attempts of Einstein to salvage quantum realism, QT still has battles to face on the ontological horizon. Advocates of Super String Theory, for example, are cautiously optimistic about the possibility that the theoretical hegemony of the Standard Model of QT will someday be completely assimilated by a Final Theory of Everything.

    The Copenhagen Interpretation of QT purports to pass a sentence of death on classical ontology. It asserts the thesis that QT is both consistent and complete as formulated. The wave function (Ψ), which is calculated by means of the wave equation of Schrödinger, contains all the information that can in principle be known about the state of a system. We thus cannot hope, even in principle, to penetrate more deeply into the fabric of reality. Essentially, there are no hidden variables that would enable us to determinately specify and/or further unravel the secrets underlying such quantum features as probabilistic predictions (|Ψ|²); superpositions of quantum states; uncertainty relations between certain conjugate pairs (position/momentum; time/energy); complementarity (particle/wave duality); and non-local interactions (Bell’s theorem). Advocates of the Copenhagen Interpretation of QT regard Humean metaphysical skepticism as fully vindicated in that our intuitions of causality are not substantiated by quantum reality. Moreover, even axioms of logic and mathematics, such as the Law of the Excluded Middle and the principle of commutativity, are put on notice by the counterintuitive results of the quantum world.
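
    Two of these features can be made concrete in a few lines. The sketch below (a minimal illustration in modern notation, not anything taken from the readings) applies the Born rule to a two-state superposition and shows that the Pauli spin matrices, used here as stand-ins for quantum observables, fail to commute:

        # Born rule: probabilities are the squared magnitudes of the amplitudes in a superposition.
        amp_up, amp_down = 0.6, 0.8                  # illustrative real amplitudes; 0.36 + 0.64 = 1
        print("P(up) =", amp_up**2, " P(down) =", amp_down**2)

        # Noncommutativity: sigma_x and sigma_z multiplied in opposite orders give different results.
        def matmul(a, b):
            return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

        sigma_x = [[0, 1], [1, 0]]
        sigma_z = [[1, 0], [0, -1]]
        print(matmul(sigma_x, sigma_z))   # [[0, -1], [1, 0]]
        print(matmul(sigma_z, sigma_x))   # [[0, 1], [-1, 0]]  -- order matters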

    Dirac, in The Principles of Quantum Mechanics, homes in on the quantum problem of size. What is large and what is small? Dirac suggests a definition of size that can be classified as a kind of quantum operationalism. Thus smallness, for Dirac, is an absolute concept that characterizes contexts and conditions in which the Uncertainty relations are not measurably negligible. Our classical intuitions, however, based upon macroscopic perceptions, are negligibly impacted by quantum Uncertainty. This has led to a top-down reductionism that explains the perceptually accessible large or macroscopic in terms of the perceptually inaccessible small or microscopic. But unlike the subalternation relation of classical logic, the truth about physical reality does not flow downward. It is incorrect to assume that microscopic determinism follows from macroscopic determinism. Quantum Theory asserts that the reverse is true. Uncertainties at the quantum level are transmitted upward to the realm of our everyday perceptions, where they are about as noticeable to our common sense intuition as the gaps between our Newtonian expectations and the corrections of SR. This is why it is true to say that the Uncertainty relations are metaphysically intrinsic to physical reality: they are not merely epistemological limitations that could in principle be overcome with extended technological proficiency in probing the depths of the quantum realm.
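
    Dirac’s point about absolute smallness can be illustrated with a rough calculation (the masses and position uncertainties below are illustrative textbook-scale values, not figures from the readings): the same Uncertainty relation that is enormous for an electron confined to an atom is utterly negligible for an everyday object.

        # Minimum velocity uncertainty implied by delta_x * delta_p >= hbar / 2.
        hbar = 1.054571817e-34       # reduced Planck constant, J*s

        cases = [
            ("electron confined to ~1 angstrom", 9.109e-31, 1e-10),   # mass (kg), delta_x (m)
            ("1 g ball located to ~1 micrometre", 1e-3, 1e-6),
        ]
        for name, mass, delta_x in cases:
            delta_v = hbar / (2 * mass * delta_x)
            print(f"{name}: delta_v >= {delta_v:.1e} m/s")
        # ~5.8e5 m/s for the electron (hardly negligible); ~5.3e-26 m/s for the ball (imperceptible).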

    John von Neumann showed how the Heisenberg Uncertainty relations alone were sufficient for deriving all of QT. Uncertainty stipulates that the act of observing, or interacting with, or measuring some physical process prevents us from ever achieving an arbitrarily high degree of precision about certain phenomena, properties, or parameters of reality. Philosophically, however, we can say that Einstein-Podolsky-Rosen (EPR) were correct in 1935 in pointing out the unlicensed leap of faith embraced by advocates of the Copenhagen Interpretation. Why are Heisenberg’s Uncertainty relations a universal limitation on all our knowledge? Why must EPR have been generally wrong, and the Copenhagen Interpretation generally correct, when their battle over the issue of locality and Uncertainty was fought over a thought experiment dealing with electron spin in particular? Philosophically, it is not unfair to say that the Copenhagen Interpretation gained in legitimacy as a result of the efforts of Bell and Aspect, but it had not yet earned its status as a quantum dogma to be defended from the very infancy of QT. A reverse parallel can be drawn here with the case of the philosophical debate over the Heliocentric Theory between Galileo and Bellarmine. Galilean realism was not as entitled to certitude in 1615 as it would later be after Newton. Likewise, Copenhagen instrumentalism became much more philosophically defensible with the outcome of Bell’s work, but not so completely as in the case of the defensibility of the post-Newtonian realist attitude toward the Heliocentric Theory. In terms of a possible historical reconstruction, as articulated by physicist J. Cushing, Bohm’s realist construal of QT need not have been a reaction to the Copenhagen Interpretation. Bohm’s hidden variables approach could easily have been formulated during the late 1920s. Its main drawback, that of violating the limit on the speed of light imposed by SR, presented a different kind of problem.

    Let us now reflect upon the classic Gedanken challenge to the Copenhagen Interpretation put forward by Schrödinger in 1935: the famous cat paradox. The crux of the problem for instrumentalism and the completeness of QT revolves around the absurdity that a mutually exclusive superposition of possible outcomes, such as opposite spin orientations or the status of a cat as alive or dead, can be simultaneously true to varying degrees until some measurement or observation collapses the relevant wave function in favor of one alternative or another. Realists want to know how this discontinuous transition is brought about by the act of measurement.

    Schrödinger’s experiment draws upon the analogy between our intuition that a cat must be either dead or alive, and that an electron, for example, cannot be spinning in two different directions simultaneously. Prior to measurement, then, how can we falsify the contention of Copenhagen QT that a cat sequestered in a sealed metal box has a probability of being both dead and alive until we check upon its status? Likewise, how is it that an as-yet undetected electron is spinning both upward and downward with equal likelihood?

    This quantum theoretical paradox illustrates the difficulty in achieving an understanding of the proper relationship between truth and knowledge. Truth is customarily held to be a necessary but not sufficient condition for knowledge claims in epistemological theory. Hence if we know something, then it must be true. But if something is true we still might not know it. This primacy of the epistemological perspective seems to parallel the primacy of the role accorded to measurement and observation in the assignment of a distinct value, whose metaphysical reality we come to know as true only after the collapse of a particular wave function. Philosophy has always noted the gap between what we know and how things are. Quantum Theory introduces a new twist in touting an (in principle) unbridgeable gap between our knowledge of a system and its actual physical state and degree of internal interconnectedness.

    The stress placed by the Schrödinger paradox of a live/dead cat upon the instrumentalism and the completeness of Copenhagen QT has to do with its focus upon the discrete or binary character of existence, which must have either one of two distinct options. This presents a problem in mapping such a discontinuous feature of reality upon a continuous probability function. Richard Feynman’s famous sum-over-paths approach to QT, which allows for the infinite number of trajectories that an electron might take during a double-slit experiment, also violates what we would normally construe as the physical impossibility of an electron being in more than one place at the same time.

    What this signifies is that science, in the case of QT, has learned to value prediction even in the absence of further explanation. In Aristotelian science the exact opposite was true. In the Newtonian paradigm both explanation and prediction were crucial to theoretical success. The logic of modern explanation theory is complicated by its need to be sensitive to probabilities. With the rise of Copenhagen instrumentalism we witness, not so much a sanction for sustained metaphysical skepticism, but the discovery of a new respect for inexplicable predictions.

    Big Bang Theory

    Our Standard Model of Big Bang Theory (BBT) traces our knowledge of the origin and development of the universe back to the Planck Time, 10⁻⁴³ seconds after the primeval explosion which lends its name to the theory. The precise moment of the Big Bang itself is not known. It could be determined if we could calculate the exact magnitudes of such parameters as Hubble’s constant (H), which gives a measure of the present expansion rate of the universe, and the cosmological constant (Λ), which represents the remaining vacuum energy from the inflationary period. A value of (Λ > 0), for instance, would allow for an increase in the expansion rate of the universe, since (Λ) exerts a repulsion that counteracts gravity. But this gap in our knowledge of when the Big Bang occurred does not undermine the mountain of evidence in favor of the claim that it did occur. With our other physical theories, the core ontological features brought us into contact with such issues as configuration, size, observability, process, substantiality, relationality, and so on. But with BBT the core existential issue is that of a primal event.
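
    The Planck Time quoted above is conventionally defined as √(ħG/c⁵); a quick check of the order of magnitude (standard constants, stated here as background rather than taken from the readings):

        # Planck time t_P = sqrt(hbar * G / c**5)
        import math

        hbar = 1.054571817e-34   # reduced Planck constant, J*s
        G = 6.67430e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
        c = 299_792_458.0        # speed of light, m/s

        t_planck = math.sqrt(hbar * G / c**5)
        print(f"Planck time ~ {t_planck:.1e} s")   # ~5.4e-44 s, i.e. of order 10^-43 s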

    Theoretical Support for BBT

    1.  BBT is implied by GR

    2.  Discovery of the expanding universe

    3.  Discovery of Cosmic Background Radiation (CBR)

    4.  Abundance of light elements in the early universe. Current Mass: (Helium/Hydrogen) = ¼

    5.  Guth’s Inflationary hypothesis, which explains

    a.  Horizon problem

    b.  Flatness problem

    c.  Absence of magnetic monopoles

    6.  COBE confirmation of CBR anisotropy

    Our confidence in BBT clashes with Steven Weinberg’s candid remark as to whether we really know what we are talking about in regard to the physics of the early universe. Nevertheless, the discovery of the CBR by Penzias and Wilson in 1965 is the critical factor that elevates BBT over the Steady State Theory (SST) of Hoyle, Bondi, and Gold. Why does this discovery, and not that of the expanding universe or GR, throw the weight of evidence in favor of BBT? This comes down to the question of why the Hubble confirmation of an expanding universe in 1929, as implied by GR, was deemed by some as insufficient for deciding between BBT and SST, whereas the discovery of the existence of a uniform microwave background radiation of 2.74 K bombarding the earth from all directions was viewed as sufficient. Why is CBR often poetically referred to as the echo of creation, whereas some consider an expanding universe as equally consistent with an eternalist model of the universe?

    By extrapolation of the Hubble equation linking velocity of stellar recession with observed distance from the earth (V = HD), we arrive at a first moment of time in which all matter is concentrated into a very small volume. A thermodynamic extrapolation in relation to the CBR also can be said to converge upon a common point of origin. Thus we can say that CBR lent support to a major inference of an expanding universe.
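
    The scale of that extrapolation can be estimated by running the expansion backwards: ignoring any change in the expansion rate, the time since all distances were negligible is roughly 1/H. A rough sketch (the value of H below is an illustrative modern figure; the text itself leaves H undetermined):

        # Hubble time: t ~ 1/H, converting H from km/s per Mpc to 1/s.
        H_kms_per_Mpc = 70.0                        # illustrative value of Hubble's constant
        km_per_Mpc = 3.0857e19                      # kilometres in one megaparsec
        H_per_second = H_kms_per_Mpc / km_per_Mpc   # H in units of 1/s

        hubble_time_s = 1.0 / H_per_second
        seconds_per_year = 3.156e7
        print(f"1/H ~ {hubble_time_s / seconds_per_year / 1e9:.1f} billion years")   # ~14 billion years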

    Steady State Theory posits that the engine of cosmic expansion is not the energy unleashed at the beginning of time at the moment of the Big Bang, but an eternal process whereby a negative energy field of some sort is converted into new matter. Such a continuous process of material creation is intended to avert the thermodynamic heat death that would be expected in view of the second law of thermodynamics. In this manner, the universe retains a constant density and appearance over time. There is no logical contradiction involved in SST. Its initial loss of favor had more to do with the corroboration that the CBR and the current mass of helium and hydrogen provided for GR and the expanding universe than with any falsification of SST per se. The current mass ratio of (helium/hydrogen = ¼) is the very result we would expect from the type of hot early universe environment predicted by BBT. Therefore, with no independent corroboration of its own, SST fell out of favor, and BBT became established as the lone, but still insecure, paradigm of cosmological theory.

    The elevation of BBT over SST did not mean that BBT had overcome all explanatory problems. It had not. To view BBT and SST as the only two cosmological possibilities would be to impose a false dichotomy, although even prescientific speculation was divided between creationist and eternalist models of the universe. And even if we grant that BBT must be correct, there are still a host of different Big Bang models to choose from.

    As our outline indicates, the Inflationary hypothesis, as articulated by Guth, Steinhardt, and Linde et al. from the early 1980s onward, was successful in explaining a number of important problems, all of which were troublesome for BBT. But none of these Inflation-based explanations would be worth anything if the legitimacy of the Inflationary hypothesis itself were undermined. According to Inflation, the CBR must possess the signature of radiation irregularities that would be reflective of the possibility of future structure formation in the universe. This would only be the case if CBR could be shown to possess sufficient density variation or anisotropy in its signal. According to the COBE findings of Smoot et al. in 1992, the CBR exhibits a sufficient degree of variation in its thermodynamic profile to satisfy this Inflationary requirement.

    What is the connection between the Inflationary hypothesis and what F. Zwicky once referred to as the missing dunkle Materie, or dark matter? Let us first define these terms.

    Inflation was introduced as a conjecture that could explain certain still-unanswered puzzles of BBT. Three such problems are enumerated in the outline above. Inflation not only explains the so-called flatness problem of BBT; by virtue of its exponential stretching out of any initial spatial curvature, it also predicts that the universe should be flat. So while the preceding supports for BBT in the outline point toward the actuality of a Big Bang moment, there was a problem as to how known physics (GR and QT) and unificationist physics (Grand Unified Theories [GUT]) could explain the current state of the universe (horizon problem, absence of magnetic monopoles, large scale structure, etc.). Thus, in general terms, an expansionist period was invoked to salvage BBT. But we still do not know the precise mechanism whereby such an expansion is accomplished. While explaining the horizon problem, the seeming coincidence of widely separated homogeneous regions of space not linked by light signals, we find that inflation creates a problem for itself. How can expansion occur at such a rate as to disconnect regions of space that were previously connected by light signals?

    One suggestion for the Inflationary hypothesis compares the onset of Inflation to a GUT-scale symmetry-breaking phase transition. In such a scenario there might be a disruption of the opposing tension between an entropy field featuring a kinetic energy density, which enables expansion and cooling, and a vacuum energy density, which opposes it. Likewise, termination of the Inflationary period might occur when the symmetry that holds between the weak and electromagnetic forces is broken. Simply put, when the universe no longer behaves as if the stress-energy tensor is controlled by the cosmological constant (Λ), it then reverts to normal pre-inflationary expansion.

    All this is obviously hypothetical. Further complications arise when it is realized that a BBT that includes Inflation yields a Robertson-Walker curvature term equal to zero. Such a state for the space-time metric coincides with a value of Omega (Ω) = 1. Since Ω = ρ/ρ_c, where ρ equals the actual mass density of the universe and ρ_c equals the critical density between expansion and contraction, this leads to the Inflationary prediction that the universe is dominated by some form of dark matter. Otherwise Ω ≠ 1. Thus Inflation is consistent with the findings of Ostriker, Peebles, Rubin et al., which showed that the explanation of galactic stability requires an invisible gravitational source, perhaps comprising more than 90 percent of the mass of the universe.
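
    For reference, the critical density ρ_c (which, in the simplest models without Λ, marks the dividing line between eventual recollapse and indefinite expansion) is given by the standard relation ρ_c = 3H²/8πG; a quick evaluation, again using an illustrative value of H:

        # Critical density rho_c = 3 * H^2 / (8 * pi * G).
        import math

        G = 6.67430e-11                    # m^3 kg^-1 s^-2
        H = 70.0 / 3.0857e19               # Hubble constant converted from km/s/Mpc to 1/s
        rho_c = 3.0 * H**2 / (8.0 * math.pi * G)
        print(f"rho_c ~ {rho_c:.1e} kg/m^3")   # ~9e-27 kg/m^3, a few hydrogen atoms per cubic metre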

    Our evidence for the existence of dark matter is indirect. Given that its existence is predicted by BBT + Inflation, its detection is vital for both, as far as our outline-based line of argumentation, which stresses the dependency of BBT upon Inflation, is concerned. Our current evidence seems to support a value of Ω ≤ 1. If Ω < 1, then the Inflationary requirement of a flat space will require that (Λ > 0). Furthermore, there is conflicting evidence as to the age of the universe: the Hubble age sometimes comes out less than the ages of the oldest stars (Age_Hubble < Age_stars). In terms of the parameters (H, Ω, Λ) and the estimated ages of globular clusters, there are five major BBT models that are consistent with current data, at least two of which are inconsistent with Inflation. Add to this that there are various Inflationary alternatives for which space-time is curved, and we must conclude that although BBT is currently paradigmatic for cosmological theorizing, its details are by no means theoretically secured.

    Solving the problem of detecting the missing mass of the universe constitutes possibly the biggest challenge facing BBT today. There are many candidates for the mysterious dark matter, but only two major categories: baryonic (which includes protons and neutrons) and nonbaryonic. On the basis of the CBR the mass of the universe was believed to have been predominantly baryonic. With Inflation this inference has been seriously weakened. If baryonic, then dark matter is at least in principle accessible to telescopic discovery. If nonbaryonic, the composition of dark matter will need to be ascertained by experimentation at extremely high energies here on earth.

    The Problem of Scientific Discovery

    Ever since Plato’s Meno, epistemological theory has come to differentiate between discovery and invention. Are scientific hypotheses and theories discovered or invented? Additionally, is there such a thing as a logic of discovery? And if it is true, as Karl Popper vigorously argued, that there is no such thing as a method or general algorithm of discovery, then surely there exist various pragmatic tools that serve as catalysts of discovery. It is in this latter sense that we refer to the existence not of a logic of discovery, but of a logistics of discovery.

    Bacon, Descartes, Galileo, and Newton were pioneers of the scientific method who would be among the first to admit that we cannot lay down sufficient and necessary rules that would enable us to automatically solve a scientific problem. Twentieth-century phenomenology has tried to delve beneath the surface of psychologistic cognition to a logical core of concepts that govern rational thinking, but has done so at the expense of a proliferation of the logical distinctions upon which such thinking is based. If we accept the inevitable problematics involved in the possibility of devising a general rule of procedure for theory construction, then we can focus our attention upon those pragmatic measures that have assisted scientists in the creative development of successful scientific theories.

    Neither inductive logics, such as those formulated by Carnap and Reichenbach, nor the so-called Hypothetico-Deductive method, in which hypotheses serve both as premises from which previously obtained results can be derived and as a source of new experimental predictions, describes actual scientific procedure. Even the famous experimentum crucis of Baconian invention does not serve a purely discovery-oriented function, if one considers the goals of experimental corroboration and falsification as not being included within the purview of discovery as an originating process. When existing hypotheses or theories are put to the test, the critical experiments that are designed to test them can be characterized more in terms of a contribution to the completeness of a hypothesis or theory than as a creative component of theory construction itself. On the other hand, experimental results, sometimes serendipitous, are crucial for the creative work of theory construction. Nevertheless, it is philosophically difficult to say where to draw the line of demarcation between the theoretical phases of discovery and attempted corroboration. It is equally difficult to establish the mutual relationship between theory and experiment.

    Certainly the experimental researches of Faraday and Rutherford were crucial for the formation of Maxwell’s Electromagnetic Theory (1865) and Bohr’s composite Classical/Quantum theory of the atom (1913), respectively. The failure of Michelson/Morley (1887) to discover the luminiferous ether was significant but not crucial to the development of SR. Eddington’s celebrated expedition to photograph a solar eclipse in 1919 led to the spectacular corroboration of a core prediction of GR. The experimental results of Compton and Davisson/Germer in the 1920s were pivotal in corroborating the experimentally irreducible wave/particle complementarity of radiation and matter. Bell’s thought-experimental approach, later experimentally confirmed by Aspect et al., was devised with the original intent of vindicating Einstein’s realist intuitions as to the proper interpretation of QT. Ironically, Bell’s work led to a corroboration of the instrumentalist Copenhagen Interpretation of QT. The double-slit experiment has performed double service, as a critical experiment in the development of both Electromagnetic and Quantum Theory. Interestingly, Feynman went so far as to describe the double-slit experiment as the defining experiment for at least attempting to understand the complexities, implications, and absurdities of QT. Simply put, then, experiment has played and continues to play not only an important role in the overall work of scientific discovery, but one that is multidimensional.

    Let us now focus upon the use of analogy as a tool of discovery. In logical discourse analogy is described as a species of inductive thinking, involving the formulation of a general correspondence between two conceptual structures in which there is always a degree of conceptual remainder. In requiring an inductivist leap of faith concerning all physical instantiations of a particular concept, or complex conceptual model, analogical thinking can often be fallacious. In the history of scientific discovery the use of analogy has also been extremely fruitful.

    Our references to analogical thinking in the history of science here will not aspire to any rigorous definitional standard, for it is not in the nature of analogical thinking to be held up to any such standard of rigor. In current QT, hypotheses are said to be renormalizable if they can be described using Feynman diagrams. If they are not so renormalizable, they will often be rejected, or looked upon with a lesser degree of confidence, for this reason alone. Such a diagrammatic approach to scientific thinking qualifies as analogical. If we view mathematical relationships and structures as analogs of those that occur in nature, then Schrödinger’s attempt to eliminate the negative energy solutions in Dirac’s electron theory for failing to conform to Lorentzian invariance is cut from the same cloth. N. R. Campbell, in Physics, the Elements (1920), argued that there exists a necessary connection between scientific theories and models, which for our purposes, unlike the distinction made between models and analogies by someone like Pierre Duhem, will be viewed as being continuous with analogical comparison. Einstein’s Principle of Equivalence, in which acceleration and gravity are treated as mathematically interchangeable, represents a stunning insight into the analogous character of the forces exhibited in nature. Newton’s Law of Universal Gravitation provided the analog for the Electrostatic Law discovered by Coulomb, while Newtonian/Hamiltonian mechanics performed the same function in regard to the formulation of both matrix and wave mechanics in the 1920s. Huygens developed a wave theory of light, as did both Grimaldi and Hooke, from the prevailing view that sound is a wave phenomenon. Many problems in applied acoustics have been solved as a result of the analogous connection between acoustics and preexisting electric circuit theory. Both Einstein and Perrin extended the analogy between ideal gases and solutions to colloidal solutions and suspensions. Among the most celebrated examples of analogical induction in the history of modern science was Kepler’s generalization that an elliptical orbit for Mars also applied to the other planets in his Astronomia Nova (1609).

    The mathematization of physics in the seventeenth century made it possible for mathematical considerations to prove fruitful for physical theory. Planck’s fortunate employment of a curve-fitting technique while working out the details of black-body radiation, by which he found a formula that fit all the available experimental observations, serves as a case in point, although, as Edward Witten points out, a current disadvantage of Super String Theory is that it possesses no guiding physical intuition analogous to Einstein’s Principle of Equivalence. And while it is doubtful that Maxwell was led mainly by mathematical considerations in his reflections regarding the displacement current, one may nevertheless reconstruct such a possible path. Heisenberg’s publications of the mid-1920s, for example, betray no hint of the scientific models and analogies that were instrumental in guiding his thinking. Unlike Kepler, he presented his results in a form that bore no signs of the travails of their birth. Newton did much the same thing in the Principia, covering up the intellectual path he had trod toward his principle of inertia; to recover that path we must consult De Motu. In any case, analogy has played a crucial role in the forging of physical theory.
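    For the reader who wishes to see the end product of Planck’s curve fitting, the radiation formula he arrived at in 1900 can be written, in modern notation, as

\[
u(\nu, T) \;=\; \frac{8\pi h \nu^{3}}{c^{3}}\,\frac{1}{e^{h\nu/kT} - 1},
\]

    where u(\nu, T) is the spectral energy density of the radiation and h is the constant Planck was obliged to introduce in order to make the formula fit the data; its deeper physical significance emerged only afterward.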

    The Evaluation of Scientific Theories

    The efficacy of a theory is related to its explanatory scope and to its predictive accuracy. As Leibniz conveys in his correspondence, nothing commends a hypothesis more to working scientists than its predictive success. We have seen, however, that classical deterministic and reductionistic explanation may not be possible, even in principle, for QT. Nevertheless, when a theoretical prediction is corroborated, experimentally or otherwise, this can only bolster our confidence in the theory. When phenomena within the purview of a particular theory resist explanation, or do not conform to theoretical expectations, our confidence in it begins to wane. How many disconfirming instances or explanatory anomalies suffice to warrant a theoretical overhaul or replacement is not known with any degree of certainty; it differs from theory to theory.

    If a single counterexample sufficed in common scientific praxis to undo a successful theory, then Newtonian physical theory would have been more seriously jeopardized by the anomalous advance of the perihelion of Mercury. General Relativity recorded its first success by explaining this orbital anomaly, yet we still speak of Newtonian theory as having been assimilated by Relativity Theory, not refuted; it lives on as an approximation. That a seemingly fruitful theoretical framework will dig in its heels despite immediate and serious shortcomings is well exemplified by the persisting influence of Bohr’s original theory of the atom after 1913. What is important, as Popper points out, is that a theory be falsifiable in principle: it must be capable of being proven wrong, unlike a psychological theory such as psychoanalysis, which many view as interpretively closed, meaning that it can explanatorily deflect any external criticism of its theoretical claims. But what happens to a falsifiable theory when confronted with a counterexample to its claims depends upon a number of different things.

    The problem of the transition from one theory to another has become an important area of inquiry within the philosophy of science over the last generation or so. Are such transitions better described as continuous or discontinuous in character? Are they more revolutionary than reformatory? It is interesting in this regard that at the dawn of modernity we speak of a revolution in science but of a reformation in religion: the main tenets of the Christian religion remained essentially the same, while the traditional claims of Ptolemaic astronomy and Aristotelian science were superseded by entirely different, and undeniably better, theories.

    Why is it that successor theories (T2) are superior to the theories they replace (T1)? A continuing goal of the philosophy of science has been to flesh out just what such superiority entails. Let us consider two examples borrowed from L. Laudan.

    1.  Newtonian celestial mechanics (T2) did not explain why the planets all revolve in the same direction around the sun. Cartesian cosmological theory (T1) had explained this orbital regularity as the result of the planets being carried by a swirling vortex extending outward from the sun. Thus T2, although a better theory than T1, was not explanatorily complete relative to it.

    2.  Benjamin Franklin’s one-fluid theory could not account for the repulsion exhibited between bodies possessing a negative charge. Earlier, in the 1740s, the Abbé Nollet had explained this repulsion in terms of an electrical vortex. Despite this explanatory omission, Franklin’s theory succeeded in establishing itself over the theory of Nollet.

    Both examples illustrate that the theory that wins out need not be explanatorily complete relative to its rival in every respect. The decision to transfer scientific allegiance from some T1 to some T2 is thus not altogether based upon an itemized notion of superiority, in which T2 must surpass T1 on an item-by-item basis. Hence, if we were ever to devise a method for choosing between two or more competing theories, we would have to be able to calibrate precisely the importance and relevance of numerous theoretical elements. In the history of the competition between the Heliocentric and Geocentric Theories, for example, the failure to detect stellar parallax seems to have become less and less important as time went by. At the time of Aristarchus such a failure was a highly relevant piece of falsifying ammunition for proponents of geocentrism. By the time of Bessel’s detection of stellar parallax in 1838, no advocate of the well-entrenched Heliocentric Theory saw any particular reason to be jubilant over this long-awaited piece of corroborating information.

    It is interesting to note a particular asymmetry, as well as a possible point of symmetry, between the first and the last of our five physical theories. Heliocentric Theory acted as a catalyst for overthrowing the hegemony of Aristotelian science, thus leading to the new science created by Newton and the other giants upon whose shoulders he stood. BBT, on the other hand, seemed to spring forth from GR, like Athena from the head of Zeus, and nowadays both GR and QT are involved in deepening our knowledge of the Big Bang event and the cosmological process that ensued from it. This differing relationship to prior physical theory represents the asymmetry between the two. As for a possible symmetry, contemporary cosmology provides us with a laboratory for exploring and coming to understand the physics of unification. Super String theorists, for example, evaluate cosmological conditions in their attempt to forge an adequate theoretical framework that can overcome the failure of GR and QT, taken separately, to meet all the requirements of a comprehensive physical theory. Hence, like Heliocentric Theory, BBT can also be said to function as both a stimulus and a sponsor of new insights into physical theory.

    Just over half a century ago, Carl Hempel, Paul Oppenheim, Hans Reichenbach, and others commented upon the structural similarity between explanation and prediction. While explanation focuses upon past occurrences, prediction is directed toward the future. A fully adequate explanation is one that could also have served as a successful prediction of the phenomenon in question, had it not already been known. Recall that the Inflationary hypothesis of BBT is said both to explain and to predict the flatness of space. This structural similarity between explanation and prediction is predicated upon similar inferential connections between what is to be explained (the explanandum), or predicted, and the set of premises and laws (the explanans) that carry the burden of explanation, or prediction. For Hempelian explanation, logical inference is analogous to physical causation. Hempel’s so-called Cumulative model of theory change, in which scientific progress advances in a linear way as the result of the continuous accumulation of new knowledge, is rooted in this understanding of the structure of explanation. The progressive character of scientific knowledge is the result of the ongoing concatenation of causally related explanans-explanandum pairs.
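    The point made by Hempel and Oppenheim can be put schematically. In their deductive-nomological pattern the explanans, consisting of general laws L_1, ..., L_m together with statements of particular conditions C_1, ..., C_n, logically entails the explanandum E:

\[
L_1, \ldots, L_m,\; C_1, \ldots, C_n \;\vdash\; E.
\]

    Read toward the past, the schema furnishes an explanation of E; read toward the future, with E not yet observed, the very same inference furnishes a prediction. This is the structural similarity at issue.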

    There are various other models of theory change that oppose the legitimacy of the cumulative model of theoretical development. The focal point of such opposition is the logical structure of the transition that occurs when one theory is superseded by another. A further problem is that of the possible incommensurability between a predecessor theory and its successor. Such a problem is analogous to that which arises when individuals speak different languages: communication becomes impossible without a means of translation. Someone reading a history of modern physics, for example, would be both perplexed and misinformed if the term electron were not precisely defined, for the reference might be to the electron of any number of different individuals: Thomson, Lorentz, Bohr, Millikan, Dirac, and so on.

    Thomas Kuhn’s theory of the structure of scientific revolutions lends its name to the so-called revolutionary model of theoretical development, a model that stresses the discontinuity, or incommensurability, between T1 and T2. Popper’s evolutionary model, in which inherited theories (T1) must somehow attempt to absorb the shocks produced by theoretically intractable or anomalous phenomena, is another well-known model of theory change. Laudan’s gradualist model suggests that it is wrong, in assessing the problem of theory change, to limit ourselves to shifts in the theoretical content of scientific theories alone, for changes in theory need not be accompanied by corresponding changes in the methodology and/or the goals of science. We
