Designing Social Inquiry: Scientific Inference in Qualitative Research, New Edition

About this ebook

The classic work on qualitative methods in political science

Designing Social Inquiry presents a unified approach to qualitative and quantitative research in political science, showing how the same logic of inference underlies both. This stimulating book discusses issues related to framing research questions, measuring the accuracy of data and the uncertainty of empirical inferences, discovering causal effects, and getting the most out of qualitative research. It addresses topics such as interpretation and inference, comparative case studies, constructing causal theories, dependent and explanatory variables, the limits of random selection, selection bias, and errors in measurement. The book uses mathematical notation only to clarify concepts and assumes no prior knowledge of mathematics or statistics.

Featuring a new preface by Robert O. Keohane and Gary King, this edition makes an influential work available to new generations of qualitative researchers in the social sciences.

Language: English
Release date: August 17, 2021
ISBN: 9780691224640

    PREFACE TO THE 2021 EDITION

    Designing Social Inquiries

    K and K on KKV

    We are grateful for all we have learned from the many discussions our book seems to have sparked, the attention it has received, and even the name chosen for it by its readers. When we wrote KKV, a common question in the hallways of political science departments was whether the field would divide into separate quantitative and qualitative disciplines. The methodological wars that began in the 1960s were coming to a head, and KKV was interpreted as a common playing field on which the two sides could meet and debate. At the time, the potential division of the discipline seemed to make sense to many because the two camps had so little to do with one another.

    Much quantitative research was aspirational at best, often contributing little knowledge according to qualitative researchers. Have you seen the old studies trying to predict the onset of war using, say, three or four relatively uninformative variables, like population size and contiguity between pairs of countries? These quantifications may have been a good place to start, but they did not teach us much about the inner workings of international politics. At the same time, the vast qualitative case study literature was viewed by quantitative scholars as nongeneralizable and incomparable across studies and countries. In their view, qualitative scholarship seemed to follow whatever rules authors felt like implementing or invented on the fly to conveniently justify their own substantive arguments. Qualitative research had few agreed-upon standards of evidence and no common language to describe or evaluate research designs. Of course, many of the criticisms of each side by the other were accurate.

    K, K, and V hailed from very different traditions within the discipline, but we were united by the study of government and politics. We were political scientists. We have since lost our friend and co-author Sid Verba, but he would have happily joined us in saying that we are all proud to be political scientists. That common identity and unified goal of learning about the political, social, economic, and cultural worlds brought us together, generated much of what we liked in the book, and accounts for much of the progress this thriving discipline has experienced over the last quarter century.

    Political science is a diverse subject, retaining some of the vestiges of a bygone era when political science, economics, history, and sociology formed what was close to a single scholarly field. Our subject is highly diverse, with politics existing everywhere humans are found, and the discipline benefits by reflecting this reality in its substantive foci, theories, methods, training, and especially types of data collection. The intellectual and methodological diversity of political science is uncommon among disciplines. Members in good standing of major political science departments include scholars who were trained in, hold joint appointments in, or could easily teach in faculties of political science, public policy, public administration, urban studies, sociology, statistics, data science, computer science, law, economics, anthropology, philosophy, public health, medicine, education, biology, psychology, ethnic studies, gender studies, and regional and area studies, among others. Such an expanse is rare in academic fields.

    Diversity has advantages, but it also has costs. If you send two students to graduate school in economics halfway across the world, they are likely to take the same courses in their first year. If you send two students off to the same political science graduate department in the United States, they will likely take different courses. We wrote KKV with the understanding that political science makes strange bedfellows—for perfectly good reasons—but that our diversity as a field could inhibit mutual understanding without a common language. The composition of our authorial team exemplified this attraction of opposites and also required us to engage in intense conversation and mutual learning since we did not begin with identical assumptions. Sidney Verba was a public opinion researcher, using mostly quantitative data. Gary King develops new quantitative methodologies to learn about legislative redistricting, voting behavior, international relations, comparative politics, and other areas. And Robert O. Keohane writes books with almost entirely qualitative evidence to make sense of world politics and how it changes under the impact of globalization and the functions that multilateral institutions perform. We embraced this diversity, but we also sought to introduce greater coherence into the field and to promote communication within it.

    In the early 1990s, quantitative political methodology was making spectacular progress—creating novel methods designed specifically to solve political science puzzles and analyze our often unique data types. Its leading scholars had also started to make contributions to the methodological subfields of cognate disciplines, as well as statistics, computer science, and data science. But despite this progress, quantitative political methodology for the most part had not taken qualitative scholarship seriously, or vice versa. If your data were quantitative, you were required by your discussants, colleagues, professors, or reviewers to follow clear standards and the logic of inference, and you were obligated to learn and apply the best methods as they were developed. Yet, if you were a qualitative scholar, the absence of agreed-upon rules or theories of inference, of any obligation to follow a logic of inference, or even of a common language sometimes made it seem as if anything goes.

    We hoped to change this, to make qualitative scholars vulnerable to appropriate criticism by quantitative scholars and quantitative scholars vulnerable to criticism by qualitative scholars—with a common basis for criticism. Our main point was that we could best learn with serious discipline-wide engagement if everyone had different styles of research but one logic of inference.

    With this as the state of play, we wrote KKV to make four arguments. First, political science is one discipline, and it should stay that way because core questions of power and governance unite the field. Second, political science (and the social sciences more broadly) has many styles of data collection—quantitative, qualitative, and others—each valuable in its own way, each picking up on different aspects of the social world. The more styles of data collection, and the more we learn, the better it is for the field. Third, despite the diversity in styles and data types, a single logic of inference applies to all of them, with no fundamental differences between the standards for quantitative and qualitative research. Fourth, this unified logic of inference is not a quantitative or a qualitative logic; it is separate and abstracted from any one style of research. As a result, no style of data collection can have hegemony over another. The underlying theory of inference may have been clearer in quantitative research and at the time was more familiar to quantitative analysts, but the logic of inference cannot be used to favor any one style of data collection.

    These four arguments constituted most of the content of our book, which we used to make a wide range of specific methodological suggestions and which we argued were useful for learning about the world and improving empirical research projects. This was our main goal—to improve the practice of empirical social science in dissertations, papers, articles, books, and talks.

    So where are we today, a quarter century since KKV and the large vibrant literature that followed? The evidence seems to support five conclusions.

    First, few if any of our specific methodological suggestions have been explicitly contested and, although we know better than to make causal claims without identifying assumptions, qualitative scholars no longer seem to be acting as if they believe that anything goes. The use of a standard language of inference—largely the same language of inference as in quantitative research—has massively increased. Omitted variable bias, measurement error, selection bias, causal inference, etc., are now easily communicated to social scientists regardless of the type of data they collect and are used by quantitative and qualitative scholars alike. (See the appendix at the end of this preface for a systematic study, using quantitative and qualitative evidence, supporting these claims.)

    Second, and relatedly, our claim about a single logic of inference has been widely adopted.

    Third, a vibrant debate—that we did not anticipate—has ensued from publication until today over whether KKV emphasized the right points and the best qualitative examples; whether each type of research received the respect it deserves; whether some methodological advice that we did not cover should have been included; and whether we overemphasized causal inference, historical versus participatory research, or the reverse, etc. This debate has been productive, even though aspects amount to identity politics about data collection styles (which, as political scientists, we should understand better than anyone is not an insult, but an essential part of human social understanding). We also did not anticipate that various disciplines beyond political science would have joined in this discussion, but it has been fascinating to see their perspectives and engage with them more broadly. This has been a useful debate that helps us all understand how to engage diverse academic audiences; we hope it will continue indefinitely.

    Fourth, many more researchers are crossing the quantitative-qualitative divide. Bridges are being built. Researchers are working together, helping each other. Qualitative researchers come to quantitative researchers for help with research design; quantitative scholars show up at the doors of qualitative researchers to learn about their areas, to elicit priors, or to make sense of their own statistical results. As important, quantitative researchers now think of traditional qualitative information as novel data types, including audio, video, raw text from field notes, archival information, among others. Quantitative researchers have developed steadily improving methods that can produce actionable insights from these data types. The resulting knowledge produced by these analyses is now sufficiently informative, precise, and general to provide considerable value for qualitative research. As a result, researchers from both camps now work together much more frequently, retaining their own styles of data collection but with a common language used in pursuit of common goals.

    Of course, the quantitative-qualitative divide will probably never vanish completely, as specialization has its advantages. This is why the divide, or at least the data types, can be seen in some form in every academic, professional, and commercial field—from what is called evidence-based versus clinical medicine, to empirical research versus jurisprudence in the law, to many others. In political science, we are specializing at the same rate as others, but the nature of our discipline means that more of us tap into the advantages of all types of empirical political science rather than only our own specialty. More now acknowledge that all research is qualitative and a (growing) proportion is also quantitative. We are proud to see that the two camps are working together more than ever in the history of the discipline, and as a result the future is bright.

    Fifth, hallway conversations now rarely include talk of dividing the political science discipline on a quantitative-qualitative fault line. There are other fault lines at greater risk of slipping (such as the unfortunate increasing separation between much political philosophy and political science), but this one seems to have been bridged well enough to be durable. To us, this is a sign of political science growing up as a scientific discipline. Science, of course, is not about acting scientifically, or following the specific rules in KKV. It is instead about the community of scholars acting in competition and cooperation in pursuit of the same goals and adhering to common principles of knowledge creation and knowledge testing. Progress requires the special social organization known as the scientific community, which is built on the insight that it is easier to fool yourself than someone else. A scientific community therefore requires mechanisms such as sharing ideas, engagement, interaction, criticism, and peer review. Our scientific community extends across all the fields of our discipline and beyond to other social science disciplines and the interdisciplinary communities focused on particular policy issues from economic development to climate change and beyond. Political science is closer to this ideal than ever, more unified than ever on standards, and more diverse than ever on data sources, and—we think as a result—is making faster progress than ever. We can’t wait to see what we all learn next.

    Appendix

    We began our research with all political science journals in 2019 sorted by impact factor and chose the top ten that were active in both 2019 and 1990.¹ (These included the American Journal of Political Science, American Political Science Review, British Journal of Political Science, Public Administration, European Journal of Political Research, International Organization, Journal of Conflict Resolution, Journal of Politics, Political Psychology, and World Politics.) We then collected 400 observations as follows. We selected, by simple random sampling from the set of all articles these journals published, 200 from 1990 and 200 from 2019. (We drew half of these samples, performed the analysis below, and then repeated the exercise to reach our total n; results were almost the same for the two separate samples.)
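    The sampling procedure described above can be sketched in code. This is only an illustration: the article lists below are invented placeholders, and the actual populations (every article the ten journals published in 1990 and in 2019) are in the replication archive (Keohane and King 2021).

```python
import random

# Sketch of the sampling design described in the text. The article
# lists are invented placeholders standing in for the full set of
# articles the ten journals published in each year.
random.seed(0)  # fixed seed so the draw is reproducible

articles_1990 = [f"1990-article-{i:04d}" for i in range(1200)]
articles_2019 = [f"2019-article-{i:04d}" for i in range(1500)]

def draw_in_waves(pool, total=200, waves=2):
    """Simple random sample of `total` articles without replacement,
    split into `waves` batches so the analysis can be run once and
    then repeated on a fresh half-sample, as the authors did."""
    chosen = random.sample(pool, total)
    per_wave = total // waves
    return [chosen[i * per_wave:(i + 1) * per_wave] for i in range(waves)]

waves_1990 = draw_in_waves(articles_1990)
waves_2019 = draw_in_waves(articles_2019)
```

    Because `random.sample` draws without replacement, no article can appear in both waves of a year's sample.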

    We then coded each article as follows: (1) Is it quantitative or qualitative? That is, does it use large volumes of data with some type of data-analysis methodology, or a small number of anecdotes or cases? (2) If it is qualitative, does it ask an inferential question (using facts we know to learn about facts we do not know)? (3) If it is qualitative and asks an inferential question, does it use the language of inference as introduced in KKV?

    To avoid our own potential biases, we hired a college graduate who had not taken courses with us, asked him to read KKV, and to make the judgments in (3) by reading relevant sections of each article to determine whether the relevant concepts were discussed. For example, in a discussion of interviews with former Soviet bureaucrats, is there consideration of the various elements of bias that could arise based on which interviewees were willing to talk? Or, for a case study of Australian international leadership, does the paper discuss the degree to which the case study’s findings may be limited to Australia, or to that particular negotiation? We checked inter-coder reliability by comparing our views of individual articles to those of our research assistant and found no disagreements.
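    The reliability check described above reduces to a simple agreement count between two coders. A minimal sketch with invented yes/no codes (the preface reports zero disagreements on the actual articles):

```python
# Invented codes for whether each article uses the language of
# inference; one list per coder, one entry per article.
author_codes    = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes"]
assistant_codes = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes"]

# Share of articles on which the two coders agree.
matches = sum(a == b for a, b in zip(author_codes, assistant_codes))
agreement_rate = matches / len(author_codes)
print(f"inter-coder agreement: {agreement_rate:.0%}")  # prints 100%
```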

    Finally, we compare the answers to (3) in 1990 versus 2019 by asking: Of the qualitative empirical papers, how many use the language of inference? In 1990, 13 percent of articles used this language, whereas in 2019, 71 percent did. Taking into account the omnipresent likelihood of measurement errors, judgment calls, and the complexity of human language in these articles, it is difficult to imagine finding a larger effect.
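    To get a sense of how decisive a shift from 13 to 71 percent is, one can sketch a pooled two-proportion z-test. The per-year denominators below are assumptions made only for illustration, not the study's actual counts, which are in the replication data.

```python
import math

# Assumed denominators (qualitative inferential papers per year),
# chosen only for illustration; the reported shares are 13% and 71%.
n_1990, p_1990 = 100, 0.13
n_2019, p_2019 = 100, 0.71

# Pooled two-proportion z-test for the difference in shares.
pooled = (p_1990 * n_1990 + p_2019 * n_2019) / (n_1990 + n_2019)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_1990 + 1 / n_2019))
z = (p_2019 - p_1990) / se
print(f"z = {z:.1f}")  # roughly 8: far beyond any conventional threshold
```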

    Reference

    Keohane, Robert O., and Gary King. 2021. Replication Data for: Designing Social Inquiry: Scientific Inference for Qualitative Research. https://doi.org/10.7910/DVN/YHZG5M, Harvard Dataverse, V1, UNF:6:2HECL90TQQxdW2/NYMrkbg== [fileUNF].

    Note

    1. All information necessary to replicate the results in this analysis is available in Keohane and King (2021). Our thanks to Zagreb Mukerjee for superb research assistance on this appendix.

    PREFACE TO THE FIRST EDITION

    In this book we develop a unified approach to valid descriptive and causal inference in qualitative research, where numerical measurement is either impossible or undesirable. We argue that the logics of good quantitative and good qualitative research design do not fundamentally differ. Our approach applies equally to these apparently different forms of scholarship.

    Our goal in writing this book is to encourage qualitative researchers to take scientific inference seriously and to incorporate it into their work. We hope that our unified logic of inference, and our attempt to demonstrate that this unified logic can be helpful to qualitative researchers, will help improve the work in our discipline and perhaps aid research in other social sciences as well. Thus, we hope that this book is read and critically considered by political scientists and other social scientists of all persuasions and career stages—from qualitative field researchers to statistical analysts, from advanced undergraduates and first-year graduate students to senior scholars. We use some mathematical notation because it is especially helpful in clarifying concepts in qualitative methods; however, we assume no prior knowledge of mathematics or statistics, and most of the notation can be skipped without loss of continuity.

    University administrators often speak of the complementarity of teaching and research. Indeed, teaching and research are very nearly coincident, in that they both entail acquiring new knowledge and communicating it to others, albeit in slightly different forms. This book attests to the synchronous nature of these activities. Since 1989, we have been working on this book and jointly teaching the graduate seminar Qualitative Methods in Social Science in Harvard University’s Department of Government. The seminar has been very lively, and it often has spilled into the halls and onto the pages of lengthy memos passed among ourselves and our students. Our intellectual battles have always been friendly, but our rules of engagement meant that agreeing to disagree and compromising were high crimes. If one of us was not truly convinced of a point, we took it as our obligation to continue the debate. In the end, we each learned a great deal about qualitative and quantitative research from one another and from our students and changed many of our initial positions. In addition to its primary purposes, this book is a statement of our hard-won unanimous position on scientific inference in qualitative research.

    We completed the first version of this book in 1991 and have revised it extensively in the years since. Gary King first suggested that we write this book, drafted the first versions of most chapters, and took the lead through the long process of revision. However, the book has been rewritten so extensively by Robert Keohane and Sidney Verba, as well as Gary King, that it would be impossible for us to identify the authorship of many passages and sections reliably.

    During this long process, we circulated drafts to colleagues around the United States and are indebted to them for the extraordinary generosity of their comments. We are also grateful to the graduate students who have been exposed to this manuscript both at Harvard and at other universities and whose reactions have been important to us in making revisions. Trying to list all the individuals who were helpful in a project such as this is notoriously hazardous (we estimate the probability of inadvertently omitting someone whose comments were important to us to be 0.92). We wish to acknowledge the following individuals: Christopher H. Achen, John Aldrich, Hayward Alker, Robert H. Bates, James Battista, Nathaniel Beck, Nancy Burns, Michael Cobb, David Collier, Gary Cox, Michael C. Desch, David Dessler, Jorge Domínguez, George Downs, Mitchell Duneier, Matthew Evangelista, John Ferejohn, Andrew Gelman, Alexander George, Joshua Goldstein, Andrew Green, David Green, Robin Hanna, Michael Hiscox, James E. Jones, Sr., Miles Kahler, Elizabeth King, Alexander Kozhemiakin, Stephen D. Krasner, Herbert Kritzer, James Kuklinski, Nathan Lane, Peter Lange, Tony Lavelle, Judy Layzer, Jack S. Levy, Daniel Little, Sean Lynn-Jones, Lisa L. Martin, Helen Milner, Gerardo L. Munck, Timothy P. Nokken, Joseph S. Nye, Charles Ragin, Swarna Rajagopalan, Shamara Shantu Riley, David Rocke, David Rohde, Frances Rosenbluth, David Schwieder, Collins G. Shackelford, Jr., Kenneth Shepsle, Daniel Walsh, Carolyn Warner, Steve Aviv Yetiv, Mary Zerbinos, and Michael Zürn. Our appreciation goes to Steve Voss for preparing the index, and to the crew at Princeton University Press, Walter Lippincott, Malcolm DeBevoise, Peter Dougherty, and Alessandra Bocco. Our thanks also go to the National Science Foundation for research grant SBR-9223637 to Gary King. Robert O. Keohane is grateful to the John Simon Guggenheim Memorial Foundation for a fellowship during the term of which work on this book was completed.

    We (in various permutations and combinations) were also extremely fortunate to have had the opportunity to present earlier versions of this book in seminars and panels at the Midwest Political Science Association meetings (Chicago, 2–6 April 1990), the Political Methodology Group meetings (Duke University, 18–20 July 1990), the American Political Science Association meetings (Washington, D.C., 29 August–1 September 1991), the Seminar in the Methodology and Philosophy of the Social Sciences (Harvard University, Center for International Affairs, 25 September 1992), the Colloquium Series of the Interdisciplinary Consortium for Statistical Applications (Indiana University, 4 December 1991), the Institute for Global Cooperation and Change seminar series (University of California, Berkeley, 15 January 1993), and the University of Illinois, Urbana–Champaign (18 March 1993).

    Gary King

    Robert O. Keohane

    Sidney Verba

    Cambridge, Massachusetts

    DESIGNING SOCIAL INQUIRY

    1

    The Science in Social Science

    1.1 Introduction

    This book is about research in the social sciences. Our goal is practical: designing research that will produce valid inferences about social and political life. We focus on political science, but our argument applies to other disciplines such as sociology, anthropology, history, economics, and psychology and to nondisciplinary areas of study such as legal evidence, education research, and clinical reasoning.

    This is neither a work in the philosophy of the social sciences nor a guide to specific research tasks such as the design of surveys, conduct of field work, or analysis of statistical data. Rather, this is a book about research design: how to pose questions and fashion scholarly research to make valid descriptive and causal inferences. As such, it occupies a middle ground between abstract philosophical debates and the hands-on techniques of the researcher and focuses on the essential logic underlying all social scientific research.

    1.1.1 TWO STYLES OF RESEARCH, ONE LOGIC OF INFERENCE

    Our main goal is to connect the traditions of what are conventionally denoted quantitative and qualitative research by applying a unified logic of inference to both. The two traditions appear quite different; indeed they sometimes seem to be at war. Our view is that these differences are mainly ones of style and specific technique. The same underlying logic provides the framework for each research approach. This logic tends to be explicated and formalized clearly in discussions of quantitative research methods. But the same logic of inference underlies the best qualitative research, and all qualitative and quantitative researchers would benefit by more explicit attention to this logic in the course of designing research.

    The styles of quantitative and qualitative research are very different. Quantitative research uses numbers and statistical methods. It tends to be based on numerical measurements of specific aspects of phenomena; it abstracts from particular instances to seek general description or to test causal hypotheses; it seeks measurements and analyses that are easily replicable by other researchers.

    Qualitative research, in contrast, covers a wide range of approaches, but by definition, none of these approaches relies on numerical measurements. Such work has tended to focus on one or a small number of cases, to use intensive interviews or in-depth analysis of historical materials, to be discursive in method, and to be concerned with a rounded or comprehensive account of some event or unit. Even though they examine only a small number of cases, qualitative researchers generally unearth enormous amounts of information from their studies. Sometimes this kind of work in the social sciences is linked with area or case studies where the focus is on a particular event, decision, institution, location, issue, or piece of legislation. As is also the case with quantitative research, the instance is often important in its own right: a major change in a nation, an election, a major decision, or a world crisis. Why did the East German regime collapse so suddenly in 1989? More generally, why did almost all the communist regimes of Eastern Europe collapse in 1989? Sometimes, but certainly not always, the event may be chosen as an exemplar of a particular type of event, such as a political revolution or the decision of a particular community to reject a waste disposal site. Sometimes the work is linked instead to area studies where the focus is on the history and culture of a particular part of the world. The particular place or event is analyzed closely and in full detail.

    For several decades, political scientists have debated the merits of case studies versus statistical studies, area studies versus comparative studies, and scientific studies of politics using quantitative methods versus historical investigations relying on rich textual and contextual understanding. Some quantitative researchers believe that systematic statistical analysis is the only road to truth in the social sciences. Advocates of qualitative research vehemently disagree. This difference of opinion leads to lively debate; but unfortunately, it also bifurcates the social sciences into a quantitative-systematic-generalizing branch and a qualitative-humanistic-discursive branch. As the former becomes more and more sophisticated in the analysis of statistical data (and their work becomes less comprehensible to those who have not studied the techniques), the latter becomes more and more convinced of the irrelevance of such analyses to the seemingly non-replicable and nongeneralizable events in which its practitioners are interested.

    A major purpose of this book is to show that the differences between the quantitative and qualitative traditions are only stylistic and are methodologically and substantively unimportant. All good research can be understood—indeed, is best understood—to derive from the same underlying logic of inference. Both quantitative and qualitative research can be systematic and scientific. Historical research can be analytical, seeking to evaluate alternative explanations through a process of valid causal inference. History, or historical sociology, is not incompatible with social science (Skocpol 1984: 374–86).

    Breaking down these barriers requires that we begin by questioning the very concept of qualitative research. We have used the term in our title to signal our subject matter, not to imply that qualitative research is fundamentally different from quantitative research, except in style.

    Most research does not fit clearly into one category or the other. The best often combines features of each. In the same research project, some data may be collected that is amenable to statistical analysis, while other equally significant information is not. Patterns and trends in social, political, or economic behavior are more readily subjected to quantitative analysis than is the flow of ideas among people or the difference made by exceptional individual leadership. If we are to understand the rapidly changing social world, we will need to include information that cannot be easily quantified as well as that which can. Furthermore, all social science requires comparison, which entails judgments of which phenomena are more or less alike in degree (i.e., quantitative differences) or in kind (i.e., qualitative differences).

    Two excellent recent studies exemplify this point. In Coercive Cooperation (1992), Lisa L. Martin sought to explain the degree of international cooperation on economic sanctions by quantitatively analyzing ninety-nine cases of attempted economic sanctions from the post–World War II era. Although this quantitative analysis yielded much valuable information, certain causal inferences suggested by the data were ambiguous; hence, Martin carried out six detailed case studies of sanctions episodes in an attempt to gather more evidence relevant to her causal inference. For Making Democracy Work (1993), Robert D. Putnam and his colleagues interviewed 112 Italian regional councillors in 1970, 194 in 1976, and 234 in 1981–1982, and 115 community leaders in 1976 and 118 in 1981–1982. They also sent a mail questionnaire to over 500 community leaders throughout the country in 1983. Four nationwide mass surveys were undertaken especially for this study. Nevertheless, between 1976 and 1989 Putnam and his colleagues conducted detailed case studies of the politics of six regions. Seeking to satisfy the interocular traumatic test, the investigators gained an intimate knowledge of the internal political maneuvering and personalities that have animated regional politics over the last two decades (Putnam 1993:190).

    The lessons of these efforts should be clear: neither quantitative nor qualitative research is superior to the other, regardless of the research problem being addressed. Since many subjects of interest to social scientists cannot be meaningfully formulated in ways that permit statistical testing of hypotheses with quantitative data, we do not wish to encourage the exclusive use of quantitative techniques. We are not trying to get all social scientists out of the library and into the computer center, or to replace idiosyncratic conversations with structured interviews. Rather, we argue that nonstatistical research will produce more reliable results if researchers pay attention to the rules of scientific inference—rules that are sometimes more clearly stated in the style of quantitative research. Precisely defined statistical methods that undergird quantitative research represent abstract formal models applicable to all kinds of research, even that for which variables cannot be measured quantitatively. The very abstract, and even unrealistic, nature of statistical models is what makes the rules of inference shine through so clearly.

    The rules of inference that we discuss are not relevant to all issues that are of significance to social scientists. Many of the most important questions concerning political life—about such concepts as agency, obligation, legitimacy, citizenship, sovereignty, and the proper relationship between national societies and international politics—are philosophical rather than empirical. But the rules are relevant to all research where the goal is to learn facts about the real world. Indeed, the distinctive characteristic that sets social science apart from casual observation is that social science seeks to arrive at valid inferences by the systematic use of well-established procedures of inquiry. Our focus here on empirical research means that we sidestep many issues in the philosophy of social science as well as controversies about the role of postmodernism, the nature and existence of truth, relativism, and related subjects. We assume that it is possible to have some knowledge of the external world but that such knowledge is always uncertain.

    Furthermore, nothing in our set of rules implies that we must run the perfect experiment (if such a thing existed) or collect all relevant data before we can make valid social scientific inferences. An important topic is worth studying even if very little information is available. The result of applying any research design in this situation will be relatively uncertain conclusions, but so long as we honestly report our uncertainty, this kind of study can be very useful. Limited information is often a necessary feature of social inquiry. Because the social world changes rapidly, analyses that help us understand those changes require that we describe them and seek to understand them contemporaneously, even when uncertainty about our conclusions is high. The urgency of a problem may be so great that data gathered by the most useful scientific methods might be obsolete before it can be accumulated. If a distraught person is running at us swinging an ax, administering a five-page questionnaire on psychopathy may not be the best strategy. Joseph Schumpeter once cited Albert Einstein, who said "as far as our propositions are certain, they do not say anything about reality, and as far as they do say anything about reality, they are not certain" (Schumpeter [1936] 1991:298–99). Yet even though certainty is unattainable, we can improve the reliability, validity, certainty, and honesty of our conclusions by paying attention to the rules of scientific inference. The social science we espouse seeks to make descriptive and causal inferences about the world. Those who do not share the assumptions of partial and imperfect knowability and the aspiration for descriptive and causal understanding will have to look elsewhere for inspiration or for paradigmatic battles in which to engage.

    In sum, we do not provide recipes for scientific empirical research. We offer a number of precepts and rules, but these are meant to discipline thought, not stifle it. In both quantitative and qualitative research, we engage in the imperfect application of theoretical standards of inference to inherently imperfect research designs and empirical data. Any meaningful rules admit of exceptions, but we can ask that exceptions be justified explicitly, that their implications for the reliability of research be assessed, and that the uncertainty of conclusions be reported. We seek not dogma, but disciplined thought.

    1.1.2 DEFINING SCIENTIFIC RESEARCH IN THE SOCIAL SCIENCES

    Our definition of scientific research is an ideal to which any actual quantitative or qualitative research, even the most careful, is only an approximation. Yet, we need a definition of good research, for which we use the word scientific as our descriptor.¹ This word comes with many connotations
