Advances in Comparative Survey Methods: Multinational, Multiregional, and Multicultural Contexts (3MC)
About this ebook
Covers the latest methodologies and research on international comparative surveys with contributions from noted experts in the field
Advances in Comparative Survey Methods examines the most recent advances in methodology and operations as well as the technical developments in international survey research. With contributions from a panel of international experts, the text includes information on the use of Big Data in concert with survey data, collecting biomarkers, the human subjects regulatory environment, innovations in data collection methodology and sampling techniques, use of paradata across the survey lifecycle, metadata standards for dissemination, and new analytical techniques.
This important resource:
- Contains contributions from key experts in their respective fields of study from around the globe
- Highlights innovative approaches in resource-poor settings and innovative ways of combining survey data with other data sources
- Includes material that is organized within the total survey error framework
- Presents extensive and up-to-date references throughout the book
Written for students, academic survey researchers, and market researchers engaged in comparative projects, this text represents a unique collaboration that features the latest methodologies and research on global comparative surveys.
Preface
This book is the product of a multinational, multiregional, and multicultural (3MC) collaboration. It summarizes work initially presented at the Second International 3MC Conference, held in Chicago in July 2016. The conference drew participants from 78 organizations and 32 countries, and we are thankful to them all for their contributions. We believe the enthusiasm on display throughout the 2016 conference has been captured in these pages and hope that this volume can serve as a useful platform for directing advances in 3MC research over the next decade.
The conference follows from the Comparative Survey Design and Implementation (CSDI) Workshops, held annually since 2003 (see https://www.csdiworkshop.org/), which provide a forum for those involved in research relevant to comparative survey methods.
We have many colleagues to thank for their efforts in support of this monograph. In particular, we are grateful to multiple staff at the University of Michigan, including Jamal Ali, Nancy Bylica, Kristen Cibelli Hibben, Mengyao Hu, Julie de Jong, Lawrence La Ferté, Ashanti Harris, Jennifer Kelley, and Yu‐chieh (Jay) Lin.
We are particularly indebted to Lars Lyberg, who pushed us to make every element of this book as strong as possible and provided detailed comments on the text.
We also thank the various committees that helped to organize the conference:
Conference Executive Committee
Beth‐Ellen Pennell (chair), University of Michigan
Timothy P. Johnson, University of Illinois at Chicago
Lars Lyberg, Inizio
Peter Ph. Mohler, COMPASS and University of Mannheim
Alisú Schoua‐Glusberg, Research Support Services
Tom W. Smith, NORC at the University of Chicago
Ineke A.L. Stoop, Institute for Social Research/SCP and the European Social Survey
Christof Wolf, GESIS‐Leibniz‐Institute for the Social Sciences
Conference Organizing Committee
Jennifer Kelley (chair), University of Michigan
Nancy Bylica, University of Michigan
Ashanti Harris, University of Michigan
Mengyao Hu, University of Michigan
Lawrence La Ferté, University of Michigan
Yu‐chieh (Jay) Lin, University of Michigan
Beth‐Ellen Pennell, University of Michigan
Conference Fundraising Committee
Peter Ph. Mohler (chair), COMPASS and University of Mannheim
Rachel Caspar, RTI International
Michele Ernst Staehli, FORS
Beth‐Ellen Pennell, University of Michigan
Evi Scholz, GESIS‐Leibniz‐Institute for the Social Sciences
Yongwei Yang, Google, Inc.
Conference Monograph Committee
Timothy P. Johnson (chair), University of Illinois at Chicago
Brita Dorer, GESIS‐Leibniz‐Institute for the Social Sciences
Beth‐Ellen Pennell, University of Michigan
Ineke A.L. Stoop, Institute for Social Research/SCP and the European Social Survey
Conference Short Course Committee
Alisú Schoua‐Glusberg (chair), Research Support Services
Brita Dorer, GESIS‐Leibniz‐Institute for the Social Sciences
Yongwei Yang, Google, Inc.
Support for the Second 3MC Conference was also multinational, and we wish to acknowledge and thank the following organizations for their generosity in helping to sponsor the Conference:
American Association for Public Opinion Research (AAPOR)
cApStAn
Compass, Mannheim, Germany
D3 Systems, Inc.
Data Documentation Initiative
European Social Survey
FORS
GESIS‐Leibniz‐Institute for the Social Sciences
Graduate Program in Survey Research, Department of Public Policy, University of Connecticut
ICPSR, University of Michigan
IMPAQ International
International Statistical Institute
Ipsos Public Affairs
John Wiley & Sons
Joint Program in Survey Methodology, University of Maryland
Mathematica Policy Research
MEA/Max Planck Institute for Social Law and Social Policy
Nielsen
NORC at the University of Chicago
Oxford University Press
Program in Survey Methodology, University of Michigan
Research Support Services, Inc.
RTI International
Survey Methods Section, American Statistical Association
Survey Research Center, Institute for Social Research, University of Michigan
Survey Lab, University of Chicago
WAPOR
Westat
In addition, we owe a special debt of gratitude to the University of Michigan’s Institute for Social Research for their exceptional support during the several years it has taken to organize and prepare this monograph.
We also thank the editors at Wiley, Divya Narayanan, Jon Gurstelle, and Kshitija Iyer, who have provided us with excellent support throughout the development and production process. We are likewise grateful to our editors at the University of Michigan, including Gail Arnold, Nancy Bylica, Julie de Jong, and Mengyao Hu, for all of their hard work and perseverance in formatting this book. Finally, the book cover was designed by Jennifer Kelley, who created a word cloud from the 2016 3MC Conference program.
This monograph is dedicated to the late Dr. Janet Harkness, who helped organize and lead the 3MC movement for many years. We have worked hard to make this contribution something she would be proud of.
8 June 2017
Timothy P. Johnson
Beth‐Ellen Pennell
Ineke A.L. Stoop
Brita Dorer
Notes on Contributors
Yasmin Altwaijri
King Faisal Specialist Hospital and Research Center
Riyadh
Kingdom of Saudi Arabia
Anna V. Andreenkova
Institute for Comparative Social Research (CESSI)
Moscow
Russia
Dorothée Behr
GESIS – Leibniz Institute for the Social Sciences
Mannheim
Germany
Isabel Benitez
Department of Psychology
Universidad Loyola Andalucía
Seville
Spain
Annelies G. Blom
Department of Political Science and Collaborative Research Center 884 Political Economy of Reforms
University of Mannheim
Mannheim
Germany
Axel Börsch‐Supan
Max Planck Institute for Social Law and Social Policy
Munich
Germany
Ralph Carstens
International Association for the Evaluation of Educational Achievement (IEA)
Hamburg
Germany
Noel Chavez
School of Public Health
University of Illinois at Chicago
Chicago, IL
USA
Young Ik Cho
Zilber School of Public Health
University of Wisconsin‐Milwaukee
Milwaukee, WI
USA
Kristen Cibelli Hibben
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Jan Cieciuch
University Research Priority Program Social Networks
University of Zurich
Zurich
Switzerland
and
Institute of Psychology
Cardinal Wyszynski University in Warsaw
Warsaw
Poland
Eldad Davidov
Institute of Sociology and Social Psychology
University of Cologne
Cologne
Germany
and
Department of Sociology and University Research Priority Program Social Networks
University of Zurich
Zurich
Switzerland
Julie A.J. de Jong
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Edith D. de Leeuw
Department of Methodology and Statistics
Utrecht University
Utrecht
The Netherlands
Steve Dept
cApStAn Linguistic Quality Control
Brussels
Belgium
Deana Desa
International Association for the Evaluation of Educational Achievement (IEA)
Hamburg
Germany
and
TOM TAILOR GmbH
Hamburg
Germany
Jill A. Dever
RTI International
Washington, DC
USA
Wil Dijkstra
Faculty of Social Sciences
VU University Amsterdam
Amsterdam
The Netherlands
Brita Dorer
GESIS – Leibniz Institute for the Social Sciences
Mannheim
Germany
Stephanie Eckman
RTI International
Washington, DC
USA
Irmtraud N. Gallhofer
European Social Survey
RECSM
Universitat Pompeu Fabra
Barcelona
Spain
Justin Gengler
Social and Economic Survey Research Institute (SESRI)
Qatar University
Doha
Qatar
Dirgha J. Ghimire
Population Studies Center
University of Michigan
Ann Arbor, MI
USA
Patricia L. Goerman
Center for Survey Measurement
US Census Bureau
Washington, DC
USA
Peter Granda
Inter‐university Consortium for Political and Social Research
University of Michigan
Ann Arbor, MI
USA
David Grant
RAND
Santa Monica, CA
USA
Heidi Guyer
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Marieke Haan
Faculty of Behavioural and Social Sciences
Sociology Department
University of Groningen
Groningen
The Netherlands
Steven G. Heeringa
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Kristen Himelein
World Bank
Washington, DC
USA
Allyson Holbrook
Survey Research Laboratory
University of Illinois at Chicago
Chicago, IL
USA
David Howell
Center for Political Studies
University of Michigan
Ann Arbor, MI
USA
Joop J. Hox
Department of Methodology and Statistics
Utrecht University
Utrecht
The Netherlands
Mengyao Hu
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Sarah M. Hughes
Mathematica Policy Research
Chicago, IL
USA
Matt Jans
ICF International
Rockville, MD
USA
Lilli Japec
Statistics Sweden
Stockholm
Sweden
Debra Javeline
Department of Political Science
University of Notre Dame
Notre Dame, IN
USA
Timothy P. Johnson
Survey Research Laboratory
University of Illinois at Chicago
Chicago, IL
USA
Jennifer Kelley
Institute for Social and Economic Research
University of Essex
Colchester, UK
and
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Florian Keusch
Department of Sociology
University of Mannheim
Mannheim
Germany
Achim Koch
GESIS – Leibniz Institute for the Social Sciences
Mannheim
Germany
Marta Kołczyńska
Institute of Philosophy and Sociology
Polish Academy of Sciences
Warsaw
Poland
Kirstine Kolsrud
NSD – Norwegian Centre for Research Data
Bergen
Norway
Elica Krajčeva
cApStAn Linguistic Quality Control
Brussels
Belgium
Jon A. Krosnick
Departments of Communication, Political Science, and Psychology
Stanford University
Stanford, CA
USA
Ashish Kumar Gupta
Kantar Public
Delhi
India
Charles Q. Lau
RTI International
Durham, NC
USA
Kien Trung Le
Social and Economic Survey Research Institute (SESRI)
Qatar University
Doha
Qatar
Sunghee Lee
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Eva Leissou
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Kimberley Lek
Department of Methodology and Statistics
Utrecht University
Utrecht
The Netherlands
Yu‐chieh (Jay) Lin
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Oliver Lipps
FORS
c/o University of Lausanne
Lausanne
Switzerland
Mingnan Liu
Menlo Park, CA
USA
Lars Lyberg
Inizio
Stockholm
Sweden
Frederic Malter
Max Planck Institute for Social Law and Social Policy
Munich
Germany
Ellen Marks
RTI International
Durham, NC
USA
Kevin McLaughlin
AT&T
Los Angeles, CA
USA
Mikelyn Meyers
Center for Survey Measurement
US Census Bureau
Washington, DC
USA
Kristen Miller
National Center for Health Statistics
Hyattsville, MD
USA
Zeina Mneimneh
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Peter Ph. Mohler
COMPASS
and
Department of Sociology
University of Mannheim
Mannheim
Germany
J. Daniel Montalvo
Department of Political Science and Latin American Public Opinion Project
Vanderbilt University
Nashville, TN
USA
Daniel Oberski
Department of Methodology and Statistics
Utrecht University
Utrecht
The Netherlands
Michael Ochsner
FORS
c/o University of Lausanne
Lausanne
Switzerland
Olena Oleksiyenko
Institute of Philosophy and Sociology
Polish Academy of Sciences
Warsaw
Poland
Yfke P. Ongena
Faculty of Arts
Center for Language and Cognition
University of Groningen
Groningen
The Netherlands
Jose‐Luis Padilla
Department of Methodology of Behavioral Sciences
University of Granada
Granada
Spain
Hyunjoo Park
HP Research
Seoul
Korea
Royce Park
California Health Interview Survey
UCLA Center for Health Policy Research
Los Angeles, CA
USA
Beth‐Ellen Pennell
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Emilia Peytcheva
RTI International
Research Triangle Park, NC
USA
Ninez A. Ponce
California Health Interview Survey
UCLA Center for Health Policy Research
Los Angeles, CA
USA
Przemek Powałko
Institute of Philosophy and Sociology
Polish Academy of Sciences
Warsaw
Poland
Michael Robbins
Department of Politics
Princeton University
Princeton, NJ
USA
and
Center for Political Studies
University of Michigan
Ann Arbor, MI
USA
Linn‐Merethe Rød
NSD – Norwegian Centre for Research Data
Bergen
Norway
Joseph W. Sakshaug
Institute for Employment Research
Nuremberg
Germany
Willem E. Saris
RECSM
Universitat Pompeu Fabra
Barcelona
Spain
and
University of Amsterdam
Amsterdam
The Netherlands
Dhananjay Bal Sathe
Centre for Monitoring Indian Economy Pvt Ltd.
Mumbai
India
Peter Schmidt
Department of Political Science
University of Giessen
Giessen
Germany
Matthew Schoene
Albion College
Albion, MI
USA
Alisú Schoua‐Glusberg
Research Support Services
Evanston, IL
USA
Wolfram Schulz
The Australian Council for Educational Research (ACER)
Melbourne
Australia
Norbert Schwarz
Department of Psychology
University of Southern California
Los Angeles, CA
USA
Lesli Scott
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Daniel Seddig
Institute of Sociology and Social Psychology
University of Cologne
Cologne
Germany
and
Department of Sociology and University Research Priority Program Social Networks
University of Zurich
Zurich
Switzerland
Katrine U. Segadal
NSD – Norwegian Centre for Research Data
Bergen
Norway
Mitchell A. Seligson
Department of Political Science and Latin American Public Opinion Project
Vanderbilt University
Nashville, TN
USA
Mandy Sha
RTI International
Chicago, IL
USA
Sharan Sharma
TAM India
Mumbai
India
and
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Sharon Shavitt
Gies College of Business
University of Illinois at Urbana‐Champaign
Champaign, IL
USA
Henning Silber
GESIS – Leibniz Institute for the Social Sciences
Mannheim
Germany
Kazimierz M. Slomczynski
Institute of Philosophy and Sociology
Polish Academy of Sciences (PAN)
Warsaw
Poland
and
CONSIRT
The Ohio State University
Columbus, OH
USA
Tom W. Smith
NORC
University of Chicago
Chicago, IL
USA
Tobias H. Stark
ICS
Utrecht University
Utrecht
The Netherlands
Ineke A.L. Stoop
The Netherlands Institute for Social Research (SCP)
The Hague
The Netherlands
Z. Tuba Suzer‐Gurtekin
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Irina Tomescu‐Dubrow
Institute of Philosophy and Sociology
Polish Academy of Sciences (PAN)
Warsaw
Poland
and
CONSIRT, The Ohio State University
Columbus, OH
USA
Can Tongur
Statistics Sweden
Stockholm
Sweden
Richard Valliant
Joint Program in Survey Methodology
University of Maryland
College Park, MD
USA
Fons J.R. van de Vijver
Department of Cultural Studies
Tilburg School of Humanities and Digital Sciences
Tilburg University
Tilburg
The Netherlands;
Work Well Unit
North‐West University
Potchefstroom
South Africa
and
School of Psychology
University of Queensland
St. Lucia
Australia
Anastas Vangeli
Institute of Philosophy and Sociology
Polish Academy of Sciences
Warsaw
Poland
Joseph Viana
California Health Interview Survey
UCLA Center for Health Policy Research
Los Angeles, CA
USA
Ana Villar
European Social Survey Headquarters
City, University of London
London
UK
Mahesh Vyas
Centre for Monitoring Indian Economy Pvt Ltd.
Mumbai
India
James Wagner
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Nicole Watson
Melbourne Institute of Applied Economic and Social Research
University of Melbourne
Melbourne
Australia
Saul Weiner
College of Medicine
University of Illinois at Chicago
Chicago, IL
USA
Luzia M. Weiss
Max Planck Institute for Social Law and Social Policy
Munich
Germany
Nathalie E. Williams
Department of Sociology and Jackson School of International Studies
University of Washington
Seattle, WA
USA
Mark Wooden
Melbourne Institute of Applied Economic and Social Research
University of Melbourne
Melbourne
Australia
Ilona Wysmulek
Institute of Philosophy and Sociology
Polish Academy of Sciences
Warsaw
Poland
Hongwei Xu
Survey Research Center
University of Michigan
Ann Arbor, MI
USA
Ting Yan
Westat
Rockville, MD
USA
Diana Zavala‐Rojas
European Social Survey
RECSM
Universitat Pompeu Fabra
Barcelona
Spain
Elizabeth J. Zechmeister
Department of Political Science and Latin American Public Opinion Project
Vanderbilt University
Nashville, TN
USA
Marcin W. Zieliński
Institute of Philosophy and Sociology
Polish Academy of Sciences
and
The Robert B. Zajonc Institute for Social Studies
University of Warsaw
Warsaw
Poland
Section I
Introduction
1
The Promise and Challenge of 3MC Research
Timothy P. Johnson1, Beth‐Ellen Pennell2, Ineke A.L. Stoop3, and Brita Dorer4
¹ Survey Research Laboratory, University of Illinois at Chicago, Chicago, IL, USA
² Survey Research Center, University of Michigan, Ann Arbor, MI, USA
³ The Netherlands Institute for Social Research (SCP), The Hague, The Netherlands
⁴ GESIS – Leibniz Institute for the Social Sciences, Mannheim, Germany
1.1 Overview
Life in the twenty‐first century becomes more interconnected daily, due in large measure to increasingly complex and reliable communication and transportation networks. This growth in connectivity has also led to increased awareness and, hopefully, greater understanding of and respect for individuals who represent diverse cultures, beliefs, and historical experiences. It is within this context that multinational, multiregional, and multicultural survey research, what we refer to as 3MC research, has developed over the past several decades. Beyond basic respect for human diversity, 3MC methods emphasize the importance of, and directly address, the comparability of survey data across nations, regions, and cultures. These methods represent an evolution of survey methodology away from opportunistic, ad hoc international data collection and analysis activities toward more coordinated efforts in which the nations, regions, and cultures of interest have equal representation and share equal responsibility for study planning and leadership.
Although precursors to 3MC research date back to the immediate post‐WWII era (see Smith [1] for a brief history of international survey research), the development and expansion of the 3MC research model became possible only more recently. Important precursors included the advent of formal training programs, such as the Summer Institute in Survey Research Techniques at the University of Michigan, and the founding of international collaborations that placed particular emphasis on comparative fieldwork methods [2], such as the International Social Survey Programme (ISSP) and the European Social Survey (ESS). These programs enabled worldwide dissemination of the methodological skills and expertise that would provide a foundation for successful 3MC efforts. Recent technological innovations, many of which are discussed in this volume, have also contributed to the growth and viability of 3MC research across diverse social, political, economic, and physical environments.
A unique contribution of 3MC research is the opportunity it represents to generate comparative knowledge that enhances human understanding and cooperation. Nations and cultural groups that have been historically ignored by the empirical social science community have found opportunities to participate and be represented in 3MC activities. 3MC research has also led to increased development and sharing of methodologies for conducting survey research in international and cross‐cultural environments. Evidence for this comes in the form of the annual meetings (since 2003) of the Comparative Survey Design and Implementation (CSDI) (https://www.csdiworkshop.org/) workshop, which focuses on the sharing of innovative methods and strategies for comparative research. It also comes in the form of larger international conferences designed to showcase achievements in the field. The first of these meetings was held in 2008 in Berlin, with a second meeting held in Chicago in 2016.
This is the third volume in the Wiley Series in Survey Methodology that focuses specifically on 3MC research practice. Although the 3MC acronym was first introduced in the 2010 volume [3, 4], the same concerns with multinational, multiregional, and multicultural research were clearly also present in the earlier volume edited by Harkness et al. [5]. We view this current volume as an extension of these earlier works, one that summarizes new 3MC developments over the past decade.
1.2 The Promise
3MC accomplishments have made a rich contribution to our knowledge of best practices for survey methodology, as this body of work has led to the development of new and modified methodologies. Some of these accomplishments include the now commonly employed questionnaire translation and adjudication protocols pioneered by Janet Harkness and colleagues [3, 4, 6, 7] and the efforts of Jowell et al. [8] to develop functionally equivalent fieldwork practices. Recent advances in the use of multigroup confirmatory factor analytic modeling for analysis of data from large numbers of nations [9, 10] and the procedures for cross‐cultural cognitive interviewing [11] (Chapter 10, this volume) are other examples. Countless additional developments can be found in the 800 pages of the Cross‐Cultural Survey Guidelines, which are being continuously updated by the University of Michigan (http://ccsg.isr.umich.edu/). In addition, this work has supported advances in the general field of survey research. The growing availability of large numbers of national‐level surveys collected as part of 3MC initiatives, for example, enables for the first time analyses that treat surveys themselves as the unit of analysis, permitting research into basic survey design problems that was not previously possible. Several such examples are presented in this volume. In Chapter 5, Koch examines the quality of sample composition across several types of within‐household respondent selection procedures using a sample of 153 national surveys conducted across six waves of the ESS. The findings presented make an important contribution to an often overlooked potential source of coverage and nonresponse error. Similarly, Andreenkova (Chapter 14, this volume) examines interview language choice protocols and documentation across multiple comparative projects, providing insights not previously available, and Chapters 43–47 also analyze the quality of comparative surveys across multiple dimensions.
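To make the idea of surveys as the unit of analysis concrete, here is a minimal sketch in Python (pandas). The dataset is hypothetical: the column names, values, and population benchmark are invented for illustration and are not taken from Koch's chapter.

```python
import pandas as pd

# Each row is a survey, not a respondent: a design feature of interest
# (within-household selection method) sits alongside a survey-level
# quality indicator (unweighted percent female vs. a benchmark).
surveys = pd.DataFrame({
    "country":    ["AT", "BE", "CH", "DE", "FR", "NL"],
    "ess_round":  [6, 6, 7, 7, 7, 6],
    "selection":  ["kish", "birthday", "kish", "birthday", "kish", "birthday"],
    "pct_female": [54.1, 52.3, 55.0, 51.8, 53.6, 52.9],
})

BENCHMARK_PCT_FEMALE = 51.5  # hypothetical population value

# Average absolute deviation from the benchmark, by selection method
surveys["abs_dev"] = (surveys["pct_female"] - BENCHMARK_PCT_FEMALE).abs()
print(surveys.groupby("selection")["abs_dev"].agg(["mean", "count"]))
```

With many surveys in hand, the same structure supports regression of survey-level error indicators on design features, which is exactly the kind of analysis that single-country datasets cannot sustain.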
The rapid growth in access to high quality 3MC data over the past several decades has also led to many new opportunities for social scientists to rigorously investigate social and policy relevant issues on a much larger scale than has been previously possible. These accomplishments are evident across a variety of fields and disciplines, including political science [12, 13], sociology [14], economics [15], and mental health [16]. One could make the case that the datasets produced from ongoing 3MC initiatives have led to a renaissance of sorts for empirical social science. It is also possible that a century from now these carefully documented survey archives will provide researchers with an essential resource for understanding our period in history.
1.3 The Challenge
The development and assessment of 3MC methods is of course far from, and will likely never be, complete. At the most basic level, the comparability in meaning and interpretation of measures applied across multiple groups will almost certainly continue to be challenged in many research settings. This message that cultural frameworks do not neatly map onto one another is one that readers will find being continually re‐emphasized throughout this volume. Demonstration of construct and measurement comparability by investigators will consequently continue to be a necessity. The ongoing accumulation of evidence across multiple initiatives may, however, lead to new approaches to addressing this old problem.
Another ongoing concern is the continued dominance of English as the source language for many 3MC efforts. Although a practical approach to organizing instrument development activities, this nonetheless accords what many would perceive to be undue influence to one language and cultural tradition. English is known to have a larger lexicon than any other language, which means that distinctions in wording in English cannot always be replicated in target languages [17]. In addition, the structure of English as a source questionnaire language to be translated into multiple languages is challenging: its ability to condense much information into few words often requires longer and wordier target versions, and many target languages need to be more specific, e.g. regarding gender, grammatical number, or terms like "the following"; if this additional information is not provided, comparability between the target versions may be impaired. These concerns related to the source language are rarely expressed but will need to be confronted proactively at some point. This brings to the surface a related issue, as survey research itself remains a Western‐oriented social scientific methodology that seems most appropriate for applications within liberal democratic political environments. It is important to be sensitive to the concern that 3MC research may be viewed in some quarters as a form of cultural hegemony. Indeed, to participate in 3MC research, some researchers and respondents must submit to modes of communication that make broad assumptions about the nature of social relationships and self‐expression that they may see as nonnormative. Understanding varying perceptions of the meaning of information collected via survey research across cultures thus remains an important challenge.
Sadly, another challenge to 3MC research that must be confronted is the growth in nationalism now being witnessed in many nations. We are concerned that many of the policies that will accompany this ideology may lead to weakened relationships and declining interest in cooperation with cross‐national, cross‐regional, and cross‐cultural populations who will inevitably be defined as out‐groups. Competition for resources and economic advantage may also undermine national willingness to participate in international research collaborations that are not viewed as bringing immediate returns on investment. Relatedly, political leaders who are willing to discredit public opinion surveys within their own nations for partisan advantage are unlikely to support broader efforts of the type represented by 3MC projects. Unfortunately, many of the social forces that have led to government cynicism, distrust of official statistics, weakened survey climate, and lower response rates in many Western nations may also be weakening public support for 3MC research. Indeed, history and recent events provide instruction regarding the fragile nature of cross‐national and cross‐cultural relationships. Ironically, 3MC research is likely to be most necessary during precisely those periods in time when it will be most challenging to undertake.
Another ongoing challenge to 3MC research is the need to further develop its theoretical underpinnings. Currently, much 3MC work is accomplished within the invaluable total survey error (TSE) framework [18]. Although important efforts have been made to integrate 3MC concerns into this paradigm (see Chapter 2 in this volume), a generalizable model of how culture influences various survey‐related error processes has yet to be established. Some potentially useful cross‐cultural frameworks have been developed in other disciplines (the models of Hofstede [19], Schwartz [20], and Triandis [21] are relevant examples), and a few initial steps have been taken in this direction [22, 23], but we are far from a consensus as to how to best proceed. Looking forward, interdisciplinary collaborations similar to those forged between survey methodologists and cognitive psychologists some 30 years ago [24] might be one productive strategy to consider. Working to establish firm theoretical foundations is an important part of 3MC’s future that has yet to be addressed.
1.4 The Current Volume
This volume contains four dozen chapters distilled from the 2016 3MC conference held in Chicago. They are organized into sections that focus on a wide variety of topics relevant to ongoing developments in applied 3MC research. In addition to this chapter, the first section includes a conceptual piece by Tom Smith (Chapter 2) that considers TSE within the context of 3MC research. In doing so, he elaborates on the concept of "comparison error," which we anticipate will become an important element of the 3MC TSE model. Chapter 3, contributed by Jose‐Luis Padilla, Isabel Benitez, and Fons J. R. van de Vijver, addresses notions of equivalence and comparability from a mixed methods perspective.
Two chapters examine sampling issues. Chapter 4, by Stephanie Eckman, Kristen Himelein, and Jill Dever, provides insights and examples of the effective use of geographic information system (GIS) technology as part of household sample designs in developing nations. As mentioned earlier, Koch examines various methods of within‐household respondent selection and their effects on data quality in Chapter 5.
The section on cross‐cultural questionnaire design and testing presents a number of important innovations. Ana Villar, Sunghee Lee, Ting Yan, and Brita Dorer first provide an overview of questionnaire design and testing within the 3MC context (Chapter 6). This is followed by a contribution from Anna Andreenkova and Debra Javeline, who discuss strategies for detecting and addressing differences in question sensitivity in a comparative context (Chapter 7). An online multinational study, designed to re‐evaluate a series of classic split‐ballot questionnaire experiments previously conducted in monocultural settings, is presented in Chapter 8 by Henning Silber, Tobias Stark, Annelies Blom, and Jon Krosnick. In Chapter 9, Mengyao Hu, Sunghee Lee, and Hongwei Xu discuss the use of anchoring vignettes and provide an empirical example that includes an innovative sensitivity analysis. Cognitive interview methods for evaluating question comparability are reviewed in Chapter 10 by Kristen Miller, and Hyunjoo Park and Patricia Goerman consider best approaches to conducting cognitive interviews with non‐English‐speaking respondents in Chapter 11. Patricia Goerman, Mikelyn Meyers, Mandy Sha, Hyunjoo Park, and Alisu Schoua‐Glusberg investigate, in Chapter 12, the degree to which monolingual and bilingual cognitive testing respondents are able to identify the same issues with survey questionnaires. The final chapter in this section (Chapter 13), by Timothy Johnson, Allyson Holbrook, Young Ik Cho, Sharon Shavitt, Noel Chavez, and Saul Weiner, investigates the usefulness of behavior coding as a method for comparing the cognitive processing of survey questions across cultural groups.
A section concerned with languages, translation, and adaptation includes four chapters. As mentioned earlier, Anna Andreenkova (Chapter 14) explores available procedures and documentation concerning the interview language selection process in 3MC surveys, a topic that has previously received little attention but has important ramifications for sample coverage, respondent cooperation, and measurement error. In Chapter 15, Emilia Peytcheva reviews the effects of interview language on respondent answers. Dorothée Behr, Steve Dept, and Elica Krajceva discuss the documentation of a sophisticated survey translation and monitoring process in Chapter 16, and Diana Zavala‐Rojas, Willem Saris, and Irmtraud Gallhofer consider, in Chapter 17, strategies for preventing differences in translated survey items using the Survey Quality Prediction (SQP) system.
In the following section, three chapters address issues relating to mixed modes and methods within the 3MC context. The first of these is Chapter 18 by Edith de Leeuw, Tuba Suzer‐Gurtekin, and Joop Hox, who provide an overview of methods for the design and implementation of mixed‐mode surveys. Chapter 19, by Tuba Suzer‐Gurtekin, Richard Valliant, Steven Heeringa, and Edith de Leeuw, provides an overview of design, estimation, and adjustment methods for mixed‐mode surveys. In Chapter 20, Nathalie Williams and Dirgha Ghimire discuss new technologies for mixed methods data collection in 3MC surveys.
In the next section, another three chapters focus on issues of response style variability across cultures. In the first of these (Chapter 21), Sunghee Lee, Florian Keusch, Norbert Schwarz, Mingnan Liu, and Tuba Suzer‐Gurtekin examine the cross‐national comparability of response patterns to subjective probability questions. In Chapter 22, Mingnan Liu, Tuba Suzer‐Gurtekin, Florian Keusch, and Sunghee Lee compare multiple methods for the detection of acquiescent and extreme response styles. Ting Yan and Mengyao Hu evaluate the effects of translation on respondent use of survey response scales when responding to a generic self‐rated health question in Chapter 23.
A large section, containing 10 chapters, explores issues of data collection in 3MC surveys. In Chapter 24, Kristen Cibelli Hibben, Beth‐Ellen Pennell, Sarah Hughes, Jennifer Kelley, and Yu‐chieh Lin present an informative set of case studies that highlight challenges to cross‐national data collection and potential solutions. Data collection challenges specific to sub‐Saharan Africa are discussed by Sarah Hughes and Yu‐chieh Lin in Chapter 25. Justin Gengler, Kien Trung Le, and David Howell, in Chapter 26, focus on data collection challenges unique to fieldwork in the Arab Gulf region. In Chapter 27, J. Daniel Montalvo, Mitchell Seligson, and Elizabeth Zechmeister provide a similar overview of their data collection experience in Latin American and Caribbean nations. Issues conducting survey research in India and China are discussed in Chapter 28 by Charles Lau, Ellen Marks, and Ashish Kumar Gupta. In Chapter 29, Nicole Watson, Eva Leissou, Heidi Guyer, and Mark Wooden present best practices for panel maintenance and retention. Luzia Weiss, Joseph Sakshaug, and Axel Börsch‐Supan provide an overview of the use of biomarkers and other biometric data in 3MC research in Chapter 30, and Yfke Ongena, Marieke Haan, and Wil Dijkstra discuss the multinational use of event history calendars in Chapter 31. Finally, Julie de Jong provides a broad overview of ethical considerations in the conduct of 3MC research in Chapter 32, and Kirstine Kolsrud, Katrine Segadal, and Linn‐Merethe Rød focus on ethical and legal issues surrounding the linking of survey and auxiliary data in Chapter 33.
Three chapters examine quality control and monitoring. Lesli Scott, Peter Mohler, and Kristen Cibelli Hibben discuss the organization and management of 3MC surveys from a TSE perspective in Chapter 34. In Chapter 35, Zeina Mneimneh, Lars Lyberg, Sharan Sharma, Mahesh Vyas, Dhananjay Bal Sathe, Frederic Malter, and Yasmin Altwaijri provide multiple case study examples of best practices for the monitoring of interviewer behaviors in 3MC research. In Chapter 36, Michael Robbins provides an overview of strategies for preventing and detecting falsification in 3MC surveys.
Survey nonresponse is also considered in a separate section containing three chapters. In the first of these (Chapter 37), James Wagner and Ineke Stoop discuss nonresponse and nonresponse bias from a comparative perspective. In Chapter 38, Matt Jans, Kevin McLaughlin, Joseph Viana, David Grant, Royce Park, and Ninez Ponce investigate cultural correlates of nonresponse in the California Health Interview Survey, and Oliver Lipps and Michael Ochsner consider, in Chapter 39, the degree to which offering respondents a greater choice of languages for completing interviews improves, or not, the representativeness of survey samples.
In the next section, two chapters address current advances in the analysis of data from 3MC surveys. In Chapter 40, Deana Desa, Fons van de Vijver, Ralph Carstens, and Wolfram Schulz discuss measurement invariance problems and solutions in international large‐scale assessments of educational achievement. In Chapter 41, Kimberley Lek, Daniel Oberski, Eldad Davidov, Jan Cieciuch, Daniel Seddig, and Peter Schmidt present an empirical application of approximate measurement invariance in 3MC research.
Another section examines data harmonization, documentation, and dissemination. An overview of these topics is presented in the introductory Chapter 42 by Peter Granda. This is followed by five chapters contributed by researchers at the CONSIRT (Cross‐National Studies: Interdisciplinary Research and Training) program at the Polish Academy of Sciences and Ohio State University. Chapter 43, by Kazimierz Slomczynski and Irina Tomescu‐Dubrow, discusses basic principles of survey data recycling. Data harmonization and data documentation quality in 3MC surveys are discussed by Marta Kołczyńska and Matthew Schoene in Chapter 44. The identification of processing errors is discussed in Chapter 45 by Olena Oleksiyenko, Ilona Wysmulek, and Anastas Vangeli. In Chapter 46, Marta Kołczyńska and Kazimierz Slomczynski examine the potential usefulness of item metadata as controls for ex post harmonization in cross‐national survey projects. In Chapter 47, Marcin Zieliński, Przemek Powałko, and Marta Kołczyńska focus on the application of statistical weights in cross‐national survey projects.
The final chapter (48) in this volume, by Lars Lyberg, Lilli Japec, and Can Tongur, discusses some prevailing problems in 3MC research and looks forward to the future of comparative survey research. These 48 chapters collectively address both the promise and the challenges of 3MC research.
References
1 Smith, T.W. (2010). The globalization of survey research. In: Survey Methods in Multinational, Multiregional, and Multicultural Contexts (ed. J.A. Harkness, M. Braun, B. Edwards, et al.), 477–484. Hoboken, NJ: Wiley.
2 Jowell, R. (1998). How comparative is comparative research? American Behavioral Scientist 42: 168–177.
3 Harkness, J.A., Braun, M., Edwards, B. et al. (ed.) (2010). Survey Methods in Multinational, Multiregional, and Multicultural Contexts. Hoboken, NJ: Wiley.
4 Harkness, J.A., Villar, A., and Edwards, B. (2010). Translation, adaptation, and design. In: Survey Methods in Multinational, Multiregional, and Multicultural Contexts (ed. J.A. Harkness, M. Braun, B. Edwards, et al.), 117–140. Hoboken, NJ: Wiley.
5 Harkness, J.A., van de Vijver, F.J.R., and Mohler, P.P. (ed.) (2003). Cross‐Cultural Survey Methods. Hoboken, NJ: Wiley.
6 Harkness, J.A. and Schoua‐Glusberg, A. (1998). Questionnaires in translation. In: Cross‐Cultural Survey Equivalence (ed. J.A. Harkness), 87–126. Mannheim: ZUMA.
7 Harkness, J., Pennell, B.‐E., and Schoua‐Glusberg, A. (2004). Survey questionnaire translation and assessment. In: Methods for Testing and Evaluating Survey Questionnaires (ed. S. Presser, J.M. Rothgeb, M.P. Couper, et al.), 453–473. Hoboken, NJ: Wiley.
8 Jowell, R., Roberts, C., Fitzgerald, R., and Eva, G. (2007). Measuring Attitudes Cross‐Nationally: Lessons from the European Social Survey. Los Angeles, CA: Sage.
9 Davidov, E., Schmidt, P., and Billiet, J. (2011). Cross‐Cultural Analysis: Methods and Applications, Second Edition. New York: Routledge.
10 Davidov, E., Cieciuch, J., Meuleman, B. et al. (2015). The comparability of measurements of attitudes toward immigration in the European Social Survey: exact versus approximate measurement equivalence. Public Opinion Quarterly 79: 244–266.
11 Willis, G. (2015). The practice of cross‐cultural cognitive interviewing. Public Opinion Quarterly 79: 359–395.
12 Dalton, R.J. and Welzel, C. (2014). The Civic Culture Transformed: From Allegiant to Assertive Citizens. New York: Cambridge University Press.
13 Inglehart, R. and Welzel, C. (2005). Modernization, Cultural Change and Democracy: The Human Development Sequence. New York: Cambridge University Press.
14 Breen, M.J. (2017). Values and Identities in Europe: Evidence from the European Social Survey. New York: Routledge.
15 Blanchflower, D.G. and Oswald, A.J. (1992). The Wage Curve. Cambridge: MIT Press.
16 Kessler, R.C. and Üstün, T.B. (2008). The WHO World Mental Health Surveys: Global Perspectives on the Epidemiology of Mental Disorders. New York: Cambridge University Press.
17 Harkness, J., Pennell, B.‐E., Villar, A. et al. (2008). Translation procedures and translation assessment in the World Mental Health Survey Initiative. In: The WHO World Mental Health Surveys: Global Perspectives on the Epidemiology of Mental Disorders (ed. R. Kessler and B. Üstün), 91–113. New York: Cambridge University Press.
18 Biemer, P.P. and Lyberg, L. (ed.) (2010). Special issue: total survey error. Public Opinion Quarterly 74 (5): 817–1045.
19 Hofstede, G. (2001). Culture's Consequences, 2nd Edition. Thousand Oaks, CA: Sage.
20 Schwartz, S.H. (1992). Universals in the content and structure of values: theoretical advances and empirical tests in 20 countries. In: Advances in Experimental Social Psychology (ed. M.P. Zanna), 1–65. San Diego, CA: Academic Press.
21 Triandis, H.C. (1996). The psychological measurement of cultural syndromes. American Psychologist 51: 407–417.
22 Schwarz, N., Oyserman, D., and Peytcheva, E. (2010). Cognition, communication, and culture: implications for the survey response process. In: Survey Methods in Multinational, Multiregional, and Multicultural Contexts (ed. J.A. Harkness, M. Braun, B. Edwards, et al.), 177–190. Hoboken, NJ: Wiley.
23 Uskul, A.K., Oyserman, D., and Schwarz, N. (2010). Cultural emphasis on honor, modesty, or self‐enhancement: implications for the survey‐response process. In: Survey Methods in Multinational, Multiregional, and Multicultural Contexts (ed. J.A. Harkness, M. Braun, B. Edwards, et al.), 191–201. Hoboken, NJ: Wiley.
24 Jabine, T.B., Straf, M.L., Tanur, J.M., and Tourangeau, R. (1984). Cognitive Aspects of Survey Methodology: Building a Bridge Between Disciplines. Washington, DC: National Academy Press.
2
Improving Multinational, Multiregional, and Multicultural (3MC) Comparability Using the Total Survey Error (TSE) Paradigm
Tom W. Smith
NORC, University of Chicago, Chicago, IL, USA
2.1 Introduction
Durkheim [1] noted in 1895 that "comparative sociology is not a particular branch of sociology; it is sociology itself, in so far as it ceases to be purely descriptive and aspires to account for facts."
Of course, this also applies to the social sciences as a whole. Genov [2] has observed that "contemporary sociology stands and falls with its own internationalization… The internationalization of sociology is the unfinished agenda of the sociological classics. It is the task of contemporary and future sociologists."
Likewise for political science, Brady [3] has noted that cross‐national research has produced "theoretical insight about political participation, the role of values in economic growth and political action, and many other topics."
Similarly, in economics, a cross‐national approach has become imperative as globalization has restructured labor markets and social networks in fundamental ways [4–6].
As the Working Group on the Outlook for Comparative International Social Science Research [7] has noted, "a range of research previously conceived as 'domestic' … clearly needs to be reconceptualized in light of recent comparative/international findings."
Fortunately, the social sciences are increasingly recognizing the value of multinational research. At the Social Science Research Council (SSRC)'s 2006 meeting on Fostering International Collaboration in the Social Sciences, Ian Diamond, head of the Economic and Social Research Council (United Kingdom), indicated that "social science is a global undertaking and that it has been increasingly so for years," and David Lightfoot of the National Science Foundation noted that a major reason for international collaboration is similar to that for interdisciplinary research: "it is one of the most productive ways of making new and innovative connections … [and] none of the social sciences is essentially national in character…" [8].
Multinational, multiregional, and multicultural (3MC)¹ research thus not only has great promise, but is an absolute necessity to understand contemporary human societies. To be useful, comparative survey research needs to meet high scientific standards of reliability and validity and achieve functional equivalence across surveys. This is challenging because comparative survey research is a large‐scale and complex endeavor that must be well designed and well executed to minimize error and maximize equivalence. This goal can be notably advanced by the application of the total survey error (TSE) paradigm to 3MC survey research.
First, this chapter examines the concept of TSE, including interactions between the error components, its application when multiple surveys are involved, and comparison error across multinational surveys. Second, obtaining functional equivalence and similarity in multinational surveys is discussed. Third, the challenges of conducting multinational surveys are considered, along with how combining traditional approaches for maximizing functional equivalence with the TSE paradigm can minimize comparison error and maximize comparative reliability and validity. Fourth, attention is given to minimizing comparison error in question wordings in general and to the availability of online resources for developing and testing items to be used in multinational surveys; special attention is given to dealing with differences in language, structure, and culture. Fifth, issues relating to evaluating scales designed to measure constructs in comparative survey research are examined. Sixth, the combined use of the multilevel, multisource (MLMS) approach and TSE in multinational surveys is considered. Finally, the importance of documentation is discussed.
2.2 Concept of Total Survey Error
TSE is the sum of all the myriad ways in which survey measurement can go wrong [9]. As Judith Lessler [10, p. 405] notes, it is "the difference between its actual (true) value for the full target population and the value estimated from the survey…." Under this definition, TSE refers only to differences between true values and measured values. But as commonly applied, the TSE paradigm is used to cover not only differences between the true and measured values but also differences in true values, or for comparing different true values. For example, Groves [11, p. S165] has noted, in regard to measurement error arising from the questionnaire, that "most current research is examining the effects of question order, structure, and wording and does not purport to investigate the measurement of error properties of questions. Instead, researchers note changes in response distributions associated with the alterations."
The concept of TSE has a long lineage stretching back at least to Deming [12], although the term itself seems to have been first used to describe what is now known as TSE by Brown [13]. It is noteworthy that every major description of TSE, from Deming [12] through Hansen et al. [14], Kish [15], Brown [13], Andersen et al. [16], Groves [17], Smith [9, 18, 19], Biemer and Lyberg [20], Alwin [21], and Weisberg [22] to Pennell et al. [23], has produced a different taxonomy with some unique elements. Moreover, as Deming [12] noted about his classification of errors in surveys, "the thirteen factors referred to are not always distinguishable and there are other ways of classifying them…."
What almost all have in common is (i) distinguishing two types of error: (a) variance or variable error, which is random and has no expected impact on mean values, and (b) bias or systematic error, which is directional and alters mean estimates, with TSE combining these two components; and (ii) classifying error into branching categories in which major categories are subsequently subdivided until presumably all survey error components are separately delineated and covered. The various TSE schemes differ primarily in how detailed the depiction of errors is and in the exact description and placement of certain errors within the overall classification schema. In general, the TSE classifications have become more detailed over time, and general categories of error have been more closely tied to specific, operational components of a survey (e.g. sampling frame, interviewer, questionnaire, postproduction data processing).

Figure 2.1 illustrates one model of TSE. It has two error flows from each error type or source, with variance indicated by a solid line and bias by a dashed line. It has 35 components (the rightmost boxes in each flow path).² This model, however, does not delineate all possible subcategories of error components. Many of the terminating boxes can be subdivided even further or organized in alternative, more detailed ways. For example, the box "Medium" could be subdivided in various ways. As Table 2.1 from Smith and Kim [24] shows, Medium could be broken down further by mode, the use of computers, and the utilization of interviewers. Another example of an alternative formulation is shown in Table 2.2 from Smith [19], which takes the "Refusal," "Unavailable," and "Other" boxes under Nonresponse in Figure 2.1 and reorganizes them by level or type of nonresponse into nine categories.
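Point (i) can be stated compactly with the standard mean-squared-error identity: for an estimator of a true population value, total error combines a variance component (variable error) and a squared bias component (systematic error). In LaTeX notation:

```latex
\mathrm{MSE}(\hat{\theta})
  = \mathbb{E}\!\left[(\hat{\theta} - \theta)^{2}\right]
  = \underbrace{\operatorname{Var}(\hat{\theta})}_{\text{variable error}}
  + \underbrace{\left(\mathbb{E}[\hat{\theta}] - \theta\right)^{2}}_{\text{squared bias (systematic error)}}
```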
Figure 2.1 Total survey error. [Tree diagram: Total survey error branches into Sampling (Frame, Selection, and Statistical) and Nonsampling (Observation and Nonobservation).]
Table 2.1 Typology of surveys by mode and medium.
Source: Smith and Kim [24].
ACASI, audio computer‐assisted self‐interview; AVCASI, audio–video computer‐assisted self‐interview; CAPI, computer‐assisted personal interview; CASI, computer‐assisted self‐interview; CATI, computer‐assisted telephone interview; CSAQ, computerized self‐administered questionnaire; EQ, email questionnaire; IVR, interactive voice response; MQ, mail questionnaire; OQ, online questionnaire; PAPI, paper and pencil interview; SAQ, self‐administered questionnaire; T‐ACASI, telephone audio computer‐assisted self‐interview; TI, telephone interview; VCASI, video computer‐assisted self‐interview.
Table 2.2 Categorizing nonresponse error.
Source: Smith [19].
2.3 TSE Interactions
Interactions are a key component of TSE but have been underexamined in the TSE literature [9, 25, 26]. Discussions of the components of TSE have largely focused on each component separately and in turn. For example, Groves [11, p. S162] examined measurement error from the interviewer, survey questions, respondents, and mode but discussed only the direct effects of these four sources of measurement error, omitting mention of their combined effects.
As Groves [11, p. S168] further noted, "a problem ignored in most methodological investigations is the existence of relationships among different error sources…. (T)here is little work examining the relationships between different error sources."
This neglect is facilitated by the standard way of illustrating TSE, which shows each source of error as an isolated box with a separate flow. This wrongly contributes to the idea that the errors occur independently of one another. Nothing could be further from the truth. In fact, there are usually close connections and interactions among the different components of error. One might illustrate this by drawing lines between the components to indicate their interconnections; the resulting dense web of lines could visually convey the numerous and complicated ways in which errors are related to one another. But it would generate such a cluttered presentation that it would not be informative [27]. For further discussion of TSE interactions and how they might be presented, see Ref. [19].
2.4 TSE and Multiple Surveys
Traditionally, TSE has been used to describe the error structure of a single survey. But much survey research involves the use of two or more surveys, as in the analysis of time series, longitudinal panels, and comparative studies such as 3MC studies. The TSE perspective can easily be adapted to apply to and improve such multisurvey research [19].
In the case of comparative studies, which are the focus here, the TSE paradigm can be utilized in several valuable ways. First, ex ante, it can act as a guide or blueprint for designing studies. As the study is planned, each component of error can be considered with the object of minimizing that error. Using the TSE framework assures that all countries follow the same guidelines and deal with the same issues, which improves both the quality of the data and their comparability. Second, it can be a guide for evaluating the error that actually occurred once the surveys have been conducted. One can go through each component and assess the level and comparability of the error structures. This can be done both as part of a post hoc evaluation of just-collected primary data and well after data collection as a step in secondary analysis. Third, TSE can set a methodological research agenda for studying error structures in comparative surveys and for designing experiments and other analyses to understand and ultimately reduce TSE. Fourth, it extends beyond examining the separate components of error and provides a framework for combining the individual error components into their overall sum. Understanding the specific sources of error and the magnitude and direction of each is essential for improving surveys and reducing TSE, but understanding the overall TSE in existing surveys is necessary for optimizing their analysis. Finally, by considering error as an interaction across surveys, it establishes the basis for a statistical model for the handling of error across surveys. As Figure 2.2 illustrates, each component is measured in each survey (as illustrated by the stacked boxes), and across each component there is the potential for interaction in the error structures.
Figure 2.2 Total survey error: Comparison error. [Tree diagram as in Figure 2.1, with Total survey error branching into Sampling (Frame, Selection, etc.) and Nonsampling (Observation and Nonobservation), and with each component shown once per survey.]
2.5 TSE Comparison Error in Multinational Surveys
The interaction of errors across surveys leads to what Weisberg [22] refers to as "equivalence problems" or "comparability effects," or what has been referred to as "comparison error" [19]. One can think of such comparison error as occurring both for each component and in the aggregate across all components. For example, errors due to mistranslations are comparison errors that are interactions between the question-wording components of each study. The TSE paradigm indicates that one needs to consider all the many components of comparison error across surveys, including both the individual comparison errors from each component and the cumulative comparison error across all components.
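One minimal way to formalize this, offered here purely as illustrative notation rather than as a model taken from the chapter, indexes the error components k = 1, …, K (e.g. the 35 boxes of Figure 2.1) for two surveys A and B:

    CE_k = e_k^{(A)} - e_k^{(B)}, \qquad CE_{\mathrm{total}} = f(CE_1, \ldots, CE_K),

where e_k^{(S)} is the error contributed by component k in survey S, and f is left as an unspecified aggregation function, because how the component comparison errors and their interactions combine into a cumulative comparison error is itself a modeling decision.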
Ideally, one seeks no error in surveys. That of course is not possible, since certain errors such as sampling variance will exist in any sample survey and most other types of error cannot be totally eliminated. Short of that, one would want error that is minimized and similar across surveys: random error reduced in size and similar in magnitude across surveys and, if there is systematic error, error similar in magnitude and direction across surveys. For example, most surveys in most countries underrepresent men. If men are underrepresented to the same degree across surveys, then that bias does not contribute to comparison error across surveys. More problematic are studies in which error is minimized but differs across surveys. In the gender example, this would include a case in which men were slightly overrepresented in one survey and slightly underrepresented in another; some of the observed differences across surveys would be a methodological artifact of these opposite error structures. Perhaps equally problematic is the case in which errors are not minimized but are comparable in magnitude and direction. Each survey is then less reliable and accurate than in the minimized case, but the comparison error is not increased, because of the similarity of the error structures. The most problematic case is when error is neither minimized nor similar across surveys. This is like the overrepresented-versus-underrepresented gender example above, but the magnitude of the comparison error is greater because the opposite-direction gender biases are larger.
TSE can be used to minimize error in individual surveys and to minimize comparison error across surveys. The latter goal will often mean that "comparability may drive design" [28]. Taking question wording as an example, TSE can be used first to improve country-specific questions and then to optimize questions comparatively and thus minimize comparison error [29]. Consider a fourfold table in which questions are either good (e.g. reliable, valid, clear) or poor and either well translated or poorly or wrongly translated. Only the combination of good and well-translated questions is satisfactory for multinational survey research. Poor but well-translated items, good but poorly translated items, and of course poor and poorly translated items are not useful. To write better initial questions, there are many well-established strictures and guidelines that can and should be applied, such as the Gricean maxims of conversation [30], the Tourangeau and Rasinski [93, 94] model of the response process to survey questions, and standard item development techniques such as general and cognitive pretesting [31–33].
Comparison error is especially likely in studies involving a large number of countries and societies that are very different from one another (e.g. varying greatly in languages, structures, and cultures). More countries mean a larger number of components (e.g. research teams, field staffs, translations) that must be planned and coordinated. The larger number also makes the goal of achieving functional equivalence across all countries harder, since more bilateral comparisons must be optimized, and steps that make two countries more similar will often draw one or both of them away from still other societies. Of course, Figure 2.2 illustrates only the simplest of 3MC situations, one with just two surveys. The stacked boxes would increase to equal the number of surveys employed (i.e. the number of countries/cultures covered). Moreover, the number of comparison errors expands to an even greater extent. With two surveys there is one comparison per box. With five surveys there would be 10 bilateral comparisons, and with 10 surveys there would be 45 bilateral comparisons per box. Multiplied by the 35 boxes, the number of bilateral comparisons rises to 1575. If interactions are considered, tens of thousands of comparisons are generated. Likewise, greater dissimilarities across countries in language, structure, and culture mean that developing equally relevant, reliable, and valid items is more challenging. When major differences occur on all three of these broad dimensions, it is difficult to focus on each element, both because there is so much that needs to be carefully considered and because the elements interact with one another.
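These counts follow directly from the binomial coefficient n(n − 1)/2. A minimal sketch in Python, assuming the 35-component model of Figure 2.1, makes the quadratic growth explicit:

    from math import comb

    N_BOXES = 35  # terminating error-component boxes in Figure 2.1

    for n_surveys in (2, 5, 10):
        pairs = comb(n_surveys, 2)  # bilateral comparisons per box: n(n-1)/2
        print(f"{n_surveys} surveys: {pairs} comparisons per box, "
              f"{pairs * N_BOXES} in total")

Running this reproduces the figures in the text: 1 comparison per box (35 total) for two surveys, 10 (350) for five, and 45 (1575) for ten.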
The aim of minimizing error in general and comparison error in particular, in both the study design and its execution, does not mean that procedures need to be identical. Similar results can be achieved through different means. For example, having 100% valid interviews is the goal of most surveys, and this objective can be achieved through various case-verification procedures. In face-to-face surveys in the United States, the usual practice is to randomly recontact a portion of each interviewer's cases and confirm that an interview took place. In other countries, especially resource-poor ones, interviewers are often sent out in teams, with a supervisor accompanying the cadre of interviewers and confirming their work as it occurs. In Germany, the Allensbach Institute has chosen not to record the names and contact information of respondents, so verification reinterviews are not a possibility; it has instead developed special techniques to validate interviews internally. One such technique is to ask a factual question about some obscure matter that almost no one would know and then, at a later point in the interview, include a second question that in effect supplies the correct answer to the difficult knowledge item. In a real interview, respondents receive the tip too late for it to assist them in answering the knowledge item, but an interviewer fabricating interviews would be aware of the correct answer and would presumably sometimes use it to give a correct response to the knowledge item.
Additionally, new validation techniques have been developed for computer-assisted personal interviewing. One technique uses time stamps on the laptops to identify interviews completed much faster than average and/or spaced too closely in time [34]. Another procedure uses computer audio-recorded interviewing (CARI) [35]. CARI serves various substantive purposes, and it can also be used to monitor interviewers and to validate that an interview with a respondent actually took place; it cannot, however, readily verify that the interview was conducted with the correct respondent. Blasius and Thiessen [34] have also developed a series of analytical screening methods to detect faked data.
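As a concrete illustration of the time-stamp screen, the following Python sketch flags interviews that are much faster than the median or that start implausibly soon after the same interviewer's previous interview ended. It is a hypothetical minimal example: the field names and thresholds are invented for illustration and are not taken from Ref. [34] or from any particular CAPI system.

    from statistics import median

    def flag_suspect_interviews(interviews, speed_ratio=0.5, min_gap_minutes=5.0):
        """Flag interviews far faster than the median, or starting implausibly
        soon after the same interviewer's previous interview ended.

        `interviews` is a list of dicts with keys 'interviewer', 'start', and
        'end', where 'start' and 'end' are datetime.datetime values.
        """
        durations = [(iv["end"] - iv["start"]).total_seconds() for iv in interviews]
        typical = median(durations)
        flagged = []
        last_end = {}  # interviewer -> end time of their previous interview
        for iv in sorted(interviews, key=lambda r: (r["interviewer"], r["start"])):
            duration = (iv["end"] - iv["start"]).total_seconds()
            if duration < speed_ratio * typical:
                flagged.append((iv, "much faster than the median interview"))
            prev_end = last_end.get(iv["interviewer"])
            if prev_end is not None:
                gap = (iv["start"] - prev_end).total_seconds() / 60.0
                if gap < min_gap_minutes:
                    flagged.append((iv, "implausibly short gap between interviews"))
            last_end[iv["interviewer"]] = iv["end"]
        return flagged

In practice such flags would trigger follow-up (e.g. recontact or audio review) rather than automatic rejection, since fast interviews can also reflect skip patterns or practiced respondents.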
As the above examples attest, validation procedures can vary notably across organizations and surveys. This variation is not problematic to the extent that the same outcome of eliminating faked interviews is achieved.³ But if some techniques are less effective than others, then comparison error will occur in part because of these differences. One also does not want to permit legitimate, even necessary, variation to slip into unnecessary and often harmful deviance. Sometimes multinational differences are due merely to the application of usual, customary practices, and these may be neither locally optimal nor best for comparability. A balance is needed between the undesirable poles of rigid standardization and disruptive, uncoordinated variation.
If the study design features are equivalent and the procedures are successfully implemented, one might expect component errors to be similar and TSE thus to be on a par across surveys. While this is often a plausible expectation, it cannot be taken as guaranteed. True variation can interact with measurement error to create comparison error. The sensitivity of topics and questions often varies across societies [37]. For example, asking about drinking alcohol is not an especially sensitive topic in most European societies but would be in conservative Muslim countries. As a result, social desirability bias concerning alcohol consumption would likely be much greater in the latter than the former. Similarly, acquiescence bias appears to vary across countries [38, 39].
2.6 Components of TSE and Comparison Error
The TSE approach emphasizes the many components of error that need to be considered and requires that the cumulative, or total, effect of all of these sources of error be assessed. Likewise, comparison error needs to be examined across all of the components and its total impact evaluated. Many components of TSE have been shown to be important in establishing (or, conversely, in undermining) functional equivalence in multinational survey research, as discussed below [40]. For example, several studies have shown that undercoverage and sample bias have been major contributors to distortions in international student testing scores [41–43]. Other studies have shown the impact of differences in mode [44, 45]; interviewer recruitment, training, and supervision [46]; variations in hard-to-survey populations [47]; and nonresponse rates [48, 49].
Comparison error involving question wording is probably the single largest challenge in multinational survey research, ranging from straightforward translation issues to the more complex problems posed by structural and cultural factors. For that reason, it has been the main focus of methodological research in the multinational survey literature, and other error components have often been neglected. The TSE perspective makes it clear that all sources of error need to be closely examined.
2.7 Obtaining Functional Equivalence and Similarity in Comparative Surveys
Two or more surveys in two or more countries by their very nature cannot be identical or exactly the same. The target populations always differ, and differences relating both to conducting the surveys (e.g. sampling frames, field staffs, interviewer training, survey climate) and to the societies in general (language, structure, culture) are complex and substantial. Typically, the objective has been to maximize comparability or functional equivalence. What this means, however, is often unclear. Johnson [50] identified 52 different types of "equivalence" in multinational survey research, and he did not even search for uses of alternative terms such as "comparability."
Johnson describes functional equivalence as falling under the general category of interpretive equivalence, which he characterizes as involving "equivalence of meaning," and elaborates that functional equivalence is "universal in a qualitative, although not quantitative, sense." Johnson [50] further describes "concordance of meaning" as central to the concept of functional equivalence. At the item level, it indicates that across surveys questions would be understood in a similar manner, would operate as a similar stimulus, and would capture answers with similar response options. Mohler and Johnson [51] have argued that equivalence or identity (identicality) are "ideal concepts" and unattainable. They favor two alternative terms: "comparability," to indicate the closeness of concepts, and "similarity," to describe "how alike are measurement components – constructs, indicators, and items…" and as "the degree of overlap measures have in their representation of a given social construct…."
However, as used here, functional equivalence does not indicate identicality but rather the goal of striving to achieve as close a similarity as practical across comparative surveys at both the item and scale levels. It first considers item-level functional equivalence across matched pairs of questions and then scale-level functional equivalence across batteries of items. Item-level equivalence is obviously essential for comparisons between single measures, and single items are usually used for most demographics and many behaviors. The possibilities for testing the functional equivalence of single items quantitatively are very limited, since their distributions reflect a varying and undetectable mixture of substantive variation and measurement error; one can, however, examine their relationships with other variables to see whether the items perform as expected.
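For instance, one might check that a single item correlates with a related criterion variable in the same direction, and to a broadly similar degree, in every country. The sketch below is a hypothetical illustration using pandas; the variable names are invented for the example and do not come from any survey discussed here.

    import pandas as pd

    def item_criterion_correlations(df, item, criterion, group="country"):
        """Correlation of a single item with a criterion variable within each
        country. Similar signs and broadly similar magnitudes across countries
        are consistent with, though do not establish, item-level functional
        equivalence."""
        return df.groupby(group).apply(lambda g: g[item].corr(g[criterion]))

    # Hypothetical usage, assuming columns 'country', 'trust_item', and
    # 'civic_engagement':
    # print(item_criterion_correlations(df, "trust_item", "civic_engagement"))

A country whose correlation departs sharply in sign or size from the rest would then be a candidate for closer inspection of translation, structure, or culture effects rather than being treated as a substantive finding.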
Item-level functional equivalence is also a good foundation for building functionally equivalent scales. Most attitudinal analysis depends on the use of multi-item scales, and these are needed even more in multinational research than in monocultural research. The extra complexity and intersurvey variability of 3MC studies typically require more measures and more elaborate designs. Smith [52] has suggested, as a rule of thumb, that one needs three times as many indicators in a multinational survey to make a scale measuring a construct as reliable and valid as in a single-society survey.
In multinational survey research, the individual surveys need to be well designed and well executed, and they need to be designed to minimize comparison error. Applying the TSE perspective greatly facilitates reaching these goals. From the design and execution perspective, the goal is to have surveys designed with similar features (e.g. target population, content, interviewer training) and carried out to a similar (and hopefully high) level of attainment. That is, they need to be designed to do the same thing, and those intentions need to be successfully achieved. Similar designs and procedures alone are not enough, however,