ISBN-13: 9781119083740 / English / Hardcover / 2019 / 544 pp.
List of Contributors xix
Preface by Dr. Judith Tanur xxv
About the Companion Website xxix

1 Probability Survey-Based Experimentation and the Balancing of Internal and External Validity Concerns 1
Paul J. Lavrakas, Courtney Kennedy, Edith D. de Leeuw, Brady T. West, Allyson L. Holbrook, and Michael W. Traugott
  1.1 Validity Concerns in Survey Research 3
  1.2 Survey Validity and Survey Error 5
  1.3 Internal Validity 6
  1.4 Threats to Internal Validity 8
  1.5 External Validity 11
  1.6 Pairing Experimental Designs with Probability Sampling 12
  1.7 Some Thoughts on Conducting Experiments with Online Convenience Samples 12
  1.8 The Contents of this Book 15
  References 15

Part I Introduction to Section on Within-Unit Coverage 19
Paul J. Lavrakas and Edith D. de Leeuw

2 Within-Household Selection Methods: A Critical Review and Experimental Examination 23
Jolene D. Smyth, Kristen Olson, and Mathew Stange
  2.1 Introduction 23
  2.2 Within-Household Selection and Total Survey Error 24
  2.3 Types of Within-Household Selection Techniques 24
  2.4 Within-Household Selection in Telephone Surveys 25
  2.5 Within-Household Selection in Self-Administered Surveys 26
  2.6 Methodological Requirements of Experimentally Studying Within-Household Selection Methods 27
  2.7 Empirical Example 30
  2.8 Data and Methods 31
  2.9 Analysis Plan 34
  2.10 Results 35
  2.11 Discussion and Conclusions 40
  References 42

3 Measuring Within-Household Contamination: The Challenge of Interviewing More Than One Member of a Household 47
Colm O'Muircheartaigh, Stephen Smith, and Jaclyn S. Wong
  3.1 Literature Review 47
  3.2 Data and Methods 50
    Investigators 53
    Field/Project Directors 53
  3.3 The Sequence of Analyses 55
  3.4 Results 55
  3.5 Effect on Standard Errors of the Estimates 57
  3.6 Effect on Response Rates 58
  3.7 Effect on Responses 61
  3.8 Substantive Results 64
  References 64

Part II Survey Experiments with Techniques to Reduce Nonresponse 67
Edith D. de Leeuw and Paul J. Lavrakas

4 Survey Experiments on Interactions and Nonresponse: A Case Study of Incentives and Modes 69
A. Bianchi and S. Biffignandi
  4.1 Introduction 69
  4.2 Literature Overview 70
  4.3 Case Study: Examining the Interaction between Incentives and Mode 73
  4.4 Concluding Remarks 83
  Acknowledgments 85
  References 86

5 Experiments on the Effects of Advance Letters in Surveys 89
Susanne Vogl, Jennifer A. Parsons, Linda K. Owens, and Paul J. Lavrakas
  5.1 Introduction 89
  5.2 State of the Art on Experimentation on the Effect of Advance Letters 93
  5.3 Case Studies: Experimental Research on the Effect of Advance Letters 95
  5.4 Case Study I: Violence against Men in Intimate Relationships 96
  5.5 Case Study II: The Neighborhood Crime and Justice Study 100
  5.6 Discussion 106
  5.7 Research Agenda for the Future 107
  References 108

Part III Overview of the Section on the Questionnaire 111
Allyson Holbrook and Michael W. Traugott

6 Experiments on the Design and Evaluation of Complex Survey Questions 113
Paul Beatty, Carol Cosenza, and Floyd J. Fowler Jr.
  6.1 Question Construction: Dangling Qualifiers 115
  6.2 Overall Meanings of Questions Can Be Obscured by Detailed Words 117
  6.3 Are Two Questions Better than One? 119
  6.4 The Use of Multiple Questions to Simplify Response Judgments 121
  6.5 The Effect of Context or Framing on Answers 122
  6.6 Do Questionnaire Effects Vary Across Sub-groups of Respondents? 124
  6.7 Discussion 126
  References 128

7 Impact of Response Scale Features on Survey Responses to Behavioral Questions 131
Florian Keusch and Ting Yan
  7.1 Introduction 131
  7.2 Previous Work on Scale Design Features 132
  7.3 Methods 134
  7.4 Results 136
  7.5 Discussion 141
  Acknowledgment 143
  7.A Question Wording 143
    7.A.1 Experimental Questions (One Question Per Screen) 143
    7.A.2 Validation Questions (One Per Screen) 144
    7.A.3 GfK Profile Questions (Not Part of the Questionnaire) 145
  7.B Test of Interaction Effects 145
  References 146

8 Mode Effects Versus Question Format Effects: An Experimental Investigation of Measurement Error Implemented in a Probability-Based Online Panel 151
Edith D. de Leeuw, Joop Hox, and Annette Scherpenzeel
  8.1 Introduction 151
  8.2 Experiments and Probability-Based Online Panels 153
  8.3 Mixed-Mode Question Format Experiments 154
  8.4 Summary and Discussion 161
  Acknowledgments 162
  References 162

9 Conflicting Cues: Item Nonresponse and Experimental Mortality 167
David J. Ciuk and Berwood A. Yost
  9.1 Introduction 167
  9.2 Survey Experiments and Item Nonresponse 167
  9.3 Case Study: Conflicting Cues and Item Nonresponse 170
  9.4 Methods 170
  9.5 Issue Selection 171
  9.6 Experimental Conditions and Measures 172
  9.7 Results 173
  9.8 Addressing Item Nonresponse in Survey Experiments 174
  9.9 Summary 178
  References 179

10 Application of a List Experiment at the Population Level: The Case of Opposition to Immigration in the Netherlands 181
Mathew J. Creighton, Philip S. Brenner, Peter Schmidt, and Diana Zavala-Rojas
  10.1 Fielding the Item Count Technique (ICT) 183
  10.2 Analyzing the Item Count Technique (ICT) 185
  10.3 An Application of ICT: Attitudes toward Immigrants in the Netherlands 186
  10.4 Limitations of ICT 190
  References 192

Part IV Introduction to Section on Interviewers 195
Brady T. West and Edith D. de Leeuw

11 Race- and Ethnicity-of-Interviewer Effects 197
Allyson L. Holbrook, Timothy P. Johnson, and Maria Krysan
  11.1 Introduction 197
  11.2 The Current Research 205
  11.3 Respondents and Procedures 207
  11.4 Measures 207
  11.5 Analysis 210
  11.6 Results 211
  11.7 Discussion and Conclusion 219
  References 221

12 Investigating Interviewer Effects and Confounds in Survey-Based Experimentation 225
Paul J. Lavrakas, Jenny Kelly, and Colleen McClain
  12.1 Studying Interviewer Effects Using a Post Hoc Experimental Design 226
  12.2 Studying Interviewer Effects Using A Priori Experimental Designs 230
  12.3 An Original Experiment on the Effects of Interviewers Administering Only One Treatment vs. Interviewers Administering Multiple Treatments 232
  12.4 Discussion 239
  References 242

Part V Introduction to Section on Adaptive Design 245
Courtney Kennedy and Brady T. West

13 Using Experiments to Assess Interactive Feedback That Improves Response Quality in Web Surveys 247
Tanja Kunz and Marek Fuchs
  13.1 Introduction 247
  13.2 Case Studies - Interactive Feedback in Web Surveys 251
  13.3 Methodological Issues in Experimental Visual Design Studies 258
  References 269

14 Randomized Experiments for Web-Mail Surveys Conducted Using Address-Based Samples of the General Population 275
Z. Tuba Suzer-Gurtekin, Mahmoud Elkasabi, James M. Lepkowski, Mingnan Liu, and Richard Curtin
  14.1 Introduction 275
  14.2 Study Design and Methods 278
  14.3 Results 281
  14.4 Discussion 285
  References 287

Part VI Introduction to Section on Special Surveys 291
Michael W. Traugott and Edith D. de Leeuw

15 Mounting Multiple Experiments on Longitudinal Social Surveys: Design and Implementation Considerations 293
Peter Lynn and Annette Jäckle
  15.1 Introduction and Overview 293
  15.2 Types of Experiments that Can Be Mounted in a Longitudinal Survey 294
  15.3 Longitudinal Experiments and Experiments in Longitudinal Surveys 295
  15.4 Longitudinal Surveys that Serve as Platforms for Experimentation 296
  15.5 The Understanding Society Innovation Panel 298
  15.6 Avoiding Confounding of Experiments 299
  15.7 Allocation Procedures 301
  15.8 Refreshment Samples 304
  15.9 Discussion 305
  15.A Appendix: Stata Syntax to Produce Table 15.3 Treatment Allocations 306
  References 306

16 Obstacles and Opportunities for Experiments in Establishment Surveys Supporting Official Statistics 309
Diane K. Willimack and Jaki S. McCarthy
  16.1 Introduction 309
  16.2 Some Key Differences between Household and Establishment Surveys 310
  16.3 Existing Literature Featuring Establishment Survey Experiments 312
  16.4 Key Considerations for Experimentation in Establishment Surveys 314
  16.5 Examples of Experimentation in Establishment Surveys 318
  16.6 Discussion and Concluding Remarks 323
  Acknowledgments 324
  References 324

Part VII Introduction to Section on Trend Data 327
Michael W. Traugott and Paul J. Lavrakas

17 Tracking Question-Wording Experiments across Time in the General Social Survey, 1984-2014 329
Tom W. Smith and Jaesok Son
  17.1 Introduction 329
  17.2 GSS Question-Wording Experiment on Spending Priorities 330
  17.3 Experimental Analysis 330
  17.4 Summary and Conclusion 338
  17.A National Spending Priority Items 339
  References 340

18 Survey Experiments and Changes in Question Wording in Repeated Cross-Sectional Surveys 343
Allyson L. Holbrook, David Sterrett, Andrew W. Crosby, Marina Stavrakantonaki, Xiaoheng Wang, Tianshu Zhao, and Timothy P. Johnson
  18.1 Introduction 343
  18.2 Background 344
  18.3 Two Case Studies 347
  18.4 Implications and Conclusions 362
  Acknowledgments 364
  References 364

Part VIII Vignette Experiments in Surveys 369
Allyson Holbrook and Paul J. Lavrakas

19 Are Factorial Survey Experiments Prone to Survey Mode Effects? 371
Katrin Auspurg, Thomas Hinz, and Sandra Walzenbach
  19.1 Introduction 371
  19.2 Idea and Scope of Factorial Survey Experiments 372
  19.3 Mode Effects 373
  19.4 Case Study 378
  19.5 Conclusion 388
  References 390

20 Validity Aspects of Vignette Experiments: Expected "What-If" Differences between Reports of Behavioral Intentions and Actual Behavior 393
Stefanie Eifler and Knut Petzold
  20.1 Outline of the Problem 393
  20.2 Research Findings from Our Experimental Work 399
  20.3 Discussion 411
  References 413

Part IX Introduction to Section on Analysis 417
Brady T. West and Courtney Kennedy

21 Identities and Intersectionality: A Case for Purposive Sampling in Survey-Experimental Research 419
Samara Klar and Thomas J. Leeper
  21.1 Introduction 419
  21.2 Common Techniques for Survey Experiments on Identity 420
  21.3 How Limited Are Representative Samples for Intersectionality Research? 426
  21.4 Conclusions and Discussion 430
  Author Biographies 431
  References 431

22 Designing Probability Samples to Study Treatment Effect Heterogeneity 435
Elizabeth Tipton, David S. Yeager, Ronaldo Iachan, and Barbara Schneider
  22.1 Introduction 435
  22.2 Nesting a Randomized Treatment in a National Probability Sample: The NSLM 446
  22.3 Discussion and Conclusions 451
  Acknowledgments 453
  References 453

23 Design-Based Analysis of Experiments Embedded in Probability Samples 457
Jan A. van den Brakel
  23.1 Introduction 457
  23.2 Design of Embedded Experiments 458
  23.3 Design-Based Inference for Embedded Experiments with One Treatment Factor 460
  23.4 Analysis of Experiments with Clusters of Sampling Units as Experimental Units 466
  23.5 Factorial Designs 468
  23.6 A Mixed-Mode Experiment in the Dutch Crime Victimization Survey 472
  23.7 Discussion 477
  Acknowledgments 478
  References 478

24 Extending the Within-Persons Experimental Design: The Multitrait-Multierror (MTME) Approach 481
Alexandru Cernat and Daniel L. Oberski
  24.1 Introduction 481
  24.2 The Multitrait-Multierror (MTME) Framework 482
  24.3 Designing the MTME Experiment 487
  24.4 Statistical Estimation for the MTME Approach 489
  24.5 Measurement Error in Attitudes toward Migrants in the UK 491
  24.6 Results 494
  24.7 Conclusions and Future Research Directions 497
  Acknowledgments 498
  References 498

Index 501
Paul J. Lavrakas, PhD, is Senior Fellow at NORC at the University of Chicago, Adjunct Professor at the University of Illinois-Chicago, and Senior Methodologist at the Social Research Centre of the Australian National University and at the Office for Survey Research at Michigan State University.
Michael W. Traugott, PhD, is Research Professor in the Institute for Social Research at the University of Michigan.
Courtney Kennedy, PhD, is Director of Survey Research at Pew Research Center in Washington, DC.
Allyson L. Holbrook, PhD, is Professor of Public Administration and Psychology at the University of Illinois-Chicago.
Edith D. de Leeuw, PhD, is Professor of Survey Methodology in the Department of Methodology and Statistics at Utrecht University.
Brady T. West, PhD, is Research Associate Professor in the Survey Research Center at the University of Michigan-Ann Arbor.