Preface xi
About the Authors xv

1 Introduction 1
1.1 What Is a PRO Measure? 1
1.2 Development of a PRO Measure 4
1.2.1 Concept Identification 4
1.2.1.1 Literature and Instrument Review 5
1.2.1.2 Patient-Centered Input 6
1.2.2 Item Development 9
1.2.3 Cognitive Interviews 11
1.2.4 Additional Considerations 12
1.2.5 Documentation of Development Process with Conceptual Framework 13
1.3 Psychometric Validation 15
1.3.1 Psychometric Evaluation Data 16
1.3.2 Psychometric Properties 17
1.3.2.1 Distributional Characteristics 19
1.3.2.2 Measurement Model Structure 20
1.3.2.3 Reliability 22
1.3.2.4 Construct Validity 23
1.3.2.5 Ability to Detect Change 24
1.3.2.6 Interpretation 25
1.4 Learning Through Simulations 26
1.5 Summary 27
References 28

2 Validation Workflow 35
2.1 Clinical Trials as a Data Source for Validation 35
2.2 Validation Workflow for Single-Item Scales 39
2.3 Confirmatory Validation Workflow for Multi-item Multi-domain Scales 43
2.4 Validation Flow for a New Multi-item Multi-domain Scale 45
2.4.1 New Scale with Known Conceptual Framework 45
2.4.2 New Scale with Unknown Measurement Structure 47
2.5 Cross-Sectional Studies and Field Tests 48
2.6 Summary 49
References 49

3 An Assessment of Classical Test Theory and Item Response Theory 51
3.1 Overview of Classical Test Theory 52
3.1.1 Basics 52
3.1.2 Illustration 52
3.1.3 Another Look 53
3.2 Person-Item Maps 55
3.2.1 CTT Revisited 55
3.2.2 Note on IRT 56
3.2.3 Implementation of Person-Item Maps 58
3.2.4 CTT-Based Scoring vs. IRT-Based Scoring 69
3.3 Summary 78
References 80

4 Reliability 83
4.1 Reproducibility/Test-Retest 85
4.1.1 Measurement Error Model 85
4.1.2 Two Time Points 87
4.1.3 Random-Effects Model for ICC Estimation 90
4.1.4 Test-Retest Reliability Assessment in the Context of Clinical Studies 95
4.1.4.1 Pre-Treatment/Pre-Baseline Data 95
4.1.4.2 Post-Baseline Data 97
4.1.4.3 Time Period Between Observations 101
4.1.5 Spearman-Brown Prophecy Formula 104
4.1.6 Domain Score Test-Retest vs. Item Test-Retest 109
4.1.7 Observer-Based and Interviewer-Based Scales 111
4.1.8 Uncovering True Relationship Between Measurements 113
4.1.8.1 Accounting for Measurement Error 113
4.1.8.2 Measurement Error Model with Two Observations 122
4.2 Cronbach's Alpha 129
4.2.1 Likert-Type Scales 129
4.2.2 Dichotomous Items 139
4.3 Summary 148
References 148

5 Construct Validity and Criterion Validity 151
5.1 Exploratory Factor Analyses 153
5.1.1 Modeling Assumptions 153
5.1.2 Exploratory Factor Analysis Implementation 159
5.1.3 Evaluating the Number of Factors and Factor Loadings 165
5.1.3.1 Scree Plot 165
5.1.3.2 Correlated Latent Factors 168
5.1.3.3 Parallel Analysis with Reduced Correlation Matrix 171
5.1.3.4 Factor Loadings 175
5.2 Confirmatory Factor Analyses 179
5.2.1 Confirmatory Factor Analysis Model 179
5.2.2 Confirmatory Factor Analysis Model Implementation 183
5.2.3 Confirmatory Factor Analysis with Domains Represented by a Single Item 192
5.2.4 Second-Order Confirmatory Factor Analysis 204
5.2.4.1 Implementation of the Model with at Least Three First-Order Latent Domains 204
5.2.4.2 Implementation of the Model with Two First-Order Latent Domains 207
5.2.5 Formative vs. Reflective Model 213
5.2.6 Bifactor Model 219
5.2.7 Confirmatory Factor Analysis Using Polychoric Correlations 227
5.3 Convergent and Discriminant Validity 231
5.3.1 Convergent and Discriminant Validity Assessment 231
5.3.2 Convergent and Discriminant Validity Evaluation in a Clinical Study 232
5.4 Known-Groups Validity 237
5.5 Criterion Validity 242
5.6 Summary 247
References 248

6 Responsiveness and Sensitivity 251
6.1 Ability to Detect Change 252
6.1.1 Definitions and Concepts 252
6.1.2 Ability to Detect Change Analysis Implementation 255
6.1.3 Correlation Analysis to Support Ability to Detect Change 263
6.1.4 Deconstructing Correlation Between Changes 268
6.2 Sensitivity to Treatment 270
6.2.1 What Is the Sensitivity to Treatment? 270
6.2.2 Concurrent Estimation of the Treatment Effects for a Multi-Domain Scale 273
6.2.2.1 Assessment of the Treatment Effect for a Single Domain 273
6.2.2.2 Assessment of the Treatment Effects for a Multi-Domain Scale 279
6.3 Summary 292
References 293

7 Interpretation of Patient-Reported Outcome Findings 295
7.1 Meaningful Within-Patient Change 296
7.1.1 Definitions and Concepts 296
7.1.2 Anchor-Based Method to Assess Meaningful Within-Patient Change 298
7.1.3 Cumulative Distribution Functions to Supplement Anchor-Based Methods 310
7.2 Clinically Important Difference 315
7.2.1 Meaningful Within-Patient Change Versus Between-Group Difference 315
7.2.2 Anchor-Based Method to Assess Clinically Important Difference 316
7.3 Responder Analyses and Cumulative Distribution Functions 320
7.3.1 Treatment Effect Model 320
7.3.2 MWPC Application: A Responder Analysis 323
7.3.3 Using CDFs for Interpretation of Results 325
7.4 Summary 331
References 332

Index 335
ANDREW G. BUSHMAKIN earned his M.S. in applied mathematics and physics from the National Research Nuclear University (formerly the Moscow Engineering Physics Institute, Moscow, Russia). He has more than 20 years of experience in mathematical modeling and data analysis. He is a director of biostatistics in the Statistical Research and Data Science Center at Pfizer Inc. He has co-authored numerous articles and presentations on topics ranging from mathematical modeling of neutron physics processes to patient-reported outcomes, as well as several monographs.

JOSEPH C. CAPPELLERI earned his M.S. in statistics from the City University of New York, Ph.D. in psychometrics from Cornell University, and M.P.H. in epidemiology from Harvard University. He is an executive director of biostatistics in the Statistical Research and Data Science Center at Pfizer Inc. As an adjunct professor, he has served on the faculties of Brown University, the University of Connecticut, and Tufts Medical Center. He has delivered numerous conference presentations and has published extensively on clinical and methodological topics. He is a fellow of the American Statistical Association and a recipient of the ISPOR Avedis Donabedian Outcomes Research Lifetime Achievement Award.