Online panel research : a data quality perspective /

Bibliographic Details
Other Authors: Callegaro, Mario (Editor)
Format: Electronic eBook
Language: English
Published: Chichester, West Sussex ; Hoboken, NJ : Wiley, 2014.
Series: Wiley series in survey methodology.
Subjects: Panel analysis ; Internet research ; Marketing research -- Methodology ; Social sciences -- Methodology
Online Access: CONNECT

MARC

LEADER 00000nam a22000001i 4500
001 in00006112741
006 m |o d |
007 cr |n|||||||||
008 240102s2014 enk ob 001 0 eng d
005 20240124153944.7
020 |a 9781118763520 
020 |a 1118763521 (electronic bk.) 
020 |z 9781119941774 
020 |z 1119941776 
024 8 |a 40023769895 
035 |a (NhCcYBP)e80fa363d91f485b9a695405a0a7a4bb9781118763520 
035 |a 1wileyeba9781118763520 
040 |a NhCcYBP  |b eng  |e rda  |c NhCcYBP 
050 4 |a H61.26  |b .O55 2014 
082 0 4 |a 001.4/33  |2 23 
245 0 0 |a Online panel research :  |b a data quality perspective /  |c edited by Mario Callegaro, Survey Research Scientist, Quantitative Marketing Team, Google UK, Reg Baker, Senior Consultant, Market Strategies International, USA, Jelke Bethlehem, Anja S. Goritz, Professor of Occupational and Consumer Psychology, University of Freiburg, Germany, Jon A. Krosnick, Professor of Political Science, Communication and Psychology, Stanford University, USA, Paul J. Lavrakas, Independent Research Psychologist/Research Methodologist, USA. 
264 1 |a Chichester, West Sussex ;  |a Hoboken, NJ :  |b Wiley,  |c 2014. 
264 4 |c ©2014 
300 |a 1 online resource (xxix, 477 pages) 
336 |a text  |b txt  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
490 1 |a Wiley series in survey methodology 
500 |a Wiley EBA  |5 TMurS 
504 |a Includes bibliographical references and index. 
520 |a Over the last decade, there has been a major global shift in survey and market research towards data collection using samples selected from online panels. Yet despite their widespread use, remarkably little is known about the quality of the resulting data. This edited volume is one of the first attempts to carefully examine the quality of the survey data being generated by online samples. It describes some of the best empirically based research on what has become a very important yet controversial method of collecting data. Online Panel Research presents 19 chapters of previously unpublished work addressing a wide range of topics, including coverage bias, nonresponse, measurement error, adjustment techniques, the relationship between nonresponse and measurement error, the impact of smartphone adoption on data collection, Internet rating panels, and operational issues. The datasets used to prepare the analyses reported in the chapters are available on the accompanying website: www.wiley.com/go/online_panel. Covers controversial topics such as professional respondents, speeders, and respondent validation. Addresses cutting-edge topics such as the challenge of smartphone survey completion, software to manage online panels, and Internet and mobile ratings panels. Discusses and provides examples of comparison studies between online panels and other surveys or benchmarks. Describes adjustment techniques to improve sample representativeness. Addresses coverage, nonresponse, attrition, and the relationship between nonresponse and measurement error, with examples using data from the United States and Europe. Addresses practical questions such as motivations for joining an online panel and best practices for managing communications with panelists. Presents a meta-analysis of determinants of response quantity. Features contributions from 50 international authors with a wide variety of backgrounds and expertise. This book will be an invaluable resource for opinion and market researchers, academic researchers relying on web-based data collection, governmental researchers, statisticians, psychologists, sociologists, and other research practitioners. 
505 0 0 |g 1.  |t Online panel research: History, concepts, applications and a look at the future /  |r Paul J. Lavrakas --  |g 1.1.  |t Introduction --  |g 1.2.  |t Internet penetration and online panels --  |g 1.3.  |t Definitions and terminology --  |g 1.3.1.  |t Types of online panels --  |g 1.3.2.  |t Panel composition --  |g 1.4.  |t A brief history of online panels --  |g 1.4.1.  |t Early days of online panels --  |g 1.4.2.  |t Consolidation of online panels --  |g 1.4.3.  |t River sampling --  |g 1.5.  |t Development and maintenance of online panels --  |g 1.5.1.  |t Recruiting --  |g 1.5.2.  |t Nonprobability panels --  |g 1.5.3.  |t Probability-based panels --  |g 1.5.4.  |t Invitation-only panels --  |g 1.5.5.  |t Joining the panel --  |g 1.5.6.  |t Profile stage --  |g 1.5.7.  |t Incentives --  |g 1.5.8.  |t Panel attrition, maintenance, and the concept of active panel membership --  |g 1.5.9.  |t Sampling for specific studies --  |g 1.5.10.  |t Adjustments to improve representativeness --  |g 1.6.  |t Types of studies for which online panels are used --  |g 1.7.  |t Industry standards, professional associations' guidelines and advisory groups --  |g 1.8.  |t Data quality issues --  |g 1.9.  |t Looking ahead to the future of online panels --  |t References --  |g 2.  |t A critical review of studies investigating the quality of data obtained with online panels based on probability and nonprobability samples /  |r Jon A. Krosnick --  |g 2.1.  |t Introduction --  |g 2.2.  |t Taxonomy of comparison studies --  |g 2.3.  |t Accuracy metrics --  |g 2.4.  |t Large-scale experiments on point estimates --  |g 2.4.1.  |t The NOPVO project --  |g 2.4.2.  |t The ARF study --  |g 2.4.3.  |t The Burke study --  |g 2.4.4.  |t The MRIA study --  |g 2.4.5.  |t The Stanford studies --  |g 2.4.6.  |t Summary of the largest-scale experiments --  |g 2.4.7.  |t The Canadian Newspaper Audience Databank (NADbank) experience --  |g 2.4.8.  |t Conclusions for the largest comparison studies on point estimates --  |g 2.5.  |t Weighting adjustments --  |g 2.6.  |t Predictive relationship studies --  |g 2.6.1.  |t The Harris-Interactive, Knowledge Networks study --  |g 2.6.2.  |t The BES study --  |g 2.6.3.  |t The ANES study --  |g 2.6.4.  |t The US Census study --  |g 2.7.  |t Experiment replicability studies --  |g 2.7.1.  |t Theoretical issues in the replication of experiments across sample types --  |g 2.7.2.  |t Evidence and future research needed on the replication of experiments in probability and nonprobability samples --  |g 2.8.  |t The special case of pre-election polls --  |g 2.9.  |t Completion rates and accuracy --  |g 2.10.  |t Multiple panel membership --  |g 2.10.1.  |t Effects of multiple panel membership on survey estimates and data quality --  |g 2.10.2.  |t Effects of number of surveys completed on survey estimates and survey quality --  |g 2.11.  |t Online panel studies when the offline population is less of a concern --  |g 2.12.  |t Life of an online panel member --  |g 2.13.  |t Summary and conclusion --  |t References --  |g pt. I  |t COVERAGE --  |t Introduction to Part I /  |r Jon A. Krosnick --  |g 3.  |t Assessing representativeness of a probability-based online panel in Germany /  |r Wolfgang Bandilla --  |g 3.1.  |t Probability-based online panels --  |g 3.2.  |t Description of the GESIS Online Panel Pilot --  |g 3.2.1.  |t Goals and general information --  |g 3.2.2.  |t Telephone recruitment --  |g 3.2.3.  |t Online interviewing --  |g 3.3.  
|t Assessing recruitment of the Online Panel Pilot --  |g 3.4.  |t Assessing data quality: Comparison with external data --  |g 3.4.1.  |t Description of the benchmark surveys --  |g 3.4.2.  |t Measures and method of analyses --  |g 3.5.  |t Results --  |g 3.5.1.  |t Demographic variables --  |g 3.5.2.  |t Attitudinal variables --  |g 3.5.3.  |t Comparison of the GESIS Online Panel Pilot to ALLBUS with post-stratification --  |g 3.5.4.  |t Additional analysis: Regression --  |g 3.5.5.  |t Replication with all observations with missing values dropped --  |g 3.6.  |t Discussion and conclusion --  |t References --  |t Appendix 3.A --  |g 4.  |t Online panels and validity: Representativeness and attrition in the Finnish eOpinion panel /  |r Kim Strandberg --  |g 4.1.  |t Introduction --  |g 4.2.  |t Online panels: Overview of methodological considerations --  |g 4.3.  |t Design and research questions --  |g 4.4.  |t Data and methods --  |g 4.4.1.  |t Sampling --  |g 4.4.2.  |t E-Panel data collection --  |g 4.5.  |t Findings --  |g 4.5.1.  |t Socio-demographics --  |g 4.5.2.  |t Attitudes and behavior --  |g 4.5.3.  |t Use of the Internet and media --  |g 4.6.  |t Conclusion --  |t References --  |g 5.  |t The untold story of multi-mode (online and mail) consumer panels: From optimal recruitment to retention and attrition /  |r Olena Kaminska --  |g 5.1.  |t Introduction --  |g 5.2.  |t Literature review --  |g 5.3.  |t Methods --  |g 5.3.1.  |t Gallup Panel recruitment experiment --  |g 5.3.2.  |t Panel survey mode assignment --  |g 5.3.3.  |t Covariate measures used in this study --  |g 5.3.4.  |t Sample composition --  |g 5.4.  |t Results --  |g 5.4.1.  |t Incidence of panel dropouts --  |g 5.4.2.  |t Attrition rates --  |g 5.4.3.  |t Survival analysis: Kaplan-Meier survival curves and Cox regression models for attrition --  |g 5.4.4.  |t Respondent attrition vs. data attrition: Cox regression model with shared frailty --  |g 5.5.  |t Discussion and conclusion --  |t References --  |g pt. II  |t NONRESPONSE --  |t Introduction to Part II /  |r Paul J. Lavrakas --  |g 6.  |t Nonresponse and attrition in a probability-based online panel for the general population /  |r Annette Scherpenzeel --  |g 6.1.  |t Introduction --  |g 6.2.  |t Attrition in online panels versus offline panels --  |g 6.3.  |t The LISS panel --  |g 6.3.1.  |t Initial nonresponse --  |g 6.4.  |t Attrition modeling and results --  |g 6.5.  |t Comparison of attrition and nonresponse bias --  |g 6.6.  |t Discussion and conclusion --  |t References --  |g 7.  |t Determinants of the starting rate and the completion rate in online panel studies /  |r Anja S. Goritz --  |g 7.1.  |t Introduction --  |g 7.2.  |t Dependent variables --  |g 7.3.  |t Independent variables --  |g 7.4.  |t Hypotheses --  |g 7.5.  |t Method --  |g 7.6.  |t Results --  |g 7.6.1.  |t Descriptives --  |g 7.6.2.  |t Starting rate --  |g 7.6.3.  |t Completion rate --  |g 7.7.  |t Discussion and conclusion --  |g 7.7.1.  |t Recommendations --  |g 7.7.2.  |t Limitations --  |t References --  |g 8.  |t Motives for joining nonprobability online panels and their association with survey participation behavior /  |r Wolfgang Mayerhofer --  |g 8.1.  |t Introduction --  |g 8.2.  |t Motives for survey participation and panel enrollment --  |g 8.2.1.  |t Previous research on online panel enrollment --  |g 8.2.2.  |t Reasons for not joining online panels --  |g 8.2.3.  |t The role of monetary motives in online panel enrollment --  |g 8.3.  |t Present study --  |g 8.3.1.  
|t Sample --  |g 8.3.2.  |t Questionnaire --  |g 8.3.3.  |t Data on past panel behavior --  |g 8.3.4.  |t Analysis plan --  |g 8.4.  |t Results --  |g 8.4.1.  |t Motives for joining the online panel --  |g 8.4.2.  |t Materialism --  |g 8.4.3.  |t Predicting survey participation behavior --  |g 8.5.  |t Conclusion --  |g 8.5.1.  |t Money as a leitmotif --  |g 8.5.2.  |t Limitations and future work --  |t References --  |t Appendix 8.A --  |g 9.  |t Informing panel members about study results: Effects of traditional and innovative forms of feedback on participation /  |r Vera Toepoel --  |g 9.1.  |t Introduction --  |g 9.2.  |t Background --  |g 9.2.1.  |t Survey participation --  |g 9.2.2.  |t Methods for increasing participation --  |g 9.2.3.  |t Nonresponse bias and tailored design --  |g 9.3.  |t Method --  |g 9.3.1.  |t Sample --  |g 9.3.2.  |t Experimental design --  |g 9.4.  |t Results --  |g 9.4.1.  |t Effects of information on response --  |g 9.4.2.  |t The perfect panel member versus the sleeper --  |g 9.4.3.  |t Information and nonresponse bias --  |g 9.4.4.  |t Evaluation of the materials --  |g 9.5.  |t Discussion and conclusion --  |t References --  |t Appendix 9.A --  |g pt. III  |t MEASUREMENT ERROR --  |t Introduction to Part III /  |r Mario Callegaro --  |g 10.  |t Professional respondents in nonprobability online panels /  |r McKenzie Young --  |g 10.1.  |t Introduction --  |g 10.2.  |t Background --  |g 10.3.  |t Professional respondents and data quality --  |g 10.4.  |t Approaches to handling professional respondents --  |g 10.5.  |t Research hypotheses --  |g 10.6.  |t Data and methods --  |g 10.7.  |t Results --  |g 10.8.  |t Satisficing behavior --  |g 10.9.  |t Discussion --  |t References --  |t Appendix 10.A --  |g 11.  |t The impact of speeding on data quality in nonprobability and freshly recruited probability-based online panels /  |r Harald Schoen --  |g 11.1.  |t Introduction --  |g 11.2.  |t Theoretical framework --  |g 11.3.  |t Data and methodology --  |g 11.4.  |t Response time as indicator of data quality --  |g 11.5.  |t How to measure speeding? --  |g 11.6.  |t Does speeding matter? --  |g 11.7.  |t Conclusion --  |t References -- 
505 0 0 |g pt. IV  |t WEIGHTING ADJUSTMENTS --  |t Introduction to Part IV /  |r Jelke Bethlehem and Mario Callegaro --  |g 12.  |t Improving web survey quality: Potentials and constraints of propensity score adjustments /  |r Silvia Biffignandi --  |g 12.1.  |t Introduction --  |g 12.2.  |t Survey quality and sources of error in nonprobability web surveys --  |g 12.3.  |t Data, bias description, and PSA --  |g 12.3.1.  |t Data --  |g 12.3.2.  |t Distribution comparison of core variables --  |g 12.3.3.  |t Propensity score adjustment and weight specification --  |g 12.4.  |t Results --  |g 12.4.1.  |t Applying PSA: The comparison of wages --  |g 12.4.2.  |t Applying PSA: The comparison of socio-demographic and wage-related covariates --  |g 12.5.  |t Potentials and constraints of PSA to improve nonprobability web survey quality: Conclusion --  |t References --  |t Appendix 12.A --  |g 13.  |t Estimating the effects of nonresponses in online panels through imputation /  |r Weiyu Zhang --  |g 13.1.  |t Introduction --  |g 13.2.  |t Method --  |g 13.2.1.  |t The Dataset --  |g 13.2.2.  |t Imputation analyses --  |g 13.3.  |t Measurements --  |g 13.3.1.  |t Demographics --  |g 13.3.2.  |t Response propensity --  |g 13.3.3.  |t Opinion items --  |g 13.4.  |t Findings --  |g 13.5.  |t Discussion and conclusion --  |t Acknowledgement --  |t References --  |g pt. V  |t NONRESPONSE AND MEASUREMENT ERROR --  |t Introduction to Part V /  |r Jon A. Krosnick --  |g 14.  |t The relationship between nonresponse strategies and measurement error: Comparing online panel surveys to traditional surveys /  |r Justin Wedeking --  |g 14.1.  |t Introduction --  |g 14.2.  |t Previous research and theoretical overview. 
505 0 0 |g Note continued:  |g 14.3.  |t Does interview mode moderate the relationship between nonresponse strategies and data quality? --  |g 14.4.  |t Data --  |g 14.4.1.  |t Study 1: 2002 GfK/Knowledge Networks study --  |g 14.4.2.  |t Study 2: 2012 GfK/KN study --  |g 14.4.3.  |t Study 3: American National Election Studies --  |g 14.5.  |t Measures --  |g 14.5.1.  |t Studies 1 and 2 dependent variables: Measures of satisficing --  |g 14.5.2.  |t Study 3 dependent variable: Measure of satisficing --  |g 14.5.3.  |t Studies 1 and 2 independent variables: Nonresponse strategies --  |g 14.5.4.  |t Study 3 independent variable --  |g 14.6.  |t Results --  |g 14.6.1.  |t Internet mode --  |g 14.6.2.  |t Internet vs. telephone --  |g 14.6.3.  |t Internet vs. face-to-face --  |g 14.7.  |t Discussion and conclusion --  |t References --  |g 15.  |t Nonresponse and measurement error in an online panel: Does additional effort to recruit reluctant respondents result in poorer quality data? /  |r Patrick Sturgis --  |g 15.1.  |t Introduction --  |g 15.2.  |t Understanding the relation between nonresponse and measurement error --  |g 15.3.  |t Response propensity and measurement error in panel surveys --  |g 15.4.  |t The present study --  |g 15.5.  |t Data --  |g 15.6.  |t Analytical strategy --  |g 15.6.1.  |t Measures and indicators of response quality --  |g 15.6.2.  |t Taking shortcuts --  |g 15.6.3.  |t Response effects in attitudinal variables --  |g 15.7.  |t Results --  |g 15.7.1.  |t The relation between recruitment efforts and panel cooperation --  |g 15.7.2.  |t The relation between panel cooperation and response quality --  |g 15.7.3.  |t Common causes of attrition propensity and response quality --  |g 15.7.4.  |t Panel conditioning, cooperation and response propensity --  |g 15.8.  |t Discussion and conclusion --  |t References --  |g pt. VI  |t SPECIAL DOMAINS --  |t Introduction to Part VI /  |r Anja S. Goritz --  |g 16.  |t An empirical test of the impact of smartphones on panel-based online data collection /  |r Frank Drewes --  |g 16.1.  |t Introduction --  |g 16.2.  |t Method --  |g 16.3.  |t Results --  |g 16.3.1.  |t Study 1: Observation of survey access --  |g 16.3.2.  |t Study 2: Monitoring of mobile survey access --  |g 16.3.3.  |t Study 3: Smartphone-related usage behavior and attitudes --  |g 16.3.4.  |t Study 4: Experimental test of the impact of survey participation via smartphone on the quality of survey results --  |g 16.4.  |t Discussion and conclusion --  |t References --  |g 17.  |t Internet and mobile ratings panels /  |r Mario Callegaro --  |g 17.1.  |t Introduction --  |g 17.2.  |t History and development of Internet ratings panels --  |g 17.3.  |t Recruitment and panel cooperation --  |g 17.3.1.  |t Probability sampling for building a new online Internet measurement panel --  |g 17.3.2.  |t Nonprobability sampling for a new online Internet measurement panel --  |g 17.3.3.  |t Creating a new panel from an existing Internet measurement panel --  |g 17.3.4.  |t Screening for eligibility, privacy and confidentiality agreements, gaining cooperation, and installing the measurement system --  |g 17.3.5.  |t Motivating cooperation --  |g 17.4.  |t Compliance and panel attrition --  |g 17.5.  |t Measurement issues --  |g 17.5.1.  |t Coverage of Internet access points --  |g 17.5.2.  |t Confounding who is measured --  |g 17.6.  |t Long tail and panel size --  |g 17.7.  |t Accuracy and validation studies --  |g 17.8.  |t Statistical adjustment and modeling --  |g 17.9.  
|t Representative research --  |g 17.10.  |t The future of Internet audience measurement --  |t References --  |g pt. VII  |t OPERATIONAL ISSUES IN ONLINE PANELS --  |t Introduction to Part VII /  |r Anja S. Goritz --  |g 18.  |t Online panel software /  |r Tim Macer --  |g 18.1.  |t Introduction --  |g 18.2.  |t What does online panel software do? --  |g 18.3.  |t Survey of software providers --  |g 18.4.  |t A typology of panel research software --  |g 18.4.1.  |t Standalone panel software --  |g 18.4.2.  |t Integrated panel research software --  |g 18.4.3.  |t Online research community software --  |g 18.5.  |t Support for the different panel software typologies --  |g 18.5.1.  |t Mobile research --  |g 18.6.  |t The panel database --  |g 18.6.1.  |t Deployment models --  |g 18.6.2.  |t Database architecture --  |g 18.6.3.  |t Database limitations --  |g 18.6.4.  |t Software deployment and data protection --  |g 18.7.  |t Panel recruitment and profile data --  |g 18.7.1.  |t Panel recruitment methods --  |g 18.7.2.  |t Double opt-in --  |g 18.7.3.  |t Verification --  |g 18.7.4.  |t Profile data capture --  |g 18.8.  |t Panel administration --  |g 18.8.1.  |t Member administration and opt-out requests --  |g 18.8.2.  |t Incentive management --  |g 18.9.  |t Member portal --  |g 18.9.1.  |t Custom portal page --  |g 18.9.2.  |t Profile updating --  |g 18.9.3.  |t Mobile apps --  |g 18.9.4.  |t Panel and community research tools --  |g 18.10.  |t Sample administration --  |g 18.11.  |t Data capture, data linkage and interoperability --  |g 18.11.1.  |t Updating the panel history: Response data and survey paradata --  |g 18.11.2.  |t Email bounce-backs --  |g 18.11.3.  |t Panel enrichment --  |g 18.11.4.  |t Interoperability --  |g 18.12.  |t Diagnostics and active panel management --  |g 18.12.1.  |t Data required for monitoring panel health --  |g 18.12.2.  |t Tools required for monitoring panel health --  |g 18.13.  |t Conclusion and further work --  |g 18.13.1.  |t Recent developments: Communities and mobiles --  |g 18.13.2.  |t Demands for interoperability and data exchange --  |g 18.13.3.  |t Panel health --  |g 18.13.4.  |t Respondent quality --  |t References --  |g 19.  |t Validating respondents' identity in online samples: The impact of efforts to eliminate fraudulent respondents /  |r Jacob Tucker --  |g 19.1.  |t Introduction --  |g 19.2.  |t The 2011 study --  |g 19.3.  |t The 2012 study --  |g 19.4.  |t Results --  |g 19.4.1.  |t Outcomes from the validation process --  |g 19.4.2.  |t The impact of excluded respondents --  |g 19.5.  |t Discussion --  |g 19.6.  |t Conclusion --  |t References. 
588 |a Description based on print version record. 
650 0 |a Panel analysis. 
650 0 |a Internet research. 
650 0 |a Marketing research  |x Methodology. 
650 0 |a Social sciences  |x Methodology. 
700 1 |a Callegaro, Mario,  |e editor. 
730 0 |a WILEYEBA 
776 0 8 |i Online version:  |t Online panel research.  |d Chichester, West Sussex ; Hoboken, NJ : John Wiley & Sons Inc., 2014  |z 9781118763506  |w (DLC) 2014000677 
776 0 8 |c Original  |z 9781119941774  |z 1119941776  |w (DLC) 2013048411 
830 0 |a Wiley series in survey methodology. 
856 4 0 |u https://ezproxy.mtsu.edu/login?url=https://onlinelibrary.wiley.com/book/10.1002/9781118763520  |z CONNECT  |3 Wiley  |t 0 
949 |a ho0 
975 |p Wiley UBCM Online Book All Titles thru 2023 
976 |a 6006612 
998 |a wi  |d z 
999 f f |s 93a209c6-07bd-4028-9115-8e00c58bdcab  |i 93a209c6-07bd-4028-9115-8e00c58bdcab  |t 0 
952 f f |a Middle Tennessee State University  |b Main  |c James E. Walker Library  |d Electronic Resources  |t 0  |e H61.26 .O55 2014  |h Library of Congress classification