[1] AAPOR (2014). Mobile Technologies for Conducting, Augmenting and Potentially Replacing Surveys. Report of the AAPOR Task Force on Emerging Technologies in Public Opinion Research, April 25, 2014.[Link]
[2] AAPOR (2016). AAPOR Address-based Sampling, January 2016.[Link]
[3] AAPOR (2016). AAPOR Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys (Revised 2016).[Link]
[4] AAPOR Task Force Report.[Link]
[5] Alvarez, R. M. and VanBeselaere, C., Web-Based Surveys. Developed at and hosted by The College of Information Sciences and Technology.[Link]
[6] Antoun, C., Zhang, C., Conrad, F. G., and Schober, M. F. (2015). Comparisons of Online Recruitment Strategies for Convenience Samples: Craigslist, Google AdWords, Facebook, and Amazon Mechanical Turk. Field Methods, 28(3), 231-246.
[7] Anderson, J. and Rainie, L. (2014). The Web at 25 in the U.S. - Digital Life in 2025: Experts predict the Internet will become 'like electricity' - less visible, yet more deeply embedded in people's lives for good and ill, March 11, 2014.[Link]
[8] Bailenson, J. N., Iyengar, S., Yee, N., and Collins, N. A. (2008). Facial Similarity between Voters and Candidates Causes Influence. Public Opinion Quarterly, 72, 935-961.
[9] Baker, R., Brick, J. M., Bates, N. A., Battaglia, M., Couper, M. P., Dever, J. A., Gile, K. J., and Tourangeau, R. (2013). Summary Report of the AAPOR Task Force on Non-Probability Sampling. Journal of Survey Statistics and Methodology, 1(2), 90-143.
[10] Baker, R. P. (1998). The CASIC Future, In Couper, M. P. et al. (1998). Computer Assisted Survey Information Collection (Chapter 29), Wiley Series in Probability and Statistics, John Wiley & Sons.
[11] Bandilla, W., Couper, M. P., and Kaczmirek, L. (2012). The Mode of Invitation for Web Surveys. Survey Practice, 5(3).[Link]
[12] Bandilla, W., Couper, M. P., and Kaczmirek, L. (2014). The Effectiveness of Mailed Invitations for Web Surveys and the Representativeness of Mixed-Mode versus Internet-only Samples, Survey Practice, 7(4).
[13] Bates, N. (2009). Cell Phone-Only Households: A Good Target for Internet Surveys? Survey Practice, 2(7).[Link]
[14] Batinic, B., Reips, U.-D., and Bosnjak, M. (eds.) (2002). Online Social Sciences, Hogrefe.[Link]
[15] Behr, D., Bandilla, W., Kaczmirek, L., and Braun, M. (2014). Cognitive Probes in Web Surveys: On the Effect of Different Text Box Size and Probing Exposure on Response Quality. Social Science Computer Review, 32(4), 524-533.[Link]
[16] Bethlehem, J. (2016). Solving the Nonresponse Problem with Sample Matching? Social Science Computer Review, 34(1), 59-77.[Link]
[17] Bethlehem, J. and Biffignandi, S. (2011). Handbook of Web Surveys, John Wiley & Sons.[Link]
[18] Bianchi, A. and Biffignandi, S. (2013). Web Panel Representativeness. In Giudici, P., Ingrassia, S., and Vichi, M. (eds.), Statistical Models for Data Analysis (pp. 37-44), Studies in Classification, Data Analysis, and Knowledge Organization. Springer Verlag.[Link]
[19] Biemer, P. P., de Leeuw, E. D., Eckman, S., Edwards, B., Kreuter, F., Lyberg, L. E., Tucker, N. C., and West, B. T. (eds.) (2017). Total Survey Error in Practice, Wiley Series in Survey Methodology, John Wiley & Sons.
[20] Biemer, P. P. and Lyberg, L. E. (2003). Introduction to Survey Quality, Wiley Series in Survey Methodology, John Wiley & Sons.
[21] Blom, A. G., Gathmann, C., and Krieger, U. (2015). Setting Up an Online Panel Representative of the General Population: The German Internet Panel. Field Methods, 27(4), 391-408.[Link]
[22] Blyth, B. (1998). Current and Future Technology Utilization in European Market Research, In Couper, M. P. et al. (eds.), Computer Assisted Survey Information Collection (Chapter 28), Wiley Series in Probability and Statistics, John Wiley & Sons.
[23] Bowman, J. (2007). Hands up, Who's on an Access Panel? ESOMAR Research World Article, published by ESOMAR, April 2007.
[24] Brewer, K. (2013). Three Controversies in the History of Survey Sampling. Survey Methodology, 39(2), 249-262.
[25] Burns, N. (2006). The Michigan, then National, then American National Election Studies. [Link]
[26] Callegaro, M. (2010). Do You Know Which Device Your Respondent Has Used to Take Your Online Survey? Survey Practice, 3(6).[Link]
[27] Callegaro, M. (2014). Recent Books and Journals in Public Opinion, Survey Methods, and Survey Statistics, Survey Practice, 7(2). [Link]
[28] Callegaro, M. (2015). Using Paradata to Better Interpret Online Experiments and Non-experiments--Examples with Web Surveys. Workshop on Innovations in Online Experiments, Nuffield College, Oxford March 13, 2015.[Link]
[29] Callegaro, M., and DiSogra, C. (2008). Computing Response Metrics for Online Panels. Public Opinion Quarterly, 72, 1008-1032.[Link]
[30] Callegaro, M., Baker, R., Bethlehem, J., Göritz, A. S., Krosnick, J. A., and Lavrakas, P. J. (eds.) (2014). Online Panel Research: A Data Quality Perspective, Wiley Series in Survey Methodology, John Wiley & Sons.
[31] Callegaro, M., Manfreda, K. L., and Vehovar, V. (2015). Web Survey Methodology, SAGE.
[32] Caspar, R. and Couper, M. P. (1997). Using Keystroke Files to Assess Respondent Difficulties with an ACASI Instrument. In Proceedings of the Annual Meeting of the American Statistical Association, Section on Survey Research Methods, pp. 239-244.
[33] Clayton, R. L. and Werking, G. S. (1998). Business Surveys of the Future: The World Wide Web as a Data Collection Methodology. In Couper, M. P. et al. (eds.), Computer Assisted Survey Information Collection (Chapter 27), Wiley Series in Probability and Statistics, John Wiley & Sons.
[34] Cochran, W. G. (1953). Sampling Techniques (first edition), John Wiley & Sons.
[35] Conrad, F. G. and Schober, M. F. (1999). Conversational Interviewing and Data Quality. In Proceedings of the Federal Committee on Statistical Methodology Research Conference, Office of Survey Methods Research, Bureau of Labor Statistics.[Link]
[36] Couper, M. P. (1998), Measuring Survey Quality in a CASIC Environment. In Proceedings of the Survey Research Methods Section of the American Statistical Association, pp.41-49.[Link]
[37] Couper, M. P. (2000). Usability Evaluation of Computer-assisted Survey Instruments. Social Science Computer Review, 18(4), 384-396.
[38] Couper, M. P. (2001). Web Survey Research: Challenges and Opportunities. In Proceedings of the Annual Meeting of the American Statistical Association, Section on Survey Research Methods, August 5-9, 2001.[Link]
[39] Couper, M. P. (2005). Technology Trends in Survey Data Collection. Social Science Computer Review, 23(4), 486-501.[Link]
[40] Couper, M. P. (2011). The Future of Modes of Data Collection. Public Opinion Quarterly, 75 (5), 889-908.[Link]
[41] Couper, M. P. (2013). Is the Sky Falling? New Technology, Changing Media and the Future of Surveys. Survey Research Methods, 7(3), 145-156.[Link]
[42] Couper, M. P. (2014). Introduction to Web Survey Paradata. WebDataNet 2nd Training School: Paradata, Alexandroupolis, Greece, 1-3 October 2014.
[43] Couper, M. P. (2015). The Role of the Web in National Health Surveys. 2015 National Conference on Health Statistics, National Center for Health Statistics (presentation slides).[Link]
[44] Couper, M. P. (2017). Birth and Diffusion of the Concept of Paradata. Advances in Social Research, No. 18, March 2017.
[45] Couper, M. P., Antoun, C., and Mavletova, A. (2017). Mobile Web Surveys: A Total Survey Error Perspective. In Biemer, P., de Leeuw, E., Eckman, S., Edwards, B., Kreuter, F., Lyberg, L. E., Tucker, N. C., and West, B. T. (eds.), Total Survey Error in Practice (pp. 133-154), John Wiley & Sons.
[46] Couper, M. P., Ofstedal, M. B., and Lee, S. (2013). Encouraging Record Use for Financial Asset Questions in a Web Survey. Journal of Survey Statistics and Methodology, 1(2), 171-182.
[47] Couper, M. P., Baker, R. P., Bethlehem, J., Clark, C. Z., Martin, J., Nicholls II, W. L., and O'Reilly, J. M. (1998). Computer Assisted Survey Information Collection, Wiley Series in Probability and Statistics, John Wiley & Sons.
[48] Couper, M. P. and Kreuter, F. (2013). Using Paradata to Explore Item Level Response Times in Surveys. J. R. Statist. Soc. A, 176, Part 1, 271-286.
[49] Couper, M. P. and Lyberg, L. (2005). The Use of Paradata in Survey Research. In Proceedings of the 55th Session of the ISI, Sydney, Australia.
[50] Couper, M. P. and Miller, P. V. (2008). Web Survey Methods: Introduction. Public Opinion Quarterly, 72, 831-835.[Link]
[51] Couper, M. P. and Peterson, G. J. (2016). Why Do Web Surveys Take Longer on Smartphones? Social Science Computer Review, 35(3), 357-377.[Link]
[52] Couper, M. P. and Rowe, B. (1996). Evaluation of a Computer-assisted Self-interview Component in a Computer-assisted Personal Interview Survey. Public Opinion Quarterly, 60, 89-105.
[53] Couper, M. P. and Singer, E. (2013). Informed Consent for Web Paradata Use. Survey Research Methods, 7(1), 57-67.[Link]
[54] Couper, M. P. and Zhang, C. (2016). Helping Respondents Provide Good Answers in Web Surveys. Survey Research Methods, 10(1), 49-64.[Link]
[55] Dahlhamer, J. M. and Simile, C. M. (2009). Subunit Nonresponse in the National Health Interview Survey (NHIS): An Exploration Using Paradata. In Proceedings of the Annual Meeting of the American Statistical Association, Section on Government Statistics, JSM 2009, pp. 262-276.
[56] Das, M. (2012). Innovation in Online Data Collection for Scientific Research: the Dutch MESS Project. Methodological Innovations Online, 7(1), 7-24.[Link]
[57] Das, M., Ester, P., and Kaczmirek, L. (eds.) (2011). Social and Behavioral Research and the Internet-Advances in Applied Methods and Research Strategies, Routledge (Taylor and Francis Group).
[58] de Leeuw, E. D. (2002). The Effect of Computer-Assisted Interviewing on Data Quality: A Review of the Evidence. In Blasius, J., Hox, J. J., de Leeuw, E. D., and Schmidt, P. (eds.), Social Science Methodology in the New Millennium (CD-ROM), Leske + Budrich.
[59] de Leeuw, E. D. (2005). To Mix or not to Mix Data Collection Modes in Surveys. Journal of Official Statistics, 21(2), 233-255.[Link]
[60] de Leeuw, E. D. (2010). Mixed-Mode Surveys and the Internet. Survey Practice, 3(6).[Link]
[61] de Leeuw, E. D. (2017). Never A Dull Moment: Mixed-Mode Surveys in Past, Present & Future. Keynote at the 2017 ESRA Conference, Lisbon (slides).[Link]
[62] de Leeuw, E. D. and Berzelak, N. (2016). Survey Mode or Survey Modes? In Wolf, C., Joye, D., Smith, T. W., and Fu, Y. (eds.), The Sage Handbook of Survey Methodology (Chapter 11). Los Angeles, SAGE.
[63] de Leeuw, E. D. and Hox, J. J. (1999). The Influence of Data Collection Method on Structural Models - A Comparison of a Mail, a Telephone, and a Face-to-Face Survey. Sociological Methods & Research, 24(4), 443-472.[Link]
[64] de Leeuw, E. D., Hox, J. J., and Dillman, D. A. (2008). Mixed-mode Surveys: When and Why. In de Leeuw, E. D., Hox, J. J., and Dillman, D. A. (eds.), International Handbook of Survey Methodology (pp. 299-316), Routledge (Taylor and Francis Group).
[65] de Leeuw, E. D., Hox, J. J., and Dillman, D. A. (eds.) (2008). International Handbook of Survey Methodology, Routledge (Taylor and Francis Group).
[66] de Leeuw, E. D., Hox, J. J., and Boevé, A. (2015). Handling Do-Not-Know Answers: Exploring New Approaches in Online and Mixed-Mode Surveys. Social Science Computer Review, 34(1), 116-132.[Link]
[67] Deming, W. E. (1950). Some Theory of Sampling, Dover Books on Mathematics.
[68] Dillman, D. A. and Bowker, D. K. (2001). The Web Questionnaire Challenge to Survey Methodologists. In Reips, U.-D. and Bosnjak, M. (eds.), Dimensions of Internet Science (pp. 159-178), Pabst Science.[Link]
[69] Dillman, D. A. and Bowker, D. K. (2002). The Web Questionnaire Challenge to Survey Methodologists. In Batinic, B., Reips, U.-D., and Bosnjak, M. (eds.), Online Social Sciences (pp. 159-178), Hogrefe & Huber.
[70] Dillman, D. A. and Christian, L. M. (2003). Survey Mode as a Source of Instability in Responses across Surveys, at the Workshop on Stability of Methods for Collecting, Analyzing and Managing Panel Data, American Academy of Arts and Sciences, Cambridge, MA, March 26-28, 2003.[Link]
[71] Dillman, D. A. and Christian, L. M. (2005). Survey Mode as a Source of Instability in Responses across Surveys. Field Methods, 17(1), 30-52.
[72] Dillman, D. A., Phelps, G., Tortora, R., Swift, K., Kohrell, J., Berck, J., and Messer, B. L. (2009). Response Rate and Measurement Differences in Mixed-Mode Surveys Using Mail, Telephone, Interactive Voice Response (IVR) and the Internet. Social Science Research, 38(1), 1-18.[Link]
[73] Dillman, D. A., Smyth, J. D., and Christian, L. M. (2014). Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method (fourth edition), John Wiley & Sons.
[74] DiSogra, C. and Callegaro, M. (2009). Computing Response Rates for Probability-Based Web Panels, at the 2009 Joint Statistical Meetings, Section on Survey Research Methods, JSM-2009.[Link]
[75] DiSogra, C. and Callegaro, M. (2016). Metrics and Design Tool for Building and Evaluating Probability-Based Online Panels. Social Science Computer Review, 34(1), 26-40.[Link]
[76] Rivers, D. (2007). Sample Matching for Web Surveys: Theory and Application. White Paper Prepared from Presentation Given at the 2007 Joint Statistical Meetings, Section on Survey Research Methods, JSM-2007, Salt Lake City, Utah, July-August.[Link]
[77] Duggan, M. (2013). Cell Phone Activities 2013, September 19, 2013.[Link]
[78] Dumičić, K., Sajko, M., and Radošević, D. (2002). Designing a Web-survey Questionnaire Using Automatic Process and a Script Language. Journal of Information and Organizational Sciences, 26(1-2), 25-41.[Link]
[79] Durrant, G. B. and D’Arrigo, J. (2013). Analyzing Interviewer Call Record Data by Using a Multilevel Discrete Time Event History Modeling Approach. J. R. Statist. Soc. A, 176, Part 1, 251-269.
[80] Eckman, S. and de Leeuw, E. D. (2017). Editorial-Special Issue on Total Survey Error (TSE). Journal of Official Statistics, 33(2), 301.[Link]
[81] Elliott, M. R. and Valliant, R. (2017). Inference for Nonprobability Samples. Statistical Science, 32(2), 249-264.
[82] Emde, M. and Fuchs, M. (2012). Exploring Animated Faces Scales in Web Surveys: Drawbacks and Prospects. Survey Practice, 5(1).
[83] ESOMAR (2011). Conducting Market and Opinion Research Using the Internet, ESOMAR World Research Codes & Guidelines.
[84] ESOMAR (2011). ESOMAR Guideline for Online Research.[Link]
[85] ESOMAR (2012). 28 Questions to Help Research Buyers of Online Samples.[Link]
[86] ESOMAR (2012). ESOMAR Guideline for Conducting Mobile Market Research.[Link]
[87] ESOMAR (2015). ESOMAR/GRBN Guideline for Online Sample Quality 2015.
[88] ESOMAR (2015). ESOMAR/GRBN Online-Research Guideline, 2015.[Link]
[89] ESOMAR (2017). ESOMAR/GRBN Guideline on Mobile Research.[Link]
[90] Everitt, B. S. (1992). The Analysis of Contingency Tables (second edition), Monographs on Statistics and Applied Probability 45, Chapman & Hall.
[91] Fielding, N. G., Lee, R. M., and Blank, G. (eds.) (2016). The SAGE Handbook of Online Research Methods (second edition), SAGE.[Link]
[92] Fienberg, S. E. (1987). The Analysis of Cross-classified Categorical Data (second edition), MIT Press.
[93] File, T., and Ryan, C. (2014). Computer and Internet Use in the United States: 2013, American Community Survey Reports, ACS-28, U.S. Census Bureau, Washington, DC, 2014.[Link]
[94] Fox, S. and Rainie, L. (2014). The Web at 25 in the U.S.―The Overall Verdict: The internet has been a plus for society and an especially good thing for individual users―, February 27, 2014.
[95] Fricker, R. D. (2008). Sampling Methods for Web and E-mail Surveys. In Fielding, N. G., Lee, R. M., and Blank, G. (eds.), The SAGE Handbook of Online Research Methods (pp. 195-217), SAGE.[Link1][Link2]
[96] Fuchs, M. and Busse, B. (2009). The Coverage Bias of Mobile Web Surveys Across European Countries. International Journal of Internet Science, 4(1), 21-33.
[97] Ganassali, S. (2008). The Influence of the Design of Web Survey Questionnaires on the Quality of Responses. Survey Research Methods, 2(1), 21-32.[Link]
[98] Gelman, A. and Carlin, J. B. (2002). Poststratification and Weighting Adjustments, In Groves, R. M. et al. (eds.), Survey Nonresponse (Chapter 19), Wiley Series in Probability and Statistics, John Wiley & Sons.
[99] Göritz, A. S. (2011). Web Panels: Replacement Technology for Market Research. In Hesse-Biber, S. N. (ed.), The Handbook of Emergent Technologies in Social Research (pp. 221-235), Oxford University Press.
[100] Greenwood, S., Perrin, A., and Duggan, M. (2016). Social Media Update 2016 - Facebook usage and engagement is on the rise, while adoption of other platforms holds steady -, November 11, 2016.[Link]
[101] Grover, R. and Vriens, M. (eds.) (2006). The Handbook of Marketing Research-Uses, Misuses, and Future Advances, SAGE.
[102] Groves, R. M. and Heeringa, S. G. (2006). Responsive Design for Household Surveys: Tools for Actively Controlling Survey Errors and Costs. J. R. Statist. Soc. A, 169, Part 3, 439-457.
[103] Groves, R. M. et al. (2008). Issues Facing the Field: Alternative Practical Measures of Representativeness of Survey Respondent Pools. Survey Practice, 1(3).[Link]
[104] Groves, R. M., Dillman, D.A., Eltinge, J.L., and Little, R.J.A. (eds.) (2002). Survey Nonresponse, Wiley Series in Probability and Statistics, John Wiley & Sons.
[105] Groves, R. M., Presser, S., Tourangeau, R., West, B. T., Couper, M. P., Singer, E., and Toppe, C. (2012). Support for the Survey Sponsor and Nonresponse Bias. Public Opinion Quarterly, 76(3), 512-524.
[106] Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., and Tourangeau, R. (2004, 2009). Survey Methodology, Wiley Series in Survey Methodology, John Wiley & Sons.
[107] Hansen, M. H., Hurwitz, W. N., and Madow, W. G. (1953). Sample Survey Methods and Theory, John Wiley & Sons.
[108] Haraldsen, G. (2004). Identifying and Reducing Response Burdens in Internet Business Surveys. Journal of Official Statistics, 20(2), 393-410.
[109] Harter, R. (2016). Why the Resurgence in Interest in Address-based Sampling? The Survey Statistician, The Newsletter of the International Association of Survey Statisticians (IASS), Newsletter No. 74, RTI International, pp.13-14.[Link]
[110] Haziza, D. and Beaumont, J.-F. (2017). Construction of Weights in Surveys: A Review. Statistical Science, 32(2), 206-226.
[111] Heerwegh, D. (2004). Use of Client Side Paradata in Web Surveys. Paper Presented at the International Symposium in Honour of Paul Lazarsfeld (Brussels, Belgium June 4-5, 2004).[Link]
[112] Heerwegh, D. (2004). Using Progress Indicators in Web Surveys. Paper Presented at the 59th AAPOR Conference.
[113] Heerwegh, D. (2011). Internet Survey Paradata. In Das, M., Ester, P., and Kaczmirek, L. (eds.), Social and Behavioral Research and the Internet -Advances in Applied Methods and Research Strategies (pp. 325-348), Routledge (Taylor and Francis Group).
[114] Heerwegh, D. (2011). The CSP Project Webpage. Technical Report October 20, 2011.
[115] Hesse-Biber, S. N. (ed.) (2011). The Handbook of Emergent Technologies in Social Research, Oxford University Press.
[116] Hillygus, D. S., Jackson, N., and Young, M. (2014). Professional Respondents in Non-probability Online Panels. In Callegaro, M., Baker, R., Bethlehem, J., Göritz, A. S., Krosnick, J. A. and Lavrakas, P. J. (eds.), Online Panel Research: A Data Quality Perspective (pp. 219-237), Wiley Series in Survey Methodology, John Wiley & Sons.[Link]
[117] Höglinger, M., Jann, B., and Diekmann, A. (2016). Sensitive Questions in Online Surveys: An Experimental Evaluation of Different Implementations of the Randomized Response Technique and the Crosswise Model, Survey Research Methods, 10(3), 171-187.[Link]
[118] Hox, J.J., de Leeuw, E., and Klausch, T. (2017). Mixed Mode Research: Issues in Design and Analysis. In Biemer, P. P., de Leeuw, E., Eckman, S., Edwards, B., Kreuter, F., Lyberg, L. E., Tucker, N. C., and West, B. T. (eds.), Total Survey Error in Practice (pp. 511-530), Wiley Series in Survey Methodology, John Wiley & Sons.
[119] Hox, J. J. and de Leeuw, E. D. (1994). A Comparison of Nonresponse in Mail, Telephone, and Face-to-Face Surveys-Applying Multilevel Modeling to Meta-Analysis. Quality & Quantity, 28(4), 329-344.[Link]
[120] Hubbard, F., Antoun, C., and Conrad, F. G. (2012). Conversational Interviewing, the Comprehension of Opinion Questions and Nonverbal Sensitivity. Paper Presented at the Annual Meeting of the American Association for Public Opinion Research.[Link]
[121] Iannacchione, V. G., McMichael, J. P., Shook-Sa, B. E., and Morton, K. B. (2012). A Proposed Hybrid Sampling Frame for the National Survey on Drug Use and Health. Prepared for the Substance Abuse and Mental Health Services Administration, Office of Applied Studies, under Contract No. 283-2004-00022, RTI/0209009.
[122] Joe, S., Tourangeau, R., Krosnick, J. A., Ackermann, A., Malka, A., DeBell, M., and Turakhia, C. (2009). Dispositions and Outcome Rates in the ‘Face-to-Face/Internet Survey Platform’ (the FFISP). Paper presented at the Annual Meeting of the American Association for Public Opinion Research, Hollywood, FL, USA.
[123] Kamoen, N., Holleman, B., and van den Bergh, H. (2013). Positive, Negative, and Bipolar Questions: The Effect of Question Polarity on Ratings of Text Readability. Survey Research Methods, 7(3), 181-189.[Link]
[124] Keeter, S. (2015). Methods can matter: Where Web surveys produce different results than phone interviews, May 14, 2015.[Link]
[125] Keeter, S. and Christian, L. (2012). A Comparison of Results from Surveys by the Pew Research Center and Google Consumer Surveys, November 7, 2012.[Link]
[126] Keeter, S., McGeeney, K., and Weisel, R. (2015). Coverage Error in Internet Surveys - Who Web-only surveys miss and how that affects results -, September 22, 2015.[Link]
[127] Keeter, S. and Weisel, R. (2015). From Telephone to the Web: The Challenge of Mode of Interview Effects in Public Opinion Polls. Pew Research Center Report, May 13, 2015.[Link]
[128] Kennedy, C., Mercer, A., Keeter, S., Hatley, N., McGeeney, K., and Gimenez, A. (2016). Evaluating Online Nonprobability Surveys: Vendor choice matters; widespread errors found for estimates based on blacks and Hispanics, May 2, 2016.[Link]
[129] Kennedy, C., Mercer, A., Keeter, S., Hatley, N., McGeeney,K., and Gimenez, A. (2016). Evaluating Online Nonprobability Surveys: Vendor choice matters; widespread errors found for estimates based on blacks and Hispanics, May 2, 2016.[Link]
[130] Khare, M. (2016). Estimated Prevalence and Characteristics of Web Users: National Health Interview Survey, in Proceedings of the Annual Meeting of the American Statistical Association, Section on Survey Research Method, pp.2014-2015.[Link]
[131] Early, K., Mankoff, J., and Fienberg, S. E. (2017). Dynamic Question Ordering in Online Surveys. Journal of Official Statistics, 33(3), 625-657.[Link]
[132] Khare, M. (2016). Estimated Prevalence and Characteristics of Web Users: National Health Interview Survey, In Proceedings of the Annual Meeting of the American Statistical Association, Section on Survey Research Method, JSM 2016, pp.2014-2015.
[133] Kreuter, F. and Casas-Cordero, C. (2010). Paradata, Working Paper No. 136, RatSWD. Working Paper Series of the Council for Social and Economic Data. [Link]
[134] Kreuter, F., Couper, M. P., and Lyberg, L. (2010). The Use of Paradata to Monitor and Manage Survey Data Collection. In Proceedings of the Annual Meeting of the American Statistical Association, Section on Survey Research Methods, JSM -2010, pp. 282-296.[Link]
[135] Kreuter, F. and Olson, K. (2013). Paradata for Nonresponse Error Investigation. In Kreuter, F. (ed.), Improving Surveys with Paradata -Analytic Uses of Process Information (pp. 13-42), Wiley Series in Survey Methodology, John Wiley & Sons.[Link]
[136] Krueger, B. S. and West, B. T. (2014). Assessing the Potential of Paradata and Other Auxiliary Data for Nonresponse Adjustments. Public Opinion Quarterly, 78, 795-831.
[137] Kruskal, W. and Mosteller, F. (1980). Representative Sampling, IV: The History of the Concept in Statistics, 1895-1939. International Statistical Review, 48(2), 169-195.
[138] Kuusela, V. (2011). Paradigms in Statistical Inference for Finite Populations―Up to the 1950s―, Research Reports 257, Statistics Finland.
[139] Laflamme, F., Maydan, M., and Miller, A. (2008). Using Paradata to Actively Manage Data Collection Survey Process. In Proceedings of the Annual Meeting of the American Statistical Association, Section on Survey Research Methods, JSM -2008, pp. 630-637.[Link]
[140] Lau, A., and Kennedy, C. (2019). When Online Survey Respondents Only ‘Select Some That Apply’ - Forced-choice questions yield more accurate data than select-all-that-apply lists -, May 9, 2019.[Link]
[141] Lavallée, P. (2014). Indirect Sampling for Hard-to-Reach Populations. In Tourangeau, R., Edwards, B., Johnson, T. P., Wolter, K. M., and Bates, N. (eds.), Hard-to-Survey Populations, Cambridge University Press.
[142] Lavrakas, P. J. (ed.) (2008). Encyclopedia of Survey Research Methods (2 Volume Set), SAGE.
[143] Lebrasseur, D., Morin, J.-P., Rodrigue, J.-F. and Taylor, J. (2010). Evaluation of the Innovations Implemented in the 2009 Canadian Census Test. In Proceedings of the American Statistical Association, Section on Survey Research Method, JSM-2010, pp. 4089-4097.[Link]
[144] Link, M. W. (2010). Address Based Sampling: What Do We Know So Far? American Statistical Association Webinar, November 2010.
[145] Link, M. W. et al. (2014). Mobile Technologies for Conducting, Augmenting and Potentially Replacing Surveys: Executive Summary of the AAPOR Task Force on Emerging Technologies in Public Opinion Research, Public Opinion Quarterly, 78, 779-787.[Link]
[146] Lipps, O. and Pekari, N. (2016). Sample Representation and Substantive Outcomes Using Web With and Without Incentives Compared to Telephone in an Election Survey. Journal of Official Statistics, 32(1), 165-186.[Link]
[147] Liu, M. (2016). Comparing Data Quality between Online Panel and Intercept Samples. Methodological Innovations, 9, 1-11.[Link]
[148] Lohr, S. L. (2010). Sampling: Design and Analysis (second edition), Brooks/Cole.
[149] Lynn, P. and Nicolaas, G. (2010). Making Good Use of Survey Paradata. Survey Practice, 3(2).[Link]
[150] Macer, T. (2003). We Seek Them Here, We Seek Them There - How Technical Innovation in Mixed Mode Survey Software is Responding to the Challenge of Finding Elusive Respondents. In Banks, R. et al. (eds.), Survey and Statistical Computing IV: The Impact of Technology on the Survey Process (Association for Survey Computing). Paper presented at the ASC 4th International Conference.[Link]
[151] Maitland, A., Casas-Cordero, C., and Kreuter, F. (2009). An Evaluation of Nonresponse Bias Using Paradata from a Health Survey. In Proceedings of the Annual Meeting of the American Statistical Association, Section on Government Statistics, JSM-2009, pp. 370-378.[Link]
[152] Malhotra, N. (2008). Completion Time and Response Order Effects in Web Surveys. Public Opinion Quarterly, 72, 914-934.
[153] Manfreda, K. L. and Vehovar, V. (2008). Internet Surveys. In de Leeuw, E. D. et al. (eds.), International Handbook of Survey Methodology (pp. 264-284), Routledge (Taylor and Francis Group).
[154] Marketing Systems Group. GENESYS: Pros and Cons of Address Based Sampling, Marketing Systems Group.[Link]
[155] Matthijsse, S. M., de Leeuw, E. D., and Hox, J. J. (2015). Internet Panels, Professional Respondents, and Data Quality. Methodology, 11(3), 81-88.
[156] Mavletova, A. and Couper, M. P. (2014). Mobile Web Survey Design: Scrolling versus Paging, SMS versus E-mail Invitations. Journal of Survey Statistics and Methodology, 2(4), 498-518.
[157] Mavletova, A. and Couper, M. P. (2013). Sensitive Topics in PC Web and Mobile Web Surveys: Is There a Difference? Survey Research Methods, 7(3), 191-205.[Link]
[158] McGeeney, K. and Yan, H. Y. (2016). Text Message Notification for Web Surveys - Sending texts to survey panel members shortens response time -, September 7, 2016.
[159] McGeeney, K. and Yan, H.Y. (2016). Text Message Notification for Web Surveys- Sending texts to survey panel members shortens response time-. Pew Research Center, September 7, 2016.[Link]
[160] Mehrabian, A. and Russell, J. A. (1974). An Approach to Environmental Psychology, MIT Press.
[161] Mercer, A., Lau, A., and Kennedy, C. (2018). For Weighting Online Opt-In Samples, What Matters Most? - The right variables make a big difference for accuracy. Complex statistical methods, not so much -. January 26, 2018.[Link]
[162] Millar, M. M. and Dillman, D. A. (2012). Encouraging Survey Response via Smartphones. Survey Practice, 5(3).[Link]
[163] Miller, T. W. (2001). Can We Trust the Data of Online Research? Marketing Research, 13(2), 26-33.[Link]
[164] Mockovak, W. and Powers, R. (2008). The Use of Paradata for Evaluating Interviewer Training and Performance. In Proceedings of the Annual Meeting of the American Statistical Association, Section on Survey Research Methods, JSM-2008, pp. 1386-1393.[Link]
[165] Mohorko, A., de Leeuw, E., and Hox, J. (2013). Internet Coverage and Coverage Bias in Europe: Developments Across Countries and Over Time. Journal of Official Statistics, 29(4), 609-622.[Link]
[166] Morgan, M., Millar, M. M., and Dillman, D. A. (2011). Improving Response to Web and Mixed-Mode Surveys. Public Opinion Quarterly, 75, 249-269.
[167] National Research Council (2013). Nonresponse in Social Science Surveys: A Research Agenda. Washington, DC: The National Academies Press.
[168] Nicolaas, G. (2011). Survey Paradata: A Review, ESRC National Centre for Research Methods Review Paper, NCRM/017. National Centre for Research Methods.[Link]
[169] Nicolaas, G., Calderwood, L., Lynn, P., and Roberts, C. (2014). Web Surveys for the General Population: How, Why and When? National Centre for Research Methods Report.[Link]
[170] Nicolaas, G., Campanelli, P., Hope, S., and Jäckle, A. (2015). Revisiting “yes/no” versus “check all that apply”: Results from a Mixed Modes Experiment. Survey Research Methods, 9(3), 189-204.[Link]
[171] OECD (2013). Glossary of Statistical Terms.[Link]
[172] Olson, K. (2013). Paradata for Nonresponse Adjustment. The Annals of the American Academy of Political and Social Science, 645(1), 142-170.[Link]
[173] Olson, K. and Parkhurst, B. (2013). Collecting Paradata for Measurement Error Evaluations. In Kreuter, F. (ed.), Improving Surveys with Paradata -Analytic Uses of Process Information (pp. 43-72), Wiley Series in Survey Methodology, John Wiley & Sons.
[174] Olson, K. and Wagner, J. (2015). A Feasibility Test of Using Smartphones to Collect GPS Information in Face-to-Face Surveys. Survey Research Methods, 9(1), 1-13.
[175] Perrin, A. and Duggan, M. (2015). Americans’ Internet Access: 2000-2015 - As internet use nears saturation for some groups, a look at patterns of adoption -, June 26, 2015.[Link]
[176] Pew Research Center (2012). Assessing the Representativeness of Public Opinion Surveys. May 15, 2012.[Link]
[177] Pew Research Center (2013). Internet Use and Home Broadband Connections: Demographics. July 24, 2012.[Link]
[178] Pew Research Center (2015). Home Broadband 2015. December 21, 2015.[Link]
[179] Poushter, J. (2016). Smartphone Ownership and Internet Usage Continues to Climb in Emerging Economies - But advanced economies still have higher rates of technology use -, February 22, 2016.[Link]
[180] Raento, M., Oulasvirta, A., and Eagle, N. (2009). Smartphones: An Emerging Tool for Social Scientists. Sociological Methods & Research, 37(3), 426-454.
[181] Rivers, D. (2007). Sampling for Web Surveys. White Paper Prepared from Presentation Given at the 2007 Joint Statistical Meetings, Salt Lake City, Utah, July-August.[Link]
[182] Roese, N. J. and Jamieson, D. W. (1993). Twenty Years of Bogus Pipeline Research: A Critical Review and Meta-Analysis. Psychological Bulletin, 114(2), 363-375.[Link]
[183] Safir, A., Black, T., and Steinbach, R. (2001). Using Paradata to Examine the Effects of Interviewer Characteristics on Survey Response and Data Quality. In Proceedings of the Annual Meeting of the American Statistical Association, Section on Survey Research Methods, JSM-2001, August 5-9, 2001.[Link]
[184] SAMHSA (Substance Abuse and Mental Health Services Administration) (2015). 2012 National Survey on Drug Use and Health, Createspace Independent.
[185] Saris, W. E. (1991). Computer-Assisted Interviewing, Sage University Paper, Quantitative Applications in the Social Sciences No. 80, SAGE.
[186] Schaefer, D. R. and Dillman, D. A. (1998). Development of a Standard E-mail Methodology: Results of an Experiment. Public Opinion Quarterly, 62, 378-397.
[187] Schonlau, M. and Couper, M. P. (2017). Options for Conducting Web Surveys, Statistical Science, 32 (2), 279-292.[Link]
[188] Schonlau, M., Fricker, R. D., and Elliott, M. N. (2002). Conducting Research Surveys via E-Mail and the Web, RAND.
[189] Schonlau, M. and Toepoel, V. (2015). Straightlining in Web Survey Panels Over Time. Survey Research Methods, 9(2), 125-137.[Link]
[190] Schonlau, M., van Soest, A., Kapteyn, A., and Couper, M. P. (2006). Selection Bias in Web Surveys and the Use of Propensity Scores. RAND Working Papers, WR-279, RAND Labor and Population.[Link]
[191] Schonlau, M., Weidmer, B., and Kapteyn, A. (2014). Recruiting an Internet Panel Using Respondent-Driven Sampling. Journal of Official Statistics, 30(2), 291-310.[Link]
[192] Schwarz, N., Grayson, C. E., and Knäuper, B. (1998). Formal Features of Rating Scales and the Interpretation of Question Meaning. Research Note, International Journal of Public Opinion Research, 10(2), 177-183.[Link]
[193] Sheatsley, P. B. and Mitofsky, W. J. (eds.) (1992). A Meeting Space: The History of the American Association for Public Opinion Research, Published by American Association for Public Opinion Research. [Link]
[194] Shook-Sa, B. E. (2014). Improving the Efficiency of Address-Based Frames with the USPS No-Stat File, Survey Practice, 7(4).[Link]
[195] Shropshire, K. O., Hawdon, J. E., and Witte, J. C. (2009). Web Survey: Design Balancing Measurement, Response, and Topical Interest. Sociological Methods & Research, 37(3), 344-370.
[196] Simmons, K., Mercer, A., Schwarzer, S., and Kennedy, C. (2016). Evaluating a New Proposal for Detecting Data Falsification in Surveys - The underlying causes of “high matches” between survey respondents -, Pew Research Center Report, February 23, 2016.[Link]
[197] Simmons, K., Mercer, A., Schwarzer, S., and Kennedy, C. (2016). Evaluating a New Proposal for Detecting Data Falsification in Surveys - The underlying causes of “high matches” between survey respondents. Statistical Journal of the IAOS, 32 (3), 327-338.[Link]
[198] Singer, E. and Couper, M. P. (2011). Ethical Considerations in Internet Surveys. In Das, M., Ester, P., and Kaczmirek, L. (eds.), Social and Behavioral Research and the Internet-Advances in Applied Methods and Research Strategies, (pp. 133-162), Routledge (Taylor and Francis Group).
[199] Sinibaldi, J., Durrant, G. B., and Kreuter, F. (2013). Evaluating the Measurement Error of Interviewer Observed Paradata. Public Opinion Quarterly, 77, 173-193.
[200] Smith, T. W. (1995). Little Things Matter: A Sampler of How Differences in Questionnaire Format Can Affect Survey Responses. In Proceedings of the American Statistical Association, Section on Survey Research Methods, pp. 1046-1051.[Link]
[201] Smith, T. W. (2011). Refining the Total Survey Error Perspective. International Journal of Public Opinion Research, 23(4), 464-484.[Link]
[202] Stange, M., Barry, A., Smyth, J., and Olson, K. (2018). Effects of Smiley Face Scales on Visual Processing of Satisfaction Questions in Web Surveys. Social Science Computer Review, 36(6), 756-766. [Link]
[203] Statistics Canada (2009). Statistics: Power from Data! Nonprobability Sampling. [Link]
[204] Stoop, I. and Wittenberg, M. (2008). Access Panel and Online Research, Panacea or Pitfall? Proceedings of the DANS Symposium, Amsterdam, October 12th, 2006, DANS-Data Archiving and Networked Services.
[205] Sue, V. M. and Ritter, L. A. (2012). Conducting Online Surveys (second edition), SAGE.
[206] Terhanian, G. and Bremer, J. (2012). A Smarter Way to Select Respondents for Surveys? International Journal of Market Research, 54(6), 751-780. [Link]
[207] Terhanian, G. and Bremer, J. (2000). Confronting the Selection-Bias and Learning Effects Problems Associated with Internet Research. Research Paper: Harris Interactive.
[208] Thalji, L., Hill, C. A., Mitchell, S., Suresh, R., Speizer, H., and Pratt, D. (2013). The General Survey System Initiative at RTI International: An Integrated System for the Collection and Management of Survey Data. Journal of Official Statistics, 29(1), 29-48.
[209] Toepoel, V. (2016). Doing Surveys Online, SAGE.
[210] Toepoel, V. and Couper, M. P. (2011). Can Verbal Instructions Counteract Visual Context Effects in Web Surveys? Public Opinion Quarterly, 75, 1-18.[Link]
[211] Toepoel, V. and Lugtig, P. (2014). What Happens if You Offer a Mobile Option to Your Web Panel? Evidence From a Probability-Based Panel of Internet Users. Social Science Computer Review, 32(4), 544-560.[Link]
[212] Toepoel, V., Vis, C., Das, M., and van Soest, A. (2009). Design of Web Questionnaires: An Information-Processing Perspective for the Effect of Response Categories. Sociological Methods & Research, 37(3), 371-392.[Link]
[213] Toninelli, D. and Revilla, M. (2016). Smartphones vs PCs: Does the Device Affect the Web Survey Experience and the Measurement Error for Sensitive Topics? A Replication of the Mavletova & Couper’s 2013 Experiment. Survey Research Methods, 10(2), 153-169.[Link]
[214] Tourangeau, R. (2017). Mixing Modes: Tradeoffs among Coverage, Nonresponse, and Measurement Error. In Biemer, P., de Leeuw, E. D., Eckman, S., Edwards, B., Kreuter, F., Lyberg, L. E., Tucker, N. C., and West, B. T. (eds.), Total Survey Error in Practice (Chapter 6), John Wiley & Sons.
[215] Tourangeau, R., Couper, M. P., and Conrad, F. G. (2013). “Up Means Good”: The Effect of Screen Position on Evaluative Ratings in Web Surveys. Public Opinion Quarterly, 77, 69-88.
[216] Tourangeau, R., Edwards, B., Johnson, T. P., Wolter, K. M., and Bates, N. (eds.) (2014). Hard-to-Survey Populations, Cambridge University Press.
[217] Tourangeau, R., Maitland, A., Rivero, G., Sun, H., Williams, D., and Yan, T. (2017). Web Surveys by Smartphone and Tablets: Effects on Survey Responses. Public Opinion Quarterly, 81, 896-929.[Link]
[218] United States Postal Service (2016). CDS User Guide: October 2016. [Link]
[219] Vannieuwenhuyze, J. T. A. (2014). On the Relative Advantage of Mixed-Mode versus Single-Mode Surveys. Survey Research Methods, 8(1), 31-42.[Link]
[220] Verma, S. K., Courtney, T. K., Lombardi, D. A., Chang, W., Huang, Y., Brennan, M. J., and Perry, M. J. (2014). Internet and Telephonic IVR Mixed-mode Survey for Longitudinal Studies: Choice, Retention, and Data Equivalency. Annals of Epidemiology, 24(1), 72-74.[Link]
[221] Villar, A., Callegaro, M., and Yang, Y. (2013). Where Am I? A Meta-Analysis of Experiments on the Effects of Progress Indicators for Web Surveys. Social Science Computer Review, 31(6), 744-762.[Link]
[222] Voogt, R. J. J. and Saris, W. E. (2005). Mixed Mode Designs: Finding the Balance Between Nonresponse Bias and Mode Effects. Journal of Official Statistics, 21(3), 367-387.[Link]
[223] Wagner, J., West, B. T., Kirgis, N., Lepkowski, J. M., Axinn, W. G., and Ndiaye, S. K. (2012). Use of Paradata in a Responsive Design Framework to Manage a Field Data Collection. Journal of Official Statistics, 28(4), 477-499.[Link]
[224] Wang, W., Rothschild, D., Goel, S., and Gelman, A. (2015). Forecasting Elections with Non-Representative Polls. International Journal of Forecasting, 31(3), 980-991.[Link]
[225] Weisberg, H. F. (2005). The Total Survey Error Approach: A Guide to the New Science of Survey Research, University of Chicago Press.
[226] West, B. T. (2011). Paradata in Survey Research. Survey Practice, 4(4).[Link]
[227] Wolf, C., Joye, D., Smith, T. W., and Fu, Y. (eds.) (2016). The Sage Handbook of Survey Methodology, SAGE.
[228] Yates, F. (1949). Sampling Methods for Censuses and Surveys, London, C. Griffin.
[229] Zhang, C. and Conrad, F. G. (2014). Speeding in Web Surveys: The Tendency to Answer Very Fast and its Association with Straightlining. Survey Research Methods, 8(2), 127-135.[Link]
[230] Zhang, C. and Conrad, F. G. (2018). Intervening to Reduce Satisficing Behaviors in Web Surveys: Evidence From Two Experiments on How It Works. Social Science Computer Review, 36(1), 57-81.[Link]