REPORT OF THE AAPOR TASK FORCE ON NON-PROBABILITY SAMPLING

Reg Baker, Market Strategies International and Task Force Co-Chair
J. Michael Brick, Westat and Task Force Co-Chair
Nancy A.
Gile, University of Massachusetts Amherst
Roger Tourangeau, Westat

June 2013

ACKNOWLEDGEMENTS

A number of individuals beyond the members of the Task Force made important contributions to this report by providing review and feedback throughout. They include: Robert Boruch, The Wharton School of the University of Pennsylvania; Mario Callegaro, Google; Mitch Eggers, Global Market Insite; David P.

Survey researchers routinely conduct studies that use different methods of data collection and inference. Over about the last 60 years, most have used a probability-sampling framework. In the fall of 2011 the AAPOR Executive Council appointed a task force "to examine the conditions under which various survey designs that do not use probability samples might still be useful for making inferences to a larger population."

A key feature of statistical inference is that it requires some theoretical basis and an explicit set of assumptions for making the estimates and for judging their accuracy. We consider methods for collecting data and producing estimates without a theoretical basis to be inappropriate for making statistical inferences.

Surveys at the lower and upper ends of the continuum are relatively easy to recognize by the effort associated with controlling the sample and making post hoc adjustments.
These models typically incorporate important auxiliary variables to improve model fit and the usefulness of the resulting estimates.
Once the model is formulated, standard statistical estimation procedures such as likelihood-based or Bayesian techniques are then used to make inferences about the parameters being estimated.
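As a concrete illustration of this model-based approach, the sketch below is a minimal, hypothetical example (not taken from the report): the outcome is assumed Bernoulli within levels of a single auxiliary variable whose population distribution is known from an external source, the within-cell maximum-likelihood estimates are the cell means, and the population estimate weights the cells by their known shares. All names and numbers are illustrative.

```python
# Minimal sketch of model-based estimation from a non-probability sample.
# Assumption (illustrative): Y is Bernoulli within cells of one auxiliary
# variable X (e.g., age group), and the population shares of X are known.
from collections import defaultdict

def model_based_estimate(sample, pop_shares):
    """sample: list of (x, y) pairs with y in {0, 1};
    pop_shares: {x: known population share of cell x}.
    The MLE of P(Y=1 | X=x) under the Bernoulli model is the cell mean;
    the population estimate weights the cell means by the known shares."""
    ones = defaultdict(int)
    counts = defaultdict(int)
    for x, y in sample:
        counts[x] += 1
        ones[x] += y
    return sum(share * ones[x] / counts[x] for x, share in pop_shares.items())

# Toy opt-in sample that over-represents young respondents.
sample = ([('young', 1)] * 60 + [('young', 0)] * 20 +
          [('old', 1)] * 5 + [('old', 0)] * 15)
pop_shares = {'young': 0.4, 'old': 0.6}  # known, e.g. from a census

print(round(model_based_estimate(sample, pop_shares), 3))  # -> 0.45
```

Note that the unadjusted sample mean here is 0.65; the model-based estimate of 0.45 differs because the auxiliary variable corrects the over-representation of one cell, which is exactly the role the auxiliary variables play in these models.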
One approach is sample matching, which has been used in observational studies for many years and has recently been advocated for surveys that use opt-in panels.
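The basic mechanics of sample matching can be sketched as follows. This is a hypothetical toy example, not the report's procedure: each unit in a small reference sample with known auxiliary variables is matched, without replacement, to the closest available opt-in panelist on those same variables, using squared Euclidean distance.

```python
# Illustrative sketch of sample matching (data and variables hypothetical):
# each reference unit is paired with its nearest available opt-in panelist
# on the auxiliary covariates, without replacement.

def match_sample(reference, panel):
    """reference, panel: lists of covariate tuples (here: (age, income)).
    Returns one index into `panel` per reference unit, matched greedily
    without replacement by squared Euclidean distance."""
    available = set(range(len(panel)))
    matched = []
    for r in reference:
        best = min(available,
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(panel[i], r)))
        available.remove(best)
        matched.append(best)
    return matched

reference = [(25, 30_000), (60, 55_000)]
panel = [(58, 54_000), (24, 31_000), (40, 42_000)]
print(match_sample(reference, panel))  # pairs young with young, old with old
```

In practice the covariates would be standardized (or a distance such as Mahalanobis used) before matching; in this raw-scale sketch the income variable dominates the distance, which is acceptable only because the toy data are ordered the same way on both covariates.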