According to Steven Shepard’s article in the National Journal:
“The days of accurate telephone polling are numbered. With more and more Americans dropping their landline service, reliable phone surveys are becoming prohibitively expensive for news organizations and nonprofit groups with tight budgets. Many news outlets are choosing to forgo the rigorous survey research they have commissioned for decades…. With consumer behavior upending traditional polling methods, Zukin of Rutgers predicts that pollsters will stop conducting dual-frame phone surveys (contacting both landline and cell-phone users) ‘within the next five years.’ ‘I think we’re going to survey people with whatever mode they wish. That means Internet- and smartphone-based surveys,’ Zukin says. Indeed, a significant number of the sessions at the pollsters’ conference focused on this kind of research, which uses a methodology known as non-probability, or nonrandom, sampling. In many cases, these surveys are completed by respondents who ‘opt in,’ clicking on a link to complete a poll or joining a Web-based panel (or downloading a smartphone application) to complete surveys—usually with monetary incentives or rewards given for doing so.”
The entire article is worth reading and captures many of the concerns I’ve heard informally from my colleagues who do this type of work. To me, it sounds like the death knell of survey research, at least in terms of being able to gather data that are representative of the entire population, unless researchers have access to significant financial resources.
“Some critics in the polling community are highly skeptical of this type of research. Yes, Internet pollsters can create and weight their panels to reflect the public at large, using demographic information to make their samples more representative, they say, but that kind of weighting can serve in some cases to further distort unreliable data.”
I think these criticisms are right. The only solution is to use web-based panels to target specific publics, rather than representative samples, and to acknowledge upfront the strengths and weaknesses of these techniques.
Weighting is not a solution. Indeed, I think weighting any type of dataset, however it is collected, is problematic. Rarely do academic journal articles report findings both with and without weights. Yet I’ve heard that for some published articles, findings that are significant with weights are either significant at less robust levels, or no longer significant at all, once the weights are removed.
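To see why weighting can cut both ways, here is a minimal sketch using entirely made-up numbers: an opt-in panel that overrepresents one demographic group, corrected by post-stratification weights. The weights pull the point estimate back toward the (assumed) population value, but unequal weights also inflate the variance, shrinking the effective sample size (Kish approximation) — which is one way a finding can look significant under one specification and not the other.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical opt-in panel: "young" respondents are overrepresented.
# Assume the true population is 50% young / 50% old, but the panel
# skews 80/20; young approve at rate 0.6, old at 0.3 (all invented).
n = 1000
young = rng.random(n) < 0.8
approve = np.where(young, rng.random(n) < 0.6, rng.random(n) < 0.3)

# Unweighted estimate reflects the skewed panel, not the population.
unweighted = approve.mean()

# Post-stratification weights rescale each group to its population share.
weights = np.where(young, 0.5 / young.mean(), 0.5 / (~young).mean())
weighted = np.average(approve, weights=weights)

# Unequal weights inflate the variance: the Kish effective sample size
# n_eff = (sum w)^2 / sum(w^2) is smaller than the raw n.
n_eff = weights.sum() ** 2 / (weights ** 2).sum()

print(f"unweighted estimate: {unweighted:.3f}")   # near 0.54 (panel mix)
print(f"weighted estimate:   {weighted:.3f}")     # near 0.45 (population mix)
print(f"raw n = {n}, effective n ~ {n_eff:.0f}")
```

With these invented shares the weighted estimate lands near the population value of 0.45, but the effective sample size drops from 1,000 to roughly 640, so standard errors grow by about 25% — exactly the kind of shift that can move a result across a significance threshold.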
UPDATE: Here’s an article by Andrew Gelman on survey weighting.