Political Analysts Debate Value of Polls
Those never-ending numbers that seem to drive campaigns - a letter from Washington
WASHINGTON — READING about the latest opinion poll is like eating Chinese food: It's enjoyable, but an hour later you're hungry. So says veteran television reporter Marvin Kalb, one of a growing cadre of journalists, academics, and pollsters who are questioning the place of opinion polls in political reporting.
"I love them, but they do little to inform public opinion," says Thomas Mann, Brookings Institution scholar and co-editor of the new book "Media Polls in American Politics."
The problem is that the proliferation of polls has led the media to focus on the "horse race" at the expense of other content.
Varying poll results - the product of legitimate differences in timing, question phrasing and order, and sampling techniques - confuse and frustrate the public. They also can create a bandwagon effect and influence the way the press covers a candidate.
Polls can devastate a candidate's fund-raising ability, as 1988 Democratic nominee Michael Dukakis found out. On the eve of the second presidential debate, says Mr. Kalb, ABC-TV anchor Peter Jennings's reporting of the network's 50-state poll created the impression that Dukakis had no chance to catch George Bush.
After that, "the money simply stopped coming in to the Dukakis camp," Kalb says.
On the plus side, polling gives voice to the citizenry in a process that seems all too dominated by political handlers. Opinion polls also can help keep journalists honest by challenging conventional wisdom and "getting them off their bar stools," says Mr. Mann, speaking at a recent Brookings session.
Michael Kagay, director of polling for the New York Times and a contributor to the Brookings book, says that although the different results can be initially confusing, taking the average of at least five major polls on the same subject will paint a true picture of public opinion.
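Mr. Kagay's averaging idea amounts to simple arithmetic. A minimal sketch, in Python, with invented poll figures used purely for illustration (no real survey data is implied):

```python
# Sketch of the poll-averaging approach: a simple mean over
# several polls on the same question smooths out differences
# in timing, wording, and sampling between polling outfits.

def poll_average(results):
    """Return the simple mean of a list of poll percentages."""
    return sum(results) / len(results)

# Hypothetical approval percentages from five major polls:
approval = [52.0, 48.0, 50.0, 47.0, 53.0]
print(round(poll_average(approval), 1))  # prints 50.0
```

Individual polls here range over six points, but their average lands in the middle, illustrating why Kagay suggests using at least five polls rather than any single one.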
In the end, the book's authors ask, are polls a boon or a menace to democracy?
The answer is both. But to move the balance closer to "boon" than "menace," the authors make several suggestions:
* Conduct fewer but better polls. Polling outfits should take larger samples, make more callbacks, ask more questions about fewer subjects, and analyze the results carefully.
* Cut back on "quickie" polls. Overnight surveys gauging reaction to a particular event are especially subject to error.
* Stop labeling call-in responses to 800 and 900 numbers as "public opinion polls." These are not scientific and, if advertised as such, can be manipulated.
* Use polls as an element of an article, not the focus. "Findings from other polls, aggregate data, historical patterns, and intensive interviews with citizens should be used to interpret poll results and put them in context," the authors write.
* Avoid hyping poll results. Often too much is made of polling results where opinion is unstable.
* More analysis. Particularly at state and local levels, pollsters should consult outside analysts to check for distortions.
* More disclosure of sampling and methodology. Response rates are an important element in determining the reliability of a poll; the authors recommend establishment of a minimum standard for publication.
As with any prescription for change that comes out of a think tank, participants wonder aloud whether anyone will listen. Indeed, Kalb, now director of Harvard's Shorenstein Barone Center for media studies, notes that a post-'88-election conference produced a recommendation for fewer polls - and that the number of polls continues to rise.