Opinion polls: Is the press fumbling the data?

TALL, dignified Burns Roper, head of the Roper Organization, can give off an air of professorial reserve. But get him talking on the right subject and his relish for a good story takes over.

That's what happens when he tells about the shock he had while watching the network news several years ago. John Chancellor was announcing the results of the latest NBC poll. Sixty-eight percent of the American public, said the anchor man, had been found to approve of the SALT II arms limitation treaty.

Mr. Roper's jaw dropped. He had just looked at his own survey finding on SALT II that afternoon. It had shown only 32 percent in favor of the treaty. Why the huge disparity? The samples - the numbers of people polled and their geographic diversity - were similar, and the surveys were conducted at about the same time. But what was different, Roper discovered later, was the wording of the questions.
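A bit of arithmetic, not part of Roper's account, shows why similar samples rule out chance as the explanation. A minimal sketch, assuming a hypothetical sample of 1,500 respondents (typical for national polls of the period; the article does not give the actual sizes):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a poll proportion p
    with a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample size; the article gives no exact figures.
n = 1500
for p in (0.68, 0.32):
    print(f"p = {p:.0%}: +/- {100 * margin_of_error(p, n):.1f} points")
```

With samples that size, each figure carries a margin of error of only about plus or minus 2.4 points, so a 36-point gap is far too large to be sampling noise; the wording of the questions is the natural suspect.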

"They never mentioned SALT II," Roper says. "They never mentioned the words." As he recalls it, the network simply asked something like, "Would you favor or oppose an agreement between the US and Russia which would lessen the danger of nuclear war?" His own question, on the other hand, specified SALT II, explained how many missiles it would eliminate, spelled out some of the controversy, and then asked whether the Senate should approve the treaty. Given this background information, relatively few respondents said "yes."

That experience illustrates Roper's greatest concern about TV-news polls: "the fact that they almost never indicate the questions they've asked." That information, experts agree, is needed to assess the trustworthiness of a survey.

Networks and TV stations may argue that they cannot devote valuable air time to such material. But if it comes down to a choice between giving audiences the actual wording of the questions or the standard sampling-error statement, says Roper, choose the former. In his view (and that of most other experts), wording is by far the more important factor in the accuracy of a survey.

The growth of media-run polls has also led some to wonder whether in-house polling is nudging some news organizations out of their accustomed news-gathering role into a news-generating one.

The possibility that emphasis on polls may interfere with the press's traditional role of digging out stories is "an ever-present risk," says Michael Traugott, a political scientist with the University of Michigan's Institute for Social Research, who helps conduct polls for the Detroit News. "Sometimes, when it comes to politics, one is tempted to ask if what is newsworthy becomes a reflection of what's being asked in the surveys," he says.

Political commentator Richard Scammon, on the other hand, sees "the state of public opinion" as "a legitimate news item for any news organization." Don't forget the crucial part played by the polls and poll-related news stories in recording the public's growing disenchantment with Richard Nixon during the Watergate period, he admonishes.

What concerns Albert H. Cantril, director of the Washington-based Bureau of Social Science Research Inc., is the extent to which political reporters rely on poll data to shape their stories. He says the job of the reporter is to unearth a story, not to be "driven by poll data."

Another issue: Do journalistic biases sometimes creep into a reporter's or editor's decision on how to interpret survey data?

The tendency is to go for the striking finding that makes good copy, although the whole body of findings may paint a different picture, says Michael Wheeler, who teaches at New England School of Law and at Harvard University. In 1975 he wrote a biting critique of the polls titled with Benjamin Disraeli's phrase, "Lies, damned lies, and statistics." The press is often "preoccupied with small blips which could be merely statistical accidents," he asserts. He has "compassion for the pollsters," he says, since what is repeated in the press may not really reflect their findings.
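Wheeler's "blips" can be made concrete with the standard two-proportion test. A minimal sketch with made-up numbers, not drawn from any poll he cites:

```python
import math

def shift_is_significant(p1, n1, p2, n2, z=1.96):
    """Two-proportion z-test: is the movement between two polls
    larger than combined sampling error at the 95% level?"""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p2 - p1) > z * se

# Made-up example: a 2-point "blip" between two 1,000-person polls.
print(shift_is_significant(0.50, 1000, 0.52, 1000))  # False: within noise
```

A 2-point swing between two 1,000-person polls falls well inside the roughly 4.4-point threshold sampling error alone would produce, so a story treating such a movement as a trend is reporting noise.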

A case in point is offered by Everett Carll Ladd Jr., head of the University of Connecticut's Roper Center, a clearinghouse for public opinion research findings. After the 1982 congressional elections, Mr. Ladd recalls, some prominent political columnists began speculating about a decline in Ronald Reagan's popularity. Some even went so far as to talk about a "phase-out" of the Reagan administration. But the mass of survey data Ladd was then analyzing, coming from a number of polls, indicated to him that the President's popularity was holding up well in view of the faltering economy.

"It seemed to me that the data clearly supported the opposite conclusion from what most of the poll stories in the media were saying," Ladd says. "The big story was that, in spite of provocation, the administration's popularity bent but didn't break. But that was not the story I was reading. The wish was father to the perception, and particular pieces of data were used for that purpose."

So how knowledgeable are journalists when it comes to public opinion surveys and the interpretation of their findings?

"I think the press is getting better, although it's a slow process," says Mr. Traugott. He and others see recent articles discussing how the sequence of questions can shape responses as signs of a growing awareness among newspeople of the complexities of the task.

What's needed, says Ladd, are "mechanisms" to help nurture this awareness. The Roper Center, with backing from the Exxon Educational Foundation, will soon set up one such mechanism: a program of summer training workshops for journalists who have to deal with public opinion data. The first one will be held this summer at Williams College in western Massachusetts.
