Drawing up questions and assessing data are termed 'creative' processes

March 14, 1984

RADIO comedian Fred Allen once observed, ''Public opinion pollsters are people who count the grains of sand in your birdcage and then try to tell you how much sand there is on the beach.''

Mr. Allen's target was ''sampling theory,'' the set of statistical rules and procedures that determine whom a pollster interviews. But if he was trying to nail down inaccuracy in the polls, his aim was way off, according to most experts.

Sampling theory is ''as scientific as anything around,'' says Tom Smith, survey study director at the National Opinion Research Center at the University of Chicago. The findings from a carefully constructed sample of 1,500 people should closely approximate the opinions held by the whole US population 95 out of 100 times.
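In plainer terms, the ''95 out of 100 times'' claim is the standard 95 percent confidence level. The sketch below works out the textbook margin of error for a simple random sample of 1,500; the formula, the 1.96 multiplier, and the worst-case 50/50 split are standard statistical assumptions for illustration, not figures supplied by Mr. Smith.

```python
# A rough sketch of the arithmetic behind the "95 out of 100 times" claim:
# the margin of error for a simple random sample at the conventional
# 95 percent confidence level.
import math

n = 1500   # sample size cited in the article
p = 0.5    # worst-case split (50/50) gives the widest margin
z = 1.96   # multiplier for 95 percent confidence

margin = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error for n={n}: +/- {margin * 100:.1f} percentage points")
# Prints roughly +/- 2.5 points -- which is why a well-drawn sample of
# 1,500 tracks the whole population so closely, 95 times out of 100.
```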

Louis Harris & Associates president Humphrey Taylor likes to use a homespun analogy to explain sampling theory. It's similar to tasting a spoonful of soup to see how good the whole pot is, he says. What matters is that the sample, the spoonful, if you will, be representative of the whole. To ensure such representativeness in public opinion surveys, responsible polling organizations make extensive use of census data, constantly updating to make sure their selection of respondents keeps pace with demographic shifts.

Interviewers contact people in all regions of the country. If a person cannot be reached, the firms either make a determined effort to call back (in telephone surveys) or attempt to contact someone who fits the same demographic profile. The main thing is to stick as closely as possible to the original, carefully chosen sample, usually of 1,200 to 1,500 people. This is a large part of what opinion researchers mean by ''rigor.''

But polling strategies used today cover ''a broad spectrum of methodological rigor,'' says Steve Heeringa, a statistician with the Institute for Social Research at the University of Michigan. The number of people polled may drop (with the margin of error correspondingly rising), or the effort to call back may be halfhearted.
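The trade-off Mr. Heeringa mentions between sample size and margin of error follows directly from the same formula sketched above; the figures below again assume a simple random sample, a 50/50 split, and the 95 percent confidence level.

```python
# How the margin of error grows as the sample shrinks (same textbook
# formula as above; 50/50 split and 95 percent confidence assumed).
import math

for n in (1500, 1000, 600, 300):
    margin = 1.96 * math.sqrt(0.25 / n)
    print(f"n = {n:>4}: +/- {margin * 100:.1f} points")
# n = 1500: +/- 2.5 points ... n = 300: +/- 5.7 points
```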

To qualify as ''scientific,'' Mr. Heeringa explains, a finding usually needs ''repeatability'' -- researchers using the same methods should be able to duplicate the process and arrive at pretty much the same results. This can rarely be done in the realm of public opinion. As Heeringa puts it, you're dealing with a ''transient estimate''; opinion rarely stands still.

Everett Carll Ladd Jr. of the University of Connecticut's Roper Center takes this line of reasoning a step further. ''I think the poll questions are . . . like the blind man before the elephant. You're feeling toward something, and, if you ask enough questions and take the time, you can piece it all together and come up with something very interesting.''

Pollster Burns Roper recalls a question asked before the United States entered World War II. It was supposed to assess Americans' attitudes toward helping Britain, but it was phrased two ways -- one version said ''cooperating'' with Britain, the other said ''collaborating.'' The results were nearly opposite. People liked the idea of cooperating but felt, apparently, that collaborating sounded underhanded.

Those, clearly, were loaded words. Choosing the least loaded ones demands judgment and a sensitivity to the nuances of language. The Roper, Harris, Gallup, and Yankelovich organizations, among others, employ an extensive editing procedure, as well as a pretesting of questions on 15 to 25 randomly chosen people, in a never-ending effort to fend off ambiguity.

Judy Glass, senior associate with Yankelovich, Skelly & White, calls the formulation of questions a ''creative'' process. ''It's just a matter of using one's intelligence and experience -- there's no pat way of doing this,'' she says. ''You pretest, and you bounce the questions off different components within the organization.''

One subject that has been widely discussed in the last few years is the need to design questions that get at the intensity of a respondent's feelings about a candidate or an issue. How much someone knows about a subject, for instance, can become a crucial factor.

If you don't make an effort to separate the informed from the uninformed, says George Gallup Jr., ''you simply get results based on everything weighted the same. I can give you lots of examples of instances where you get a totally different picture when you base the findings on the informed group.''
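Mr. Gallup's point amounts to cross-tabulating opinion against a knowledge-screen question. The toy example below, built on invented responses, shows how the overall figure and the informed-only figure can point in different directions.

```python
# A toy illustration (invented data) of Gallup's point: overall results
# versus results among respondents who pass a knowledge screen.
respondents = [
    # (knows the issue?, favors the proposal?)
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, False), (False, True), (False, False),
    (False, False), (False, True),
]

def percent_favor(rows):
    return 100 * sum(favors for _, favors in rows) / len(rows)

informed = [r for r in respondents if r[0]]

print(f"All respondents favoring:      {percent_favor(respondents):.0f}%")
print(f"Informed respondents favoring: {percent_favor(informed):.0f}%")
# Here the full sample splits 50/50, while the informed group runs 75
# percent in favor -- the "totally different picture" Gallup has in mind.
```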

The fact remains, though, that great numbers of people simply don't know much about many important but terribly complex issues. Should a pollster try to assess opinion where none really exists? Mr. Roper says he has toyed with the idea of setting a rule that ''if it takes more words to describe a situation than it takes to ask the question, maybe we shouldn't be doing it.''

From his vantage point at the University of Connecticut's Roper Center, Mr. Ladd examines questions and results from surveys conducted by all the major polling firms. He finds an ''enormous number of soft and spongy data points.'' But by analyzing what a variety of polls have found on a subject, as he does each month for Public Opinion magazine, he can do the necessary piecing together. ''Every time I do it a pattern emerges. It can be a colorful ambivalence; you may see two values at odds; and you can see that certain questions are triggering misleading things.''