How can they know what I think?

If you were running for president, you'd be campaigning hard. You'd fly from city to city to speak at rallies, telling cheering audiences all the good things you've done and plan to do - and all the bad things your opponent has done and plans to do. Your campaign would be running ads in newspapers, on billboards, and on TV.

Whew! All this campaigning is tiring and expensive. To save money and energy, you need to visit the right places. Why go where you don't have a chance? Why go where you're the big favorite? Go where you might tip the balance in your favor - or keep it from tipping away from you. But how do you know where to go? You need to know what people think. You need polls.

At one time, presidential candidates had armies of helpers who gathered voters' opinions. They were called "precinct captains." Each political party tried to have a precinct captain in every neighborhood. In my neighborhood, Flossie was precinct captain. She usually visited our house in October, just a few weeks before the election. She'd ask how we were doing, share some news, and ask if we were voting for her party's candidate.

She and other precinct captains would report to the local political party office. The national party would eventually study all the compiled reports to see where a candidate was strong and where he was weak.

Flossie was one of the last precinct captains. She did it for almost 40 years. During that time political parties figured out a faster, simpler, and cheaper way: polls. National polls ask a small number of people - sometimes as few as 400, often no more than 1,000 - to find out what voters across the country are thinking.

Political polls became common in the 1930s. The 1920 presidential election (Warren Harding vs. James Cox) posed a new problem for people who wanted to gauge public opinion. Can you guess why? The 19th Amendment to the United States Constitution had just been ratified. The amendment gave women the right to vote. Suddenly, the nation had twice as many potential voters as before.

Many experts thought that women would vote for the candidate favored by their husbands or fathers. They were right (for a while), but no one knew that at the time. People were still figuring out how to do polls.

Early poll-takers started with what they knew. They knew how many voters were registered. They knew how many were Democrats, Republicans, and Independents. So if half a city's voters were Republicans, pollsters made sure that half the people they asked were Republicans, too, and so on. This simple method worked surprisingly well from 1920 to 1936: polls predicted presidential outcomes fairly accurately. But those elections were not very close. Harding won easily in 1920. So did Calvin Coolidge in 1924, Herbert Hoover in 1928, and Franklin Roosevelt in 1932.

In 1936, pollsters hit a bump. A poll by Literary Digest magazine of its readers predicted that Republican Alf Landon would defeat Franklin Roosevelt. Instead, Roosevelt was reelected by a landslide. What had gone wrong?

The poll had failed to identify some groups of people who had changed the way that they voted. By 1936, the "gender gap" was beginning to appear. Women weren't voting the same way men did. Also, many African-Americans, once devoted to the party of Abraham Lincoln (the Republican party), began to support the Democrats. The Literary Digest poll didn't discover this.

An up-and-coming pollster named George Gallup claimed that the Literary Digest poll failed because it was not "scientific." The pollsters did not take enough care in deciding whom to ask. Gallup had correctly predicted FDR's victory using a poll that relied on two techniques still used today: stratification and randomization.

When you stratify, you divide the country's population into groups of people who tend to hold similar opinions. If the population is 52 percent women, for example, you try to be sure that 52 percent of the people asked are women. If 45 percent are registered Republicans, your poll "sample" should be 45 percent Republican. You can also create strata for workers, farmers, young voters, old voters, urban residents, and suburban "soccer moms."
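If you know a little Python, you can sketch the quota idea in a few lines. The percentages below are made-up examples for illustration, not real census figures:

```python
# Sketch: turning population proportions into quota sizes for a
# stratified poll sample. The shares here are illustrative only.
population_shares = {
    "women": 0.52,
    "men": 0.48,
}
sample_size = 1000

# Each stratum's quota is its share of the population times the
# total number of people we plan to ask.
quotas = {group: round(share * sample_size)
          for group, share in population_shares.items()}
print(quotas)  # {'women': 520, 'men': 480}
```

The same arithmetic works for any set of strata: Republicans, farmers, young voters, and so on, as long as the shares add up to 100 percent.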

If you choose the right strata and represent them accurately in a poll, you get a better "snapshot" of public opinion. For example, older voters tend to be more conservative (and vote more regularly) than younger voters. The 1936 Literary Digest poll tried to stratify, but asked too many rich conservatives and not enough liberals and African-Americans.

National polls have had other embarrassing moments. In 1948, a gleeful Harry Truman held up a copy of a newspaper with a banner headline declaring that he had lost to Thomas Dewey. The truth was quite the reverse.

In 1980, pollsters missed another important group when Ronald Reagan defeated Jimmy Carter. Polls said the race would be close. It wasn't. The polls missed the fact that many urban Democrats opposed to abortion were planning to vote Republican.

To try to avoid such surprises, pollsters choose participants randomly. If they need to question 50 men from California, for example, they choose them at random. That gives them a chance of uncovering new trends among voters.

Every poll is imprecise. The fine print usually says the results are accurate to within "plus or minus 3 percent." That means a poll result of 60 percent "for" and 40 percent "against" might really be 63 for and 37 against, or 57 and 43 - or anywhere in between. When the gap between two numbers is smaller than that margin of error, pollsters say there's no clear winner.
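Where does that "plus or minus 3 percent" come from? A common rule of thumb from statistics (an assumption on our part; the article doesn't give the formula) says the margin of error at a 95 percent confidence level is at most about 0.98 divided by the square root of the sample size:

```python
import math

# Rule-of-thumb margin of error for a simple random sample of n
# people, at roughly 95% confidence. This formula is a standard
# statistics approximation, not something taken from the article.
def margin_of_error(n):
    return 0.98 / math.sqrt(n)

print(round(margin_of_error(1000) * 100, 1))  # 3.1 (percent)
print(round(margin_of_error(400) * 100, 1))   # 4.9 (percent)
```

Notice that a poll of 1,000 people lands right around the familiar 3 percent, while a poll of only 400 people is fuzzier, at about 5 percent.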

Here's another way to look at polls: Picture a large bowl of red and green jelly beans mixed together. You know the total number of jelly beans, but you want to know how many there are of each color.

Statistical theory states that a random sample will give you a good representation of the whole, most of the time. To get a random sample, stick your hand in the bowl and grab a small handful of jelly beans. The proportion of red and green jelly beans in your sample should be close to the ratio of red and green jellybeans in the bowl.

Sometimes you may hit a pocket of red jelly beans or get a few too many green ones. If you repeat this process a few times, though, the results will average out to be quite close to the true proportion of red to green jelly beans.
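You can try the jelly-bean experiment on a computer instead of with a real bowl. This sketch builds a bowl that is 60 percent red, grabs ten "handfuls" of 50 beans, and averages the results:

```python
import random

random.seed(1)  # fixed seed so the experiment repeats the same way

# A bowl with a known mix: 600 red and 400 green jelly beans (60% red).
bowl = ["red"] * 600 + ["green"] * 400

# Grab ten handfuls of 50 beans each and record the red fraction.
fractions = []
for _ in range(10):
    handful = random.sample(bowl, 50)
    fractions.append(handful.count("red") / 50)

average = sum(fractions) / len(fractions)
print(round(average, 2))  # lands close to the true 0.60
```

Any single handful may be off by quite a bit, but the average of several handfuls settles near the true proportion - exactly the "averaging out" the paragraph above describes.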

Alas, voters are not jelly beans, and taking an accurate poll requires something more than sticking your hand in a bowl. This is where stratification, randomization, margin of error, probability, and other complications may come in.

But remember: The best, most accurate national political "poll" of all will be taken next week, on Election Day.

Now it's your turn: Conduct your own scientific poll

Suppose your middle school is holding an election for student body president. How might you predict the winner? You could design a poll. Assuming your middle school has three grades, a poll of 60 students should do it - if you ask the right ones.

How many student 'neighborhoods' are represented in your school? A pollster might ask that question this way: How many 'strata' should we sample to get an accurate poll?

Let's keep our poll simple. We'll stratify by grade and gender. Students in each grade probably think differently about the election. They also tend to hang out with people in their own grade. So let's plan to interview 20 kids in each grade - sixth, seventh, eighth.

Girls and boys probably think differently about the election, too. They also tend to move in separate social circles. Let's further 'stratify' our sample so that we talk to 10 boys and 10 girls in each grade (assuming the grades are about half boys, half girls).
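Here's the whole sampling plan in a few lines of Python - three grades times two genders makes six strata of 10 students each:

```python
# Sketch of the school-poll sampling plan described above:
# 3 grades x 2 genders = 6 strata, 10 interviews per stratum.
grades = ["sixth", "seventh", "eighth"]
genders = ["girls", "boys"]

plan = {(grade, gender): 10 for grade in grades for gender in genders}
total = sum(plan.values())
print(len(plan), total)  # 6 strata, 60 interviews
```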

What other strata should we sample? Do athletes think differently from musicians? 'A' students from 'C' students? Does it matter where students live?

Some of these strata may be important and some may not. Rather than try to figure out which is which, we will 'randomize' our poll instead. Asking people at random will help us capture the opinions of some of the strata we didn't identify. We'll choose 10 people at random from each stratum.

One way to do this is to put the names of all the sixth-grade girls (for example) on slips of paper. Put the slips in a bowl, stir them up, and draw out 10 names. Do the same for the sixth-grade boys, and so on. The names you draw are your interview subjects.
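The slips-in-a-bowl trick is exactly what a computer's random sampler does. Here's a sketch using a made-up roster (the names are placeholders; a real poll would use the school's class lists):

```python
import random

# Hypothetical roster of sixth-grade girls. In a real poll, these
# names would come from the school's class lists.
sixth_grade_girls = [f"Student {i}" for i in range(1, 31)]

# random.sample does the bowl-and-slips draw for us: it picks
# 10 distinct names, each equally likely to be chosen.
interviewees = random.sample(sixth_grade_girls, 10)
print(interviewees)
```

Repeat the same call for the sixth-grade boys, the seventh-grade girls, and so on, until all six strata have their 10 names.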

It takes time to talk to 60 people. You'll need help. But make sure that you and your fellow poll-takers ask the same questions, in the same order. That's what real poll-takers do. Questions must be worded carefully. 'Whom do you support for student-body president?' is your first question.

Pollsters would ask some follow-up questions as well, to cross-check the response to the first question. A follow-up question on a national political poll asking voters whom they support for president might be: 'Do you think the American economy is being managed well?' Supporters of George W. Bush would tend to agree with that, while Senator Kerry's backers probably wouldn't. The answers to the follow-up questions should confirm the answer to the primary question. If they don't, something may be wrong with your poll.
