The Power of Statistics To Affect Lives - Even When They're Wrong
It has been more than a decade since researcher Lenore Weitzman first captured headlines with dramatic statistics about the economic consequences of divorce. In her book "The Divorce Revolution," Dr. Weitzman, then an associate professor of sociology at Stanford University, reported that women's standard of living declined by 73 percent in the first year after a divorce, while men's improved by 42 percent.
Reporters, talk show hosts, and book reviewers described her findings as "staggering" and "startling." So "staggering," in fact, that Weitzman's statistics have received attention in more than 100 national magazines and newspapers, nearly 350 social-science journal articles, and more than 250 law review articles. They have been cited in at least 24 state appellate and supreme court decisions and once by the United States Supreme Court. Critics of no-fault divorce have also used them as evidence of what they regard as the disastrous results of divorce reform.
But wait. Check those calculators and crunch those numbers again. On average, the post-divorce picture isn't as bleak as Weitzman claimed. After she gave her data to the Murray Research Center at Radcliffe College, another researcher, Richard Peterson, reanalyzed them using the same methods. But what a difference in results.
Dr. Peterson found that women's standard of living declined 27 percent, not 73 percent, while men's increased by 10 percent, not 42 percent. His findings, to be published in June in the American Sociological Review, are in line with other national studies on the issue.
In a written response to Peterson's forthcoming article, Weitzman states that the files she gave to the Murray Center were "seriously flawed." She concedes that "it is likely that the gender gap is less than I reported."
Peterson, a program officer at the Social Science Research Council in New York, does not minimize the significance of a 27 percent decline for women. Nor, he says, should anyone ignore the gender gap in the outcomes of divorce. But he points out that "the discussion of no-fault divorce and other legal reforms has been seriously distorted by Weitzman's inaccurately large estimates. To be effective, these reforms must be based on reliable data."
Even if Weitzman's data had been correct, a close examination of her study would have revealed warning signs. Her sample comprised only 228 individuals, all of them divorced in Los Angeles. Yet somehow they came to represent all divorced couples in the nation.
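A rough illustration, not drawn from Weitzman's study, of why a sample of 228 invites caution: even under ideal random sampling, a percentage estimated from 228 people carries a 95 percent margin of error of roughly plus or minus 6.5 percentage points, and a sample confined to one city adds bias that no margin of error captures. The sketch below uses the standard worst-case formula for a proportion.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple
    random sample of n; p=0.5 is the worst case (widest interval)."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(228)
print(f"+/- {moe * 100:.1f} percentage points")  # about +/- 6.5
```

Quadrupling the sample only halves the margin of error, which is why careful studies spend so heavily on larger, nationally representative samples.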
The long shelf life of her figures, which continue to be quoted, illustrates the power of numbers to sway attitudes - and, in this case, even to influence legal proceedings and legislation. It shows what can happen when everyone too willingly accepts whatever numbers are served up as the statistics du jour.
And how those statistics continue to multiply! Does any other country even come close to matching the American fascination with studies, surveys, and polls? From serious academic studies to marketing surveys to polls measuring happiness or job satisfaction, the national hunger for collective self-knowledge knows no limit.
So eager are we to have our opinions counted that we allow market researchers to interrupt dinner while we answer questions about everything from our preferences in automobiles to our attitudes about marriage. We cast a secret ballot on election day, then eagerly tell a pollster how we voted. Lately we even pay for 900-number calls to register our views on a particular subject. When the answers are tallied, broken down into percentages, pie charts, and graphs, we accept the results, right down to the decimal point.
Yet anyone who has ever taken part in a survey knows how hasty the process can be. A researcher lobs questions, we shoot back answers. Ask us next week and we might have different responses.
Who are we? What do we think? What do we want? The questions are valid, and studies and surveys can be useful in supplying certain kinds of answers to them. But as Weitzman's unfortunate errors show, there's value in sprinkling each new serving of statistics with a few grains of skepticism.