Editor's Blog
US President John F. Kennedy (r.) met with Soviet Deputy Premier Anastas Mikoyan (far left) at the White House on Nov. 29, 1962, one month after the Cuban missile crisis. (AP/File)
The Cuba crisis and the illusion of control
History is like archaeology. What humans live through in the present – economic struggles, political contests, national crises – drifts to the ground and slowly gets buried under the strata of more recent events. Then one day we excavate and wonder: What were our predecessors thinking?
When Graham Allison reviews October 1962 with people under the age of 50, their usual reaction is dropped-jaw amazement. How, they ask, could leaders have felt so boxed in that they seriously entertained the idea of nuclear war?
“It just seems incredible to people who didn’t live through those times,” says Dr. Allison, director of Harvard University’s Belfer Center for Science and International Affairs. “They can’t believe that there was a real chance of a war that would leave hundreds of millions of people dead.”
The Cuban missile crisis, which unfolded 50 years ago this week, was the closest the human race has come to nuclear holocaust. It is often explained as an eyeball-to-eyeball confrontation between John F. Kennedy and Nikita Khrushchev. (A good place to explore the crisis is at www.cubanmissilecrisis.org or in Allison’s book “Essence of Decision: Explaining the Cuban Missile Crisis.”) To Allison, who has spent a career examining strategic decisionmaking, the crisis was a turning point in understanding that even the sharpest minds and best teams of advisers are prone to misperception and miscalculation.
US leaders didn’t know, for instance, that Mr. Khrushchev was under pressure at home from hard-liners when he sent medium-range missiles to Cuba. The missiles were meant to be hidden, but Soviet technicians had failed to camouflage them. The United States didn’t know that tactical nuclear weapons were already on the island and that Soviet commanders were authorized to use them if US troops landed. There were dozens of other factors – from the inability of Moscow to control Cuba and its leader, Fidel Castro, to domestic political calculations in the US, where Kennedy was trying to live down the Bay of Pigs fiasco.
As Allison noted when we talked recently, the Cuban missile crisis showed that “we can unloose processes that we no longer can control.” No single leader can master all the complexities or manage all the players, especially during a fast-moving conflict – especially when ultimatums are issued. The world scared itself straight in 1962. Out of the Cuban crisis came a hot line connecting Washington and Moscow, the Nuclear Test Ban Treaty, the Non-Proliferation Treaty, several generations of strategic arms reduction treaties, and healthy skepticism about military capability, security strategy, and the trustworthiness of intelligence. We are always having to relearn those lessons (see Vietnam, Iraq, and the Soviet and American interventions in Afghanistan).
The Iranian nuclear issue, Allison says, is “the Cuban missile crisis in slow motion.” Attack or acquiesce look like the only options right now. But creativity, compromise, and face-saving might lead to other solutions. Iran, Allison says, might win the right to enrich uranium in exchange for full transparency and clear verification that it is not moving toward nuclear weapons.
History shows that we can bury ourselves by misperception and miscalculation. History also shows we can dig ourselves out.
A woman in the far-eastern port of Vladivostok, Russia, walks by a Soviet-era monument. (Vincent Yu/AP)
Election 2012: Choose a future, any future
The genre known as alternative history asks “what if?” What if the Spanish Armada had sunk the English fleet? What if the Confederacy had won the American Civil War? If JFK had not been assassinated? If Hitler had?
In the hands of a skilled strategist or storyteller, “alt history” is more than just a parlor game. It can show the far-reaching consequences of small events, making us appreciate our own time or lament what might have been. You can find alt history in everything from military analysis to thrillers like Robert Harris’s 1992 novel, “Fatherland,” to science fiction like “The Terminator,” “Back to the Future,” or the current time-bender, “Looper.” Alter one or two events in the past, the formula goes, and the present becomes a very different place.
Washington political reporter Linda Feldmann explores two distinct futures that could branch from the Nov. 6 US presidential election. (You can read them here and here.) A second term for President Obama or a first term for former Gov. Mitt Romney would start with unique advantages and face unique challenges. But then things get interesting.
International crises could suddenly rise up – bad ones, as in the 9/11 terror attacks; good ones, as in the 1989 collapse of communism – forcing a president to improvise. A president’s personal style also plays a part. As Gail Russell Chaddock notes in a companion piece (page 29), Jimmy Carter failed to establish rapport with congressional leaders and achieved little domestically, despite a Democrat-controlled House and Senate. On the other hand, inveterate cold warrior Ronald Reagan found himself face to face with a genial reformer in Mikhail Gorbachev, which led to a historic thaw in US-Soviet relations. So count on this: The next four years will look nothing like what we imagine.
Opinion polls indicate the 2012 presidential race could be as close as the 2000 race. An amusing alt-history essay in Newsweek not long ago described what might have happened if 2000 had gone the other way: A falling-out might have occurred between President Al Gore and his mavericky No. 2, Joe Lieberman, resulting in Sen. Hillary Rodham Clinton stepping in as vice president. And while we’re at it, Mr. Gore could have named Bill Clinton as secretary of State. Meanwhile, the Supreme Court vacancy left by Sandra Day O’Connor could have gone to a constitutional scholar and Illinois junior senator named Barack Obama. But for a few hanging chads, then, the 2008 race might have pitted Hillary Clinton and (get ready, what follows is an even bigger leap) running mate Bill Clinton against a resurgent George W. Bush and brother Jeb.
Sure, it’s parlor game nonsense – but only because we know what the present looks like. Decisions we make every moment – big ones like where to invest time or money, small ones like whether to return a phone call – affect the future. But we never know how. The cold war might have sped to a conclusion anyway in a second Jimmy Carter term. Spanish-ruled England might have reasserted its independence a few years after the Armada landed (perhaps while retaining the best paella recipes). JFK’s second term might have been mired in Vietnam.
The road ahead is always diverging. Way always leads on to way. It’s important to ask “what if?” at every fork. And it’s probably best to time travel with an open heart and wary eye.
John Yemma is editor of the Monitor.
Jay Hiner creates giant bubbles with a homemade wand and a nontoxic solution at Cheney Lake in east Anchorage, Alaska. (Erik Hill/The Anchorage Daily News/AP)
How poor is poor? How rich is rich?
What defines a poor person?
The US Census Bureau, federal agencies, state governments, the United Nations, and economists all set different numbers. But the poverty line is as individual as the people it defines. Circumstances vary, geography is a factor, family and community play a role, and everybody makes choices.
To be poor in Central Asia or western Africa is not the same as being poor in London or Appalachia. In some cases it is better, and in some cases it is worse. Living simply can make a person look poor to a statistician, but is that real poverty? (See Jina Moore's excellent Monitor Weekly cover story -- click here -- unpacking poverty.)
You could ask the same questions about wealth as about poverty. In Tom Wolfe’s satire on 1980s-era New York, “The Bonfire of the Vanities,” the protagonist runs through his budget and shows how a $1 million salary is not enough to support his lifestyle. By most of the world’s standards, this “master of the universe” is clearly wealthy. Outside his Manhattan cocoon, he would be rich. But he feels poor.
He suffers from a problem that money can’t solve: poverty of spirit. In other words, he is unhappy. So here’s a corollary to our cover-story question: What defines a happy person? It’s one thing to meet basic needs, another to feel comfortable, but how much money is needed to feel happy?
You probably have a common-sense view that echoes these truisms: We all need money. But money isn’t everything. Would it surprise you to learn that social scientists have actually proved those truisms true?
According to a 2008-09 study of 450,000 Americans by researchers at Princeton University, more money doesn’t just help the poor live better lives; it helps them feel better about life. “The pain of life’s misfortunes, including disease, divorce, and being alone, is exacerbated by poverty,” the authors write.
Increased income improves the conditions of the poor. But only up to a point. Above about $75,000 a year, money does not produce commensurate happiness. Chasing higher and higher income actually decreases your quality of life.
That’s because the quest for money and material comforts appears to shut off other forms of enrichment – family, friendships, hobbies, intellectual and spiritual pursuits, appreciation of nature. “The price of anything,” that guru of simplicity, Henry David Thoreau, wrote, “is the amount of life you exchange for it.”
The diminishing returns of wealth don’t just affect individuals. A 2010 report published in the Proceedings of the National Academy of Sciences found that, over the long run, happiness does not increase as a country’s overall income increases. In Greece and other economically hard-hit countries, unhappiness has soared – to the point of rioting – as income has plummeted. But examined over a period of 10 years or more, a nation’s gross domestic happiness is independent of rising income.
There’s still more happiness research detailed in a soon-to-be-published book titled “Happy Money: The Science of Spending,” which indicates that more money makes you happy only if you use it to buy yourself time or experiences or spend it on others.
So here’s the takeaway from our social scientists: Poverty is bad. Breaking people out of it is enormously important. But poverty is also a state of mind. As is affluence. More money makes people feel better, but only up to a point. Real happiness is tied to appreciation, to deeper pursuits, and to helping others.
Or you could say: Money isn’t nothing. Nor is it everything.
But you already knew that.
A mother helps her daughter read the Quran in Londonderry, N.H. (Melanie Stetson Freeman/Staff/File)
Reading the Quran in a new way
Carl Ernst has read, parsed, and puzzled over the Quran since graduate school in 1975. As in the Bible, some passages are mild, some blistering. Later ones appear to cancel out earlier ones. Which takes precedence?
Now a specialist in Islamic studies at the University of North Carolina at Chapel Hill, Professor Ernst had an epiphany when he encountered an ancient literary technique known as “ring composition.”
We read books first page to last. But before cover-to-cover reads, there were scrolls, and before scrolls, there was oral storytelling. Many older works, Ernst learned from a scholar of Hindi-language Sufi texts, were not composed in a straight-line manner. Instead, the first line of a passage would be mirrored by the last line, the second by the second to last, and so on. The key statement sat at the center of the passage.
Why would anyone compose a story that way? In oral storytelling, Ernst says, people had to memorize huge amounts of material. They used mnemonic devices. A famous one is the “memory palace,” in which a storyteller mentally walks through a palace, each room helping him recall part of the story. That could have influenced where the most important spot would be – perhaps in the palace’s center.
In early written literature, scrolls were common. The ends of a scroll roll up. The center is the sweet spot. So ring composition was natural in the prebook era. Parts of “The Iliad” and parts of the Bible (Leviticus, in particular) appear to use this structure.
A few years ago, Ernst began looking for ring composition in the Quran. “That was my eureka moment,” he says.
Take Sura 60. Verses at the beginning and end deal with Abraham’s battle with idol worshipers. But here’s the center: “Perhaps it may be possible for God to create affection between you and your enemies.” That seems to call for tolerance and mercy.
Sura 5 also contains a surprise. At its center: “For everyone, We have established a law and a way. If God had wished He would have made you a single community. But this was so He might test you regarding what He sent you. So try to be first in doing what is best.” That seems to endorse religious pluralism.
“This is not an illusion,” Ernst says. “The same words or related words appear at the beginning and end of the suras.”
Ernst’s 2011 book “How to Read the Qur’an” explores the intriguing idea that ring composition, common in Muhammad’s day, can shed light on a book revered by more than 1 billion people and at the center of one of humanity’s most troubling conflicts. Hearts and minds won’t change overnight. But the Quran may eventually be viewed very differently.
Students in traditional garb ate breakfast before performing at a celebration of India’s independence in Bangalore, India, last month. (Aijaz Rahi/AP)
Balancing food, weather, and population
Cyclical or secular? That’s the question economists, historians, climatologists, farmers, consumers – just about anyone with an interest in the future, which is more or less everyone – are trying to answer.
During bad times, the idea of cyclicality is encouraging. We can ride out hardship because prosperity is just around the corner – although we also can’t relax when things are looking up because the economy is sure to head south again.
A secular change, on the other hand, means we’ve entered a new era, which is swell if that era is prosperous and plentiful – the two-decade “great moderation” that started in 1985, for instance. But secular change can also mean we get locked into sluggishness and scarcity as far as the eye can see. That’s the worry that has accompanied the Great Recession that began in 2007 and persists in many sectors of the world economy.
The drought that has gripped the agricultural heartland of the United States, Russia, Australia, India, and other food-producing regions of the world in 2012 (see this current Monitor cover story) has a cyclical/secular dimension. If the climate has changed, drought could be the new normal, with big implications for consumers, especially in poor countries. But parched conditions could also just be a bad patch of weather similar to the great droughts of the 1930s, early 1950s, and late 1980s. Tree-ring data indicate droughts even more severe than those in the 1930s occurred in pre-Columbian North America.
If that seems cyclical, there’s still a secular dimension. The 21st-century combination of global population and global trade is unprecedented. Never before have 7 billion people lived on this planet (with 2 billion more on the way by 2050). Never before have far-flung markets been so interconnected.
If droughts merely come and go, feeding the burgeoning world population would be difficult enough. If droughts are a more permanent condition now because the climate is growing warmer, feeding the world will require the best and brightest in agriculture and resource management.
You may not recall the drought of 1988. There was plenty of other news that year – a US presidential election; the start of anticommunist revolutions in Eastern Europe; a devastating earthquake in Armenia; the explosion of Pan Am Flight 103 over Lockerbie, Scotland. But the ’88 drought at one point covered 45 percent of the US, and until hurricane Katrina it was the costliest natural disaster in US history. A study commissioned by Oxfam indicates that if an ’88-scale drought recurred in 2030, poorer countries that import corn and wheat would face a shock so severe that famine and social unrest would be the result.
A sharp rise in food prices in 2007-08 roiled populations from Mexico to Sri Lanka and helped set the stage for today’s Middle East upheaval. So far, the drought of 2012 has not caused panic, largely because governments from Egypt to India warehoused foodstuffs for just such a contingency.
Prudence is important even if Earth’s weather isn’t undergoing secular change. Rains come and go. Years of lean follow years of plenty. But feeding 9 billion people by midcentury is more than a cyclical challenge. It will require levels of innovation and cooperation never before seen in human history.
Samantha Buckley of Washington, Ill., is swept off her feet by Oscar the Robot at the Heart of Illinois Fair in Peoria. (Ron Johnson/Peoria Journal Star/AP)
You can call me "A.I."
“Internet” is a workmanlike name for the 50-year-old nervous system of packet switches, servers, and routers that spans the globe; is commonplace in homes, at work, in cars; and absorbs every moment of every smart-phone owner in line at every bus station or coffee shop.
“World Wide Web” is a friendlier term. But it’s essentially the same idea – a phrase that indicates the far-flung threads spun of communications technology. What, though, is the name for the result of all the human business that occurs on the Internet, the cumulative effect of quadrillions of bits of data being processed, and the prolific harvest of ideas, notions, relationships, associations, riffs, and nonsense that pours out of this wonder of technology? Music, like the Internet, is a technology. “The Marriage of Figaro” is what Mozart named one magnificent result.
In biology, we give intelligent creatures generic names: dust mite, for instance, or humpback whale. Those life-forms we become more familiar with get unique designations, sort of like URLs: Albert Einstein; Cousin Louie, who is likely to say anything at a family dinner; Molly, the terrier who loves to play ball.
As the Web becomes denser and faster year by year, futurists believe there will be a point where it, too, will seem to exhibit unique intelligence. Already, as Greg Lamb notes in a Monitor cover story (click here to read it), IBM’s Watson supercomputer, by tapping the infosphere at hypersonic speed and besting the reigning “Jeopardy!” game show champions, has come close to passing the “Turing test” in which it seems indistinguishable from human intelligence.
When the Mars Curiosity rover follows its advanced programming, makes last-minute adjustments on its own, and lands flawlessly on the fourth planet’s surface; when Google’s autonomous cars navigate California’s highways; or when Apple’s Siri seems to be listening to us and responding with useful information (some of the time, at least) – there’s intelligence at work that is at least as impressive as a dust mite or terrier.
Some scientists refer to the coming age as “transhuman.” More dystopian observers describe the Internet as a “global brain” or “hive mind” and imagine human-machine “cyborgs.” But why be so ominous? When humans act together, we call ourselves “the people,” as in “We, the people” or “The people have spoken.” When we think together via the Internet, that’s us, the people, too.
For now, we’re calling advanced information technology artificial intelligence, or AI.
AI has a long history rooted in high levels of logic. As computational power has exploded, the brute force of all that data processing has run rings around the elegant logic trees envisioned by AI pioneers like Marvin Minsky and John McCarthy. The term AI lost its original meaning. Technologists appropriated it. It is artificial because it is human-made. And it increasingly shows signs of intelligence.
No matter how much we rely on and learn from AI, however, it cannot answer the biggest question: Where did intelligence come from? As the biblical Job was asked, “Who has put wisdom in the inward parts? or who has given understanding to the heart?”
Technological achievements are breathtaking. But the original breath each of us took was no human accomplishment. A higher and more profound Intelligence created intelligence.
John Yemma is editor of The Monitor.
14-year-old Raihana serves lunch at her home near Kabul, Afghanistan. She and her family are forced to work at a brick factory to repay a $900 debt. (Rahmat Gul/AP)
The many forms of exploitation
Millions of people worldwide are trapped in what amounts to modern-day slavery.
Shackles and whips are seldom used, but individuals desperate for money are lured into jobs in which they work for meager wages, live under the threat of violence, are subject to sexual abuse, and constantly fear arrest or deportation.
Of all the ways that workers are exploited, the most poignant stories tend to be those of women caught up in sex trafficking. A recent report by the Britain-based Anti-Slavery International group, for instance, notes that many women leave impoverished homes “with dreams in their eyes, fear and excitement in their minds at what awaits,” but too often are duped into a life of prostitution, where they are intimidated, abused, and blackmailed. A woman who manages to break free might still be charged with prostitution or illegal immigration. If she returns home, she faces shame. In some cultures, she faces death.
Stephanie Hanes’s Monitor cover story examines this difficult subject. She makes clear that forced prostitution must be combatted for the crime that it is. But because of the age-old human fascination with sex, it is all too easy to focus on that problem and overlook the many other ways that women, children, and men – as many as 27 million, by some estimates – are exploited in global industries built on cheap, often forced, labor. Everything from the produce we eat to the mineral components in our cellphones, from the clothes we wear to the unskilled workers cleaning our buildings, may be part of this system.
Forced prostitution is one of many forms of exploitation. And even it is nuanced. Sometimes the force is overt. Sometimes it is psychological. And sometimes prostitution is a choice.
Please understand: Pointing this out is not meant to minimize the problem. The United Nations estimates that at any one time as many as 2 million sex workers are under coercion. Celebrities, activists, religious officials, politicians, and concerned citizens rightly decry the practice. But the compelling nature of the problem of sexual exploitation often diverts attention and resources from other forms of forced labor.
What can any of us do?
For one thing, we can be alert to situations in which women, men, and children are living in the shadows. We can also be more conscious of choices we make that feed the demand side of human trafficking. Cheap goods are great, but what are the conditions of the workers who made them? You can get an idea of how what we buy affects human trafficking by clicking here. There's also a link to a website that can help you understand what steps you can take to lessen that effect.
* * *
This week, our Commentary team launches a series of debates on election-year issues. On page 35 are the pros and cons of marijuana legalization. We’ll also explore affirmative action, immigration, voter ID laws, foreign policy, health care, job creation, and the federal budget. You may agree with one side or the other. But just in case you don’t see the world only in black and white, we are also offering an essay that explores a middle way.
What’s a middle way? It’s a concept much maligned in our polarized public sphere: compromise, give and take, an acknowledgment that one side doesn’t have all the answers.
A word about comments on CSMonitor.com
To our readers:
We've adjusted our policy for comments on our articles.
In the past, most articles on our website allowed readers to comment. A few -- on subjects that experience had shown were not bringing out the best in some commenters -- did not have that option.
As of today, we've shifted so that we do not take comments on our articles except when a blogger or writer specifically allows them. In other words, comments will be the exception now, not the default option.
We've made this change after extensive analysis of the comments our articles have received over the past two years. Some have been thoughtful. Some have added useful information or pointed out our mistakes. Thank you for those. But many comments have been unproductive.
You can still reach us to tell us you like or dislike an article, to give us a news tip or story suggestion, and/or to correct the record. In most of our articles, you can click the author's byline and follow the prompts to email that person. Or you can comment by going to the "Contact Us" link at the bottom of every page (be sure to refer to the article you are commenting on). And many of our staff-written articles are posted on our Facebook page, where comments are always welcome.
Like most things on the Internet, this change is not necessarily permanent. We value our readers and want CSMonitor.com to dignify their intelligence, empathy, and civic-spiritedness. We will be looking for new ways to support and engage those qualities.
-- John Yemma, Editor
P.S. -- By the way, I'm enabling comments on this post. Feel free to jump in.
Chester Barker has been giving buzz cuts at Barker’s Barber Shop in Pittsboro, N.C., for 56 years. (Ann Hermes/Staff)
Why we work -- and keep working
Like a lot of young people, I found my first indoor job at a supermarket. There were probably 35 employees at the local Handy Andy, many the same age as me (16). Pay was $1.10 an hour. I could scarcely believe that at the end of my first week I had mastered basic bagging and shelf stocking, met a new set of friends (did I mention I was 16?), and had cash in my wallet.
After several decades and a dozen or so jobs, I’d describe my work life as having been interesting and rewarding, a place where friendships have been forged and skills acquired – even if some days have dragged on and been not altogether pleasant. Does that sound familiar? For most people, work not only occupies the bulk of our days, it practically defines us.
Work is the difference we make over a lifetime. Each of us accumulates a body of work that is more than bullet points on our résumé. Our work includes what we contribute in the home, in the development of our talents, in the refinement of intellect and growth of character. Work is about improving ourselves and helping make the world a little better as a partner, parent, friend, or citizen. Plus, there’s that paycheck.
Is it any wonder then that, as Mark Trumbull’s Monitor Weekly cover story on the rise of the “silver-collar” workforce details, many people are rethinking the convention of laboring from their 20s to their 60s and then abruptly breaking off to spend the rest of their days lolling in a hammock or puttering in a garden?
Much of the rethinking is born of necessity. The values of 401(k)s and family homes have taken a huge hit in recent years. Pensions are disappearing. Social Security and Medicare are not the sure things they once were. And for people concerned about the cost of assisted living and hospital visits, a fixed income can be a risky financial plan.
Now take all of those factors and multiply by 79 million – the number of baby boomers heading into retirement over the next two decades. As Alicia Munnell, who directs the Center for Retirement Research at Boston College, puts it: “This is a new level, and we will be staying here. The United States, like many other developed countries, will have a high ratio of retirees to workers.”
This situation can be ameliorated by people staying in the labor force longer. More workers per retiree will increase national output, reduce the burden on the young, and increase the retirement security of the old. And as Professor Munnell notes, “As a human being, we know that work makes life easier. It gives structure, a social environment, friends – it has a lot of positive aspects.”
Mark explores those factors, paying close attention to what older workers see as the positive aspects of remaining on the job – the desire to contribute, to stay connected, to coach. There are a lot of “to be sures” to this trend. Older workers sometimes are seen as blocking younger ones. Older workers may not be as adaptable or energetic as they once were. Older workers often need flexible hours and special equipment.
While there are many reasons for the silver-collar rise, the underlying one is probably this: The hammock may be tempting, but deep down we don’t want to stop making a difference. That’s why we work. Plus, there’s that paycheck.
* * *
Note to readers: Our tech team recently discovered that the e-mail address editor@csmonitor.com has been malfunctioning. Several months of e-mails have vanished. It’s fixed now. If you’ve sent a note – whether a cheer or jeer – that got no response, it would be great if you could resend.
The US flag sits in a pocket at the 2008 Republican National Convention in St. Paul, Minn. (Mary Knox Merrill/The Christian Science Monitor/File)
Convention watch: The speech's the thing
If you have spent time inside modern national political conventions, you know that they are noisy, anachronistic affairs populated mostly by minor-league politicians, campaign advisers, journalists, and the same pundits you see every night on TV – all of whom mill around for most of the day and come alive for a few prime-time speeches.
So what’s the purpose of a convention these days? We asked Robert Lehrman to take on that question (you can read his observations here).
Conventions began as a reform movement, designed to break the backroom dealmaking in Washington that determined candidates for president in the early 19th century. Instead of congressional insiders deciding, delegates from around the country would gather to choose a standard-bearer. It was a radical idea at the time.
Over the next 150 years, conventions developed into sprawling, backslapping reunions with balloon drops, brass bands, straw hats, and their own forms of backroom dealmaking. By the mid-20th century, they were no longer the solution but the problem. The low point came in 1968 with public outrage at the mayhem inside and outside Chicago’s International Amphitheatre during the Democratic National Convention.
A new reform movement in the 1970s turned primaries and caucuses into the vehicle for selecting presidential candidates. So it’s worth repeating: What’s the purpose of a convention? Well, for one thing, you could see them as showcases for political speech.
Political speech is an ancient rhetorical form that can seem fusty in the age of Twitter, texting, and TED talks. But from Pericles to Winston Churchill, Henry V to Nelson Mandela, a well-crafted speech has been able to stir hearts, rally support, and even bend the curve of history. Words delivered in just the right way at just the right moment can become a kind of secular hymn that we keep humming to ourselves long after the applause has faded. Here are a few politicians’ turns of phrase that have stayed with me over the years:
“A settler pushes west and sings his song, and the song echoes out forever and fills the unknowing air. It is the American sound: It is hopeful, big-hearted, idealistic – daring, decent, and fair.” (Ronald Reagan, 1985)
“Hope in the face of difficulty, hope in the face of uncertainty, the audacity of hope: In the end, that is God’s greatest gift to us, the bedrock of this nation, a belief in things not seen, a belief that there are better days ahead.” (Barack Obama, 2004)
“America is never wholly herself unless she is engaged in high moral principle. We as a people have such a purpose today. It is to make kinder the face of the nation and gentler the face of the world.” (George H.W. Bush, 1989)
Conventions are of fading importance in the mechanics of electoral politics. But if you appreciate the power of political speech, a convention can at least serve this purpose: It is where you can hear words that ennoble us and ideals that, if only for a moment, stir our hope for the future.


