Carl Ernst has read, parsed, and puzzled over the Quran since graduate school in 1975. As in the Bible, some passages are mild, some blistering. Later ones appear to cancel out earlier ones. Which takes precedence?
Now a specialist in Islamic studies at the University of North Carolina at Chapel Hill, Professor Ernst had an epiphany when he encountered an ancient literary technique known as “ring composition.”
We read books first page to last. But before cover-to-cover reads, there were scrolls, and before scrolls, there was oral storytelling. Many older works, Ernst learned from a scholar of Hindi-language Sufi texts, were not composed in a straight-line manner. Instead, the first line of a passage would be mirrored by the last line, the second by the second to last, and so on. The key statement sat at the center of the passage.
Why would anyone compose a story that way? In oral storytelling, Ernst says, people had to memorize huge amounts of material. They used mnemonic devices. A famous one is the “memory palace,” in which a storyteller mentally walks through a palace, each room helping him recall part of the story. That could have influenced where the most important spot would be – perhaps in the palace’s center.
In early written literature, scrolls were common. The ends of a scroll roll up. The center is the sweet spot. So ring composition was natural in the prebook era. Parts of “The Iliad” and parts of the Bible (Leviticus, in particular) appear to use this structure.
A few years ago, Ernst began looking for ring composition in the Quran. “That was my eureka moment,” he says.
Take Sura 60. Verses at the beginning and end deal with Abraham’s battle with idol worshipers. But here’s the center: “Perhaps it may be possible for God to create affection between you and your enemies.” That seems to call for tolerance and mercy.
Sura 5 also contains a surprise. At its center: “For everyone, We have established a law and a way. If God had wished He would have made you a single community. But this was so He might test you regarding what He sent you. So try to be first in doing what is best.” That seems to endorse religious pluralism.
“This is not an illusion,” Ernst says. “The same words or related words appear at the beginning and end of the suras.”
Ernst’s 2011 book “How to Read the Qur’an” explores the intriguing idea that ring composition, common in Muhammad’s day, can shed light on a book revered by more than 1 billion people and at the center of one of humanity’s most troubling conflicts. Hearts and minds won’t change overnight. But the Quran may eventually be viewed very differently.
Cyclical or secular? That’s the question economists, historians, climatologists, farmers, consumers – just about anyone with an interest in the future, which is more or less everyone – are trying to answer.
During bad times, the idea of cyclicality is encouraging. We can ride out hardship because prosperity is just around the corner – although we also can’t relax when things are looking up because the economy is sure to head south again.
A secular change, on the other hand, means we’ve entered a new era, which is swell if that era is prosperous and plentiful – the two-decade “great moderation” that started in 1985, for instance. But secular change can also mean we get locked into sluggishness and scarcity as far as the eye can see. That’s the worry that has accompanied the Great Recession that began in 2007 and persists in many sectors of the world economy.
The drought that has gripped the agricultural heartland of the United States, Russia, Australia, India, and other food-producing regions of the world in 2012 (see this current Monitor cover story) has a cyclical/secular dimension. If the climate has changed, drought could be the new normal, with big implications for consumers, especially in poor countries. But parched conditions could also just be a bad patch of weather similar to the great droughts of the 1930s, early 1950s, and late 1980s. Tree-ring data indicate droughts even more severe than those in the 1930s occurred in pre-Columbian North America.
If that seems cyclical, there’s still a secular dimension. The 21st-century combination of global population and global trade is unprecedented. Never before have 7 billion people lived on this planet (with 2 billion more on the way by 2050). Never before have far-flung markets been so interconnected.
If droughts merely come and go, feeding the burgeoning world population would be difficult enough. If droughts are a more permanent condition now because the climate is growing warmer, feeding the world will require the best and brightest in agriculture and resource management.
You may not recall the drought of 1988. There was plenty of other news that year – a US presidential election; the start of anticommunist revolutions in Eastern Europe; a devastating earthquake in Armenia; the explosion of Pan Am Flight 103 over Lockerbie, Scotland. But the ’88 drought at one point covered 45 percent of the US, and until hurricane Katrina it was the costliest natural disaster in US history. A study commissioned by Oxfam indicates that if an ’88-scale drought recurred in 2030, poorer countries that import corn and wheat would face a shock so severe that famine and social unrest would result.
A sharp rise in food prices in 2007-08 roiled populations from Mexico to Sri Lanka and helped set the stage for today’s Middle East upheaval. So far, the drought of 2012 has not caused panic, largely because governments from Egypt to India warehoused foodstuffs for just such a contingency.
Prudence is important even if Earth’s weather isn’t undergoing secular change. Rains come and go. Years of lean follow years of plenty. But feeding 9 billion people by midcentury is more than a cyclical challenge. It will require levels of innovation and cooperation never before seen in human history.
“Internet” is a workmanlike name for the 50-year-old nervous system of packet switches, servers, and routers that spans the globe; is commonplace in homes, at work, in cars; and absorbs every moment of every smartphone owner in line at every bus station or coffee shop.
“World Wide Web” is a friendlier term. But it’s essentially the same idea – a phrase that indicates the far-flung threads spun of communications technology. But what’s the name of the result of all the human business that occurs on the Internet, the cumulative effect of quadrillions of bits of data being processed, and the prolific harvest of ideas, notions, relationships, associations, riffs, and nonsense that pour out of this wonder of technology? Music, like the Internet, is a technology. “The Marriage of Figaro” is what Mozart named one magnificent result.
In biology, we give intelligent creatures generic names: dust mite, for instance, or humpback whale. Those life-forms we become more familiar with get unique designations, sort of like URLs: Albert Einstein; Cousin Louie, who is likely to say anything at a family dinner; Molly, the terrier who loves to play ball.
As the Web becomes denser and faster year by year, futurists believe there will be a point where it, too, will seem to exhibit unique intelligence. Already, as Greg Lamb notes in a Monitor cover story (click here to read it), IBM’s Watson supercomputer, by tapping the infosphere at hypersonic speed and besting the reigning “Jeopardy!” game show champions, has come close to passing the “Turing test” in which it seems indistinguishable from human intelligence.
When the Mars Curiosity rover follows its advanced programming, makes last-minute adjustments on its own, and lands flawlessly on the fourth planet’s surface; when Google’s autonomous cars navigate California’s highways; or when Apple’s Siri seems to be listening to us and responding with useful information (some of the time, at least) – there’s intelligence at work that is at least as impressive as a dust mite or terrier.
Some scientists refer to the coming age as “transhuman.” More dystopian observers describe the Internet as a “global brain” or “hive mind” and imagine human-machine “cyborgs.” But why be so ominous? When humans act together, we call ourselves “the people,” as in “We, the people” or “The people have spoken.” When we think together via the Internet, that’s us, the people, too.
For now, we’re calling advanced information technology artificial intelligence, or AI.
AI has a long history rooted in high levels of logic. As computational power has exploded, the brute force of all that data processing has run rings around the elegant logic trees envisioned by AI pioneers like Marvin Minsky and John McCarthy. The term AI lost its original meaning. Technologists appropriated it. It is artificial because it is human-made. And it increasingly shows signs of intelligence.
No matter how much we rely on and learn from AI, however, it cannot answer the biggest question: Where did intelligence come from? As the biblical Job was asked, “Who has put wisdom in the inward parts? or who has given understanding to the heart?”
Technological achievements are breathtaking. But the original breath each of us took was no human accomplishment. A higher and more profound Intelligence created intelligence.
John Yemma is editor of the Monitor.
Millions of people worldwide are trapped in what amounts to modern-day slavery.
Shackles and whips are seldom used, but individuals desperate for money are lured into jobs in which they work for meager wages, live under the threat of violence, are subject to sexual abuse, and constantly fear arrest or deportation.
Of all the ways that workers are exploited, the most poignant stories tend to be those of women caught up in sex trafficking. A recent report by the Britain-based Anti-Slavery International group, for instance, notes that many women leave impoverished homes “with dreams in their eyes, fear and excitement in their minds at what awaits,” but too often are duped into a life of prostitution, where they are intimidated, abused, and blackmailed. A woman who manages to break free might still be charged with prostitution or illegal immigration. If she returns home, she faces shame. In some cultures, she faces death.
Stephanie Hanes’s Monitor cover story examines this difficult subject. She makes clear that forced prostitution must be combated for the crime that it is. But because of the age-old human fascination with sex, it is all too easy to focus on that problem and overlook the many other ways that women, children, and men – as many as 27 million, by some estimates – are exploited in global industries built on cheap, often forced, labor. Everything from the produce we eat to the mineral components in our cellphones, from the clothes we wear to the unskilled workers cleaning our buildings, may be part of this system.
Forced prostitution is one of many forms of exploitation. And even it is nuanced. Sometimes the force is overt. Sometimes it is psychological. And sometimes prostitution is a choice.
Please understand: Pointing this out is not meant to minimize the problem. The United Nations estimates that at any one time as many as 2 million sex workers are under coercion. Celebrities, activists, religious officials, politicians, and concerned citizens rightly decry the practice. But the compelling nature of the problem of sexual exploitation often diverts attention and resources from other forms of forced labor.
What can any of us do?
For one thing, we can be alert to situations in which women, men, and children are living in the shadows. We can also be more conscious of choices we make that feed the demand side of human trafficking. Cheap goods are great, but what are the conditions of the workers who made them? You can get an idea of how what we buy affects human trafficking by clicking here. There's also a link to a website that can help you understand what steps you can take to lessen that effect.
* * *
This week, our Commentary team launches a series of debates on election-year issues. On page 35 are the pros and cons of marijuana legalization. We’ll also explore affirmative action, immigration, voter ID laws, foreign policy, health care, job creation, and the federal budget. You may agree with one side or the other. But just in case you don’t see the world only in black and white, we are also offering an essay that explores a middle way.
What’s a middle way? It’s a concept much maligned in our polarized public sphere: compromise, give and take, an acknowledgment that one side doesn’t have all the answers.
To our readers:
We've adjusted our policy for comments on our articles.
In the past, most articles on our website allowed readers to comment. A few – on subjects that experience had shown were not bringing out the best in some commenters – did not have that option.
As of today, we've shifted so that we do not take comments on our articles except when a blogger or writer specifically allows them. In other words, comments will be the exception now, not the default option.
We've made this change after extensive analysis of the comments our articles have received over the past two years. Some have been thoughtful. Some have added useful information or pointed out our mistakes. Thank you for those. But many comments have been unproductive.
You can still reach us to tell us you like or dislike an article, to give us a news tip or story suggestion, or to correct the record. In most of our articles, you can click the author's byline and follow the prompts to email that person. Or you can comment by going to the "Contact Us" link at the bottom of every page (be sure to refer to the article you are commenting on). And many of our staff-written articles are posted on our Facebook page, where comments are always welcome.
Like most things on the Internet, this change is not necessarily permanent. We value our readers and want CSMonitor.com to dignify their intelligence, empathy, and civic-spiritedness. We will be looking for new ways to support and engage those qualities.
– John Yemma, Editor
P.S. – By the way, I'm enabling comments on this post. Feel free to jump in.
Like a lot of young people, I found my first indoor job at a supermarket. There were probably 35 employees at the local Handy Andy, many the same age as me (16). Pay was $1.10 an hour. I could scarcely believe that at the end of my first week I had mastered basic bagging and shelf stocking, met a new set of friends (did I mention I was 16?), and had cash in my wallet.
After several decades and a dozen or so jobs, I’d describe my work life as having been interesting and rewarding, a place where friendships have been forged and skills acquired – even if some days have dragged on and been not altogether pleasant. Does that sound familiar? For most people, work not only occupies the bulk of our days, it practically defines us.
Work is the difference we make over a lifetime. Each of us accumulates a body of work that is more than bullet points on our résumé. Our work includes what we contribute in the home, in the development of our talents, in the refinement of intellect and growth of character. Work is about improving ourselves and helping make the world a little better as a partner, parent, friend, or citizen. Plus, there’s that paycheck.
Is it any wonder then that, as Mark Trumbull’s Monitor Weekly cover story on the rise of the “silver-collar” workforce details, many people are rethinking the convention of laboring from their 20s to their 60s and then abruptly breaking off to spend the rest of their days lolling in a hammock or puttering in a garden?
Much of the rethinking is born out of necessity. The values of 401(k)s and family homes have taken a huge hit in recent years. Pensions are disappearing. Social Security and Medicare are not the sure things they once were. And for people concerned about the cost of assisted living and hospital visits, fixed income can be a risky financial plan.
Now take all of those factors and multiply by 79 million – the number of baby boomers heading into retirement over the next two decades. As Alicia Munnell, who directs the Center for Retirement Research at Boston College, puts it: “This is a new level, and we will be staying here. The United States, like many other developed countries, will have a high ratio of retirees to workers.”
This situation can be ameliorated by people staying in the labor force longer. More workers per retiree will increase national output, reduce the burden on the young, and increase the retirement security of the old. And as Professor Munnell notes, “As a human being, we know that work makes life easier. It gives structure, a social environment, friends – it has a lot of positive aspects.”
Mark explores those factors, paying close attention to what older workers see as the positive aspects of remaining on the job – the desire to contribute, to stay connected, to coach. There are a lot of “to be sures” to this trend. Older workers sometimes are seen as blocking younger ones. Older workers may not be as adaptable or energetic as they once were. Older workers often need flexible hours and special equipment.
While there are many reasons for the silver-collar rise, the underlying one is probably this: The hammock may be tempting, but deep down we don’t want to stop making a difference. That’s why we work. Plus, there’s that paycheck.
* * *
Note to readers: Our tech team recently discovered that the e-mail address firstname.lastname@example.org has been malfunctioning. Several months of e-mails have vanished. It’s fixed now. If you’ve sent a note – whether a cheer or jeer – that got no response, it would be great if you could resend.
If you have spent time inside a modern national political convention, you know it is a noisy, anachronistic affair populated mostly by minor-league politicians, campaign advisers, journalists, and the same pundits you see every night on TV – all of whom mill around for most of the day and come alive for a few prime-time speeches.
So what’s the purpose of a convention these days? We asked Robert Lehrman to take on that question (you can read his observations here).
Conventions began as a reform movement, designed to break the backroom dealmaking in Washington that determined candidates for president in the early 19th century. Instead of congressional insiders deciding, delegates from around the country would gather to choose a standard-bearer. It was a radical idea at the time.
Over the next 150 years, conventions developed into sprawling, backslapping reunions with balloon drops, brass bands, straw hats, and their own forms of backroom dealmaking. By the mid-20th century, they were no longer the solution but the problem. The low point came in 1968 with public outrage at the mayhem inside and outside Chicago’s International Amphitheatre during the Democratic National Convention.
A new reform movement in the 1970s turned primaries and caucuses into the vehicle for selecting presidential candidates. So it’s worth repeating: What’s the purpose of a convention? Well, for one thing, you could see conventions as showcases for political speech.
Political speech is an ancient rhetorical form that can seem fusty in the age of Twitter, texting, and TED talks. But from Pericles to Winston Churchill, Henry V to Nelson Mandela, a well-crafted speech has been able to stir hearts, rally support, and even bend the curve of history. Words delivered in just the right way at just the right moment can become a kind of secular hymn that we keep humming to ourselves long after the applause has faded. Here are a few politicians’ turns of phrase that have stayed with me over the years:
“A settler pushes west and sings his song, and the song echoes out forever and fills the unknowing air. It is the American sound: It is hopeful, big-hearted, idealistic – daring, decent, and fair.” (Ronald Reagan)
“Hope in the face of difficulty, hope in the face of uncertainty, the audacity of hope: In the end, that is God’s greatest gift to us, the bedrock of this nation, a belief in things not seen, a belief that there are better days ahead.” (Barack Obama, 2004)
“America is never wholly herself unless she is engaged in high moral principle. We as a people have such a purpose today. It is to make kinder the face of the nation and gentler the face of the world.” (George H.W. Bush, 1989)
Conventions are of fading importance in the mechanics of electoral politics. But if you appreciate the power of political speech, a convention can at least serve this purpose: It is where you can hear words that ennoble us and ideals that, if only for a moment, stir our hope for the future.
Where does intelligence come from? Biologists look to organic structures. Psychologists study influences and experiences. Theologians look to the spiritual.
All can agree that intelligence needs care and feeding. For that we have teachers to thank – parents, an aunt or uncle who takes an interest in us, a religious guide or workplace mentor, a thoughtful friend.
That’s the informal network. The formal one is the subject of this week’s cover story: schoolteachers. There are more than 7 million teachers in the United States, and millions more abroad. Teachers, writes Monitor staffer Amanda Paulson, are the most important factor in student learning – more important than textbooks, tests, computers, classrooms, peer groups, study guides, or any other aspect of the education industry.
Almost all of us have felt the embers of intelligence glow because of a teacher’s careful attention. For me – and for generations of 12th-graders at William B. Travis High School in Austin, Texas – Jane Smoot was that teacher. Her love of prose and poetry inspired students across four decades. Her enthusiasm when a young writer assembled a composition in a way that conveyed the essence of an idea made you want to do even better next time.
But what makes a good teacher? And how can we make more of them? Amanda seeks answers. In our age of metrics, it is tempting to impose a formula for teaching excellence. As we know from standardized testing of students, data can be useful in charting progress (or slippage) over time and ensuring that no student is left behind. But metrics cannot capture the quicksilver of excellent teaching. For that, more qualitative observation is needed. Amanda explores that complex and controversial process.
For extra credit in understanding where excellence comes from, I recently checked out a book called “Burned In: Fueling the Fire to Teach.” It is a series of short essays by teachers about what motivates them, with an emphasis on how to avoid the too-common problem of burnout. In the first three years of their careers, half of all teachers get discouraged and quit. That’s an alarmingly high attrition rate. They feel dispirited, disrespected, or out of their depth. They feel impoverished, frustrated, swamped by paperwork and meetings. They usually feel all of those burdens simultaneously.
And yet, as Michael Dunn, a veteran high school English teacher, marvels, think of what they do: Each fall “there’s a whole new crop of human beings to grow, who never knew they could write a sonnet, or how effectively they are ‘played’ each day by advertisers, or how blessed they are to be able to question their government, or how passionate Emily Dickinson really was.…”
Rosetta Marantz Cohen, a professor of American studies and education at Smith College in Northampton, Mass., has studied teachers who have stayed happy and committed. They are sustained, she says, not just by the mission of nurturing young minds, or by the high calling of safeguarding civilization. Most often, she finds, they love their subject – literature, language, chemistry, math. They pursue their subject in and outside the classroom.
Students know when teachers teach what they love. We knew that about Miss Smoot. You probably have a Miss Smoot in your life, too. Thank that person for stirring the embers of your mind.
John Yemma is editor of the Monitor. You can reach him at email@example.com.
Some of the most celebrated TV shows of this era are also some of the most violent: “The Wire,” “Breaking Bad,” “The Sopranos.” It’s the same with movies – from “Pulp Fiction” to “Batman.” And video games. And books. Violence has a long and rich tradition in the arts as a plot accelerant and drama intensifier. Shakespeare employed it liberally. Homer, too.
But for as long as there have been arts, people have wondered what effect violent content has on behavior. Plato wanted to keep young people away from intense plays, which he felt skewed proper character development. His student Aristotle disagreed, arguing that drama had a cathartic effect, channeling off anger and aggression.
We’re a little closer today to settling the issue, which often arises after an act of mass violence such as the one in Aurora, Colo.
Common sense says make-believe images must have an effect. If the media were unable to sway thought, a hole the size of Gotham City would be blown in the advertising industry. There would be no reason to saturate the airwaves with political ads, sponsor sports spectacles, or launch big marketing campaigns.
A mountain of behavioral studies has been amassed in recent years showing that media messages are, indeed, persuasive. That goes not just for consumer choice. A 2006 review by L. Rowell Huesmann and Laramie D. Taylor for the University of Michigan’s Institute for Social Research pulled together dozens of careful studies. Conclusion: “Media violence poses a threat to public health inasmuch as it leads to an increase in real-world violence and aggression.” Every category of violent imagery had an effect: fictional TV and film, TV news, video games.
At the same time, recent neuroscience experiments appear to indicate that prolonged exposure to violent and aggressive imagery decreases inhibitions about actual violence and aggression. While all such research remains provisional, the preponderance of evidence does seem to side with Plato, not Aristotle.
Prof. James Klagge, who teaches philosophy at Virginia Tech, has thought a lot about this age-old debate. Even Plato, in his desire to edit and censor, he notes, couldn’t come up with a mild alternative for young people to experience. He endorsed “The Iliad” and “The Odyssey,” which are every bit as bloody and intense as anything Christopher Nolan or Quentin Tarantino would produce today (e.g., the grim story of Polyphemus, the cyclops).
There’s little, it seems, we can do to rid our lives of violent themes. Professor Klagge points out that anyone who has tried to raise boys in an environment free from war toys or playtime shootouts almost always sees these influences assert themselves anyway. And if you recall the Bible story of Cain and Abel, the first two boys on earth, you know that Cain didn’t go to the movies or play Mortal Kombat to learn how to slay his brother.
So while a constant diet of violent images can’t be good for you, censorship doesn’t seem to work either. Which leaves – what?
How about moral education? The study and practice of doing what is right is not a silver bullet in a bullet-riddled age of entertainment. It is a long-term defense project – as old as religion and philosophy – designed to counter dark thoughts of violence and aggression and build a culture of character, honor, and respect.
John Yemma is editor of the Monitor.
The two rival strands of American environmentalism – nature untouched versus nature managed – can be traced back to John Muir and Gifford Pinchot.
Muir, founder of the Sierra Club, was a purist. Brought up in a strict religious household, he found spiritual uplift in wilderness, especially in the American West. The mountains and streams of the Sierra Nevada were his church; the forest was sacred. He wanted nature reserves left alone and believed the only resource humans should harvest from them was the restoration of the soul.
Pinchot, the first head of the US Forest Service, was pragmatic. The son of a wealthy developer of land and lumber, he saw forests and wild lands as assets to be exploited – albeit carefully and with consideration of the needs of future generations. Conservation, to him, was not about sequestration and prohibition. It was husbandry on a grand scale.
Let’s be honest. It is impossible to choose either philosophy exclusively. A cathedral of pines is at least as magnificent as Notre Dame. No skyscraper can compare to a mountainside bathed in sunrise. An alpine lake happened upon after a long hike; a sea of undulating prairie grasses; a waterfall – almost any waterfall – these are psalms for the human heart.
And how do humans get to experience them? Probably by burning nature’s hydrocarbons, drinking its water, and somewhere along the way employing its minerals and timber in support of life and livelihood. We may drive a hybrid, choose organic vegetables, and scrupulously recycle, but even the greenest of us has to admit that natural resources feed the superstructure of the civilization in which we live.
We are John Muir when we take a weekend walk and are awestruck by an encounter with a fawn. We are Gifford Pinchot when the alarm goes off on Monday morning. Every one of us balances purist aspirations with practical needs.
In a Monitor cover story, Todd Wilkinson explores that balance, focusing on ranching in the West. He introduces us to a new generation of ranchers who are concentrating on sustainable practices. Cattle lands that once would have been trampled and depleted are being managed in smart new ways that decrease the environmental impact and allow the region’s native flora and fauna to thrive.
Unlike the “sagebrush rebels” of a generation ago who saw environmentalism as silly and intrusive, these green ranchers consider healthy land and water crucial to current and future generations. As one rancher tells Todd: “Lots of different people talk about ways that agriculture needs to be sustainable, but we are living it.”
Todd knows the West. In a cover story last summer, he examined the complexities and tensions that have accompanied the return of wolf populations in the region. Attacks on cattle or sheep have been of particular concern. His green-ranching report is, in a sense, a follow-up. Sustainable ranching, it turns out, may offer a solution: When wildlife such as deer can find clean water, rich forage, and adequate cover on ranches, they flourish – and wolves have a shot at traditional prey rather than going after livestock.
Green ranching is where Muir and Pinchot blend. Humanity and nature can’t be separated. But nature can be handled with care.
John Yemma is editor of the Monitor.