Fancy math takes on je ne sais quoi

English rules the Internet, which can be a frustrating thing for the world's 1.3 billion Chinese speakers and 322 million Spanish speakers. They outnumber Anglophones. Even online, two-thirds of users speak something other than English at home.

So when someone promises a smoother and easier translation program, people around the world tend to perk up their ears. It's a step closer to a truly "worldwide" Web where every page would be available for everyone to read in his or her own language.

The latest step comes later this month when the National Institute of Standards and Technology (NIST), an arm of the United States government, announces the results of its tests of several machine-translation systems. The agency is expected to give top honors not to the linguistically savvy programs built at universities and elsewhere, but to a newcomer: Internet search company Google. Google's apparent success suggests that a new approach to translation - fancy math rather than linguistic know-how - may be the way forward in a field that has struggled with the nuance and ambiguity of human language.

"Nobody in my team is able to read Chinese characters," says Franz Och, who heads Google's machine-translation (MT) effort. Yet, they are producing ever more accurate translations into and out of Chinese - and several other languages as well.

To demonstrate the software's prowess during a recent media tour of Google's headquarters in Mountain View, Calif., Mr. Och displayed an Arabic newspaper headline. One commercially available MT program translated it: "Alpine white new presence tape registered for coffee confirms Laden." Then he displayed the translation from Google's prototype, which made considerably more sense: "The White House Confirmed the Existence of a New Bin Laden tape."

Of course, every MT program can point to strengths in its approach versus weaknesses in others', experts say. The key question is whether statistical systems have become powerful enough to outperform the labor-intensive, rules-based systems now available.

"These translations were impossible a few years ago," Och says. But the advent of ever-cheaper and faster data-crunching and the mushrooming number of online documents have changed the equation. Google has improved the algorithms for its MT program, he says, by feeding its computers the equivalent of 1 million books of text, using sources such as parallel translations of United Nations documents.

Google's MT system is still under development and not available to the public. But discussing it at an event for journalists and industry analysts suggests, observers speculate, that at least a test version may appear in the next few months.

"The results were very impressive, not the stupid machine translation you see on the Internet, which isn't really good," says Philipp Lenssen, who's been writing about Google in his online blog, Google Blogoscoped, since May 2003.

"This opens up a lot of new possibilities because you don't really want to read machine translation at the moment," Mr. Lenssen says. He speculates that it could be a perfect part of a Google Web browser, should the company decide to release one. A user might search the entire Web in his native language and have pages returned to him already translated. "You can apply it to so many situations," he says.

Many translations, one root

Today, nearly every translation service offered on the Web - AOL, AltaVista, Babel Fish, even Google's - is powered by translation technology developed by Systran. The company, based in San Diego and Paris, has been involved in MT for more than 30 years. Each day, it translates more than 25 million Web pages.

Rules-based MT involves years of hard work creating rules for translation between a pair of languages, says Dimitris Sabatakakis, chief executive officer of Systran. Using statistical methods, as Google does, is a well-known technique. "There is no technology breakthrough," he says. "Everybody does the same."

Machine translations, he says, work best if the original text is written with care to make it easily translatable, avoiding problematic or ambiguous words and phrases. More and more websites, especially those interested in e-commerce, are trying to create text that is easily translated, Mr. Sabatakakis says. Though machine translations are often less than perfect, he says, they're still useful to gain a quick idea of what a website is all about.

Today, Systran offers translations between 40 language pairs, and in the next 12 months it will add 40 more, he says.

Each of the two approaches to MT - hand-tailoring rules for translation between pairs of languages or using statistical analysis to detect patterns - has its strengths and weaknesses, says Robert Frederking, who teaches at the Center for Machine Translation at Carnegie Mellon University in Pittsburgh.

Rules-based systems are time-consuming and expensive to develop, but great for specialized tasks, such as translating a manual on bulldozers, which might have a number of specific and unique terms. "Systran has put literally hundreds of person-years over a 30-year period into building each language pair that they translate," Dr. Frederking says.

Statistical systems have yet to prove that they can produce superior translations, says Frederking, who hasn't seen the results of the most recent NIST evaluations. But doing well at NIST means more than showing off a few specific examples of better translations to reporters, he says.

Even evaluating the quality of translations is difficult and expensive, Frederking says. Since 2002, NIST has used a computer program called BLEU to do its evaluations. It works "reasonably well," he says.
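
BLEU's core idea is simple: count how many of the machine's word sequences (n-grams) also appear in human reference translations, and penalize output that is too short. A simplified single-reference sketch, not NIST's actual tooling, might look like this:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # "Modified precision": clip each n-gram count by the reference's.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = sum(cand_counts.values())
        log_precisions.append(math.log(max(overlap, 1e-9) / max(total, 1)))
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

hyp = "the white house confirmed the existence of a new tape"
ref = "the white house confirmed the existence of a new bin laden tape"
print(round(bleu(hyp, ref), 2))  # below 1.0; only a perfect match scores 1.0
```

Because the score rewards overlap rather than understanding, it is a proxy for quality - hence only "reasonably well."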

Unofficially good

The results of the NIST evaluation won't be released until later this month. "Google did do very well," says Mark Przybocki, the machine-translation project coordinator at NIST, without confirming Google's score. Some 20 research groups asked to be evaluated, each trying new techniques not yet in commercial use. Each group was given 100 news items to translate from Arabic and Chinese into English.

Both rules-based and statistical MT systems can stumble badly on such general news text. One problem is the vast and ever-changing vocabulary. One analysis of The Wall Street Journal, Frederking says, found that 1 or 2 percent of each edition consists of words that had never before appeared in the paper. Zipf's Law, a statistical principle describing how a few words occur constantly while a long tail of words appears only rarely, implies that nearly every article will contain some uncommon words, he says. Unless a statistical MT program has seen those words in many previous contexts, it can mistranslate them.
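
The effect is easy to see in miniature. In the hypothetical example below, a system's "archive" simply lacks several words from the next day's copy:

```python
# Stand-in texts; the words are invented for illustration.
archive = ("the market rose as investors weighed new data on inflation "
           "the dollar slipped against the euro").split()
today = "the market fell as traders digested a surprise ruling on tariffs".split()

vocab = set(archive)
unseen = [w for w in today if w not in vocab]
print(f"{len(unseen)} of {len(today)} words never seen before: {unseen}")
```

Scale the archive up to a million books and the fraction shrinks, but Zipf's long tail guarantees it never reaches zero.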

Proper nouns are a special challenge. Crooner Julio Iglesias, for example, shouldn't be rendered as July Churches, the literal English translation of his Spanish name. An MT system should be able to spot which words are names and leave them untranslated, he says. But even that doesn't help if the source is written in Japanese or Chinese characters. "You have to translate them into some kind of Latin letters," he says.
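
A naive version of that name-spotting tactic fits in a few lines. The capitalization heuristic and the tiny dictionary below are purely illustrative - and, as the quote above suggests, the trick breaks down entirely for scripts with no capital letters:

```python
# Invented mini-lexicon; a real system would use a full dictionary or model.
lexicon = {"julio": "july", "iglesias": "churches", "canta": "sings"}

def translate(sentence):
    out = []
    for word in sentence.split():
        if word[0].isupper():
            out.append(word)               # likely a name: copy it through
        else:
            out.append(lexicon.get(word, word))
    return " ".join(out)

print(translate("Julio Iglesias canta"))   # -> Julio Iglesias sings
```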

Frederking predicts that eventually rules-based and statistical methods will merge, with some knowledge of grammar and syntax being added to the statistical approach, making for translation programs that are both broad and deep.

Meanwhile, Google's announcement that it's working on a better MT system creates interest in the field, "and that's a good thing," says Sabatakakis of Systran. But "we know that there are no magic solutions. You don't learn a language with statistical methods."

Countries with the most Internet users (in millions):

1. United States: 185.6

2. China: 99.8

3. Japan: 78.1

4. Germany: 41.9

5. India: 37.0

6. Britain: 33.1

7. South Korea: 31.7

8. Italy: 25.5

9. France: 25.5

10. Brazil: 22.3

Source: CIA World Factbook
