Anyone who has ever wanted to speak a foreign language should meet Markus Baur, a research programmer at the Janus Project of Carnegie Mellon University in Pittsburgh. He is talking to a computer:
"I'm actually out of town. How about Monday?"
On-screen, the machine types out what it thinks it heard, spits out a paraphrase, then types and says out loud a German translation: "Guten Tag. Können Sie Montag treffen?"
That first part isn't right, but the machine gets the second sentence. So Mr. Baur keeps the conversation going: "Monday afternoon sounds great. How about 2 o'clock over at my place?"
A few seconds later: "Ja, Montag Nachmittag geht es bei mir ganz gut. Können Sie um zwei Uhr treffen?"
Baur: "OK, let's meet Monday afternoon."
Computer: "OK Montag Nachmittag."
Speech-recognition programs are already moving into the consumer market. In the laboratory, scientists are pushing this technology in several powerful directions, including real-time translation. The Janus Project is one of many worldwide efforts to bridge the language chasms that separate the people of the world.
It's not an easy assignment, for several reasons. First, the computer has to decipher continuous speech (the way people really talk) as opposed to the pause - after - every - word delivery that consumer-oriented dictation programs still require. It also has to understand speakers immediately, whatever their accent; today's consumer-grade programs must first train themselves to recognize a particular speaker. Then there are what the scientists euphemistically call "spontaneous effects in speech."
These are the hesitations and false starts, the ungrammatical sentences and colloquial expressions, which people use in everyday speech. Cough, and today's consumer-grade dictation software will try to type out a word. The Janus Project's software instead tries to recognize the "ums" and "ers" of conversation so it can throw them away. It even has a code for lip-smacking.
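The filtering idea is simple to picture: recognize the filler sounds explicitly, then discard them before translation. Here is a minimal sketch of that idea in Python; the filler tokens and the "<smack>" lip-smack code are illustrative assumptions, not the Janus Project's actual symbols.

```python
# Illustrative filler tokens; real systems model these acoustically.
FILLERS = {"um", "er", "uh", "<smack>"}

def strip_fillers(tokens):
    """Drop filler tokens so only content words reach the translator."""
    return [t for t in tokens if t.lower() not in FILLERS]

print(strip_fillers(["Um", "how", "about", "er", "Monday", "<smack>"]))
# -> ['how', 'about', 'Monday']
```

The point is that the recognizer does not ignore an "um" the way a person does; it has to model the sound well enough to label it, and only then can it throw it away.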
Real-time translation requires a number of complicated steps. The computer program has to hear the stream of sound accurately (which is tougher than it might seem). It has to cut that sound up into words, then extract the meaning from those words.
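Those steps can be pictured as a pipeline of three stages. The sketch below uses trivial stubs in place of a real acoustic model, word segmenter, and meaning extractor; the function names and the toy "intent" representation are inventions for illustration only.

```python
def decode_audio(audio):
    """Stage 1: turn the raw sound stream into recognized text.
    (Stub: a real system runs an acoustic model here.)"""
    return audio.strip()

def segment(text):
    """Stage 2: cut the recognized stream into words."""
    return text.split()

def extract_meaning(words):
    """Stage 3: map the words to a crude meaning representation."""
    intent = "schedule" if "Monday" in words else "unknown"
    return {"intent": intent, "words": words}

result = extract_meaning(segment(decode_audio(" how about Monday ")))
print(result["intent"])  # -> schedule
```

Each stage feeds the next, which is part of why the problem is hard: an error in hearing the sound propagates into the segmentation and the meaning.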
We don't think it's hard because we practice these skills every time we talk to another person. "You sometimes get a sense of how complex a skill it is when you meet someone from a different culture," says Alexander Rudnicky, a senior systems scientist at another speech-recognition lab at Carnegie Mellon. "Even if they know English, it's very difficult to communicate, because there are a whole lot of assumptions built in."
And if a foreigner has trouble picking up on the local mannerisms and colloquialisms that make communication easier, computers have a much harder time.
The Janus software does not make direct translations into a foreign language. Instead, it converts all phrases into its own language, called Interlingua, then translates them into any one of the seven languages it currently handles, including English, German, Spanish, Japanese, and Korean. Within a year, the researchers hope to add 13 new languages, including Arabic and Russian.
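The advantage of a pivot language is that each new language needs only two mappings (into and out of the pivot) rather than a translator for every language pair. A hedged sketch of the idea follows; the tiny phrase tables and concept tuples are invented examples, not Janus's actual Interlingua.

```python
# Map source sentences to a language-neutral "concept" (toy interlingua).
TO_INTERLINGUA = {
    "ok, let's meet monday afternoon.": ("AGREE", "MON_PM"),
}

# Render each concept into any target language independently.
FROM_INTERLINGUA = {
    "de": {("AGREE", "MON_PM"): "OK, Montag Nachmittag."},
    "es": {("AGREE", "MON_PM"): "De acuerdo, el lunes por la tarde."},
}

def translate(sentence, target):
    """Pivot translation: sentence -> concept -> target language."""
    concept = TO_INTERLINGUA[sentence.lower()]
    return FROM_INTERLINGUA[target][concept]

print(translate("OK, let's meet Monday afternoon.", "de"))
# -> OK, Montag Nachmittag.
```

With N languages, the pivot design needs roughly 2N mappings instead of N×(N−1) direct translators, which is what makes adding 13 more languages a plausible goal.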
As impressive as this sounds, the software still only has a 5,000-word vocabulary and it gets lost if people start speaking about something other than scheduling a meeting. It will also take time for the software to get small enough and the hardware powerful enough so people can turn a notebook computer into a portable translating device.
But scientists are working hard in all these areas. The universal translator is coming. Ja.
* Send comments to firstname.lastname@example.org or visit my In Cyberspace forum at http://www.csmonitor.com