Artificial intelligence passes the Turing test of penmanship

The program can recognize handwritten drawings after only viewing the figures a few times, and also passed a basic Turing test.

Joanne Ciccarello
Students study the ancient language Sanskrit at the Massachusetts Institute of Technology (MIT) under the instruction of Pallamraju Dugairala in this file photo from May 9, 2007.

The learning gap between humans and machines is closing.

Sanskrit, Tibetan, Gujarati, and Glagolitic were among 50 handwritten languages researchers used to test a computer program that proved as good as, or better than, humans at recognizing the figures – a cognitive step for machines, and a leap forward for the prospect that coders could build more sophisticated artificial intelligence (AI) in the future.

The program, developed by three researchers whose findings were published last week in Science, can recognize handwritten drawings after only viewing the figures a few times and also passed a basic Turing test.

“For the first time, we think we have a machine system that can learn a large class of visual concepts in ways that are hard to distinguish from human learners,” study coauthor Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology, said in a call with reporters.

The program – which the researchers call the Bayesian program learning framework, or BPL – in some instances performed even better than people.

In one test, both humans and the computer program were shown an image of a new character once and then tasked with picking another example of that same character from a set of 20 – and the program was up for the challenge.

People performed well, with an average error rate of 4.5%, but BPL beat out humans with an error rate of 3.3%.
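The shape of that one-shot classification task can be sketched with a toy baseline. The snippet below is not the BPL model from the Science paper – it is a naive nearest-neighbor matcher on invented 8x8 binary "characters" – but it shows the evaluation the error rates above refer to: see a single example of a new character, then pick its match from a lineup of 20.

```python
# Toy illustration of one-shot classification: one training example,
# one candidate per class, pick the nearest.  All data here is synthetic.
import random

random.seed(0)
GRID = 8 * 8  # each "character" is a flattened 8x8 binary image


def draw(proto, noise=0.05):
    """Produce a noisy handwritten copy of a prototype character."""
    return [1 - p if random.random() < noise else p for p in proto]


def hamming(a, b):
    """Count of pixels where two drawings differ."""
    return sum(x != y for x, y in zip(a, b))


# 20 distinct character classes, one random prototype each
prototypes = [[random.randint(0, 1) for _ in range(GRID)] for _ in range(20)]

errors, trials = 0, 200
for _ in range(trials):
    target = random.randrange(20)
    example = draw(prototypes[target])      # the single training example
    lineup = [draw(p) for p in prototypes]  # 20 candidates, one per class
    guess = min(range(20), key=lambda i: hamming(example, lineup[i]))
    errors += guess != target

print(f"error rate: {errors / trials:.1%}")  # low on this easy synthetic task
```

The real benchmark is far harder than this synthetic one – the candidates are genuinely novel handwritten characters, and pixel distance is nowhere near enough – which is what makes the 3.3% BPL figure notable.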

On a Turing test – a thought experiment devised in 1950 by British computer scientist Alan Turing to compare a machine's ability to think to that of a human – BPL consistently performed well. People could not tell the difference between figures drawn by the computer or human participants.  
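Scoring such a "visual Turing test" comes down to whether judges can beat chance. A minimal sketch, with invented judge responses purely for illustration: each judge guesses whether a drawing was made by the machine, and an identification rate near 50% means the machine's output is indistinguishable from a human's.

```python
# Toy scoring of a visual Turing test.  Judge responses are invented
# for illustration; each pair is (guessed "machine", truly by machine).
judgments = [
    (True, True), (False, True), (True, False), (False, False),
    (True, True), (False, False), (True, False), (False, True),
]
hits = sum(guess == truth for guess, truth in judgments)
rate = hits / len(judgments)
print(f"identification rate: {rate:.0%} (chance = 50%)")
```

In this made-up sample the judges score exactly at chance, which is the outcome reported for BPL's drawings.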

The researchers are hopeful that such performance may have applications for other systems that rely on symbols, like gestures, sign language, and the written word, and could even help teachers better understand how young students – new to language – learn.

Dr. Turing predicted that when computers are able to pass his test, "the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

As The Christian Science Monitor previously reported, “In keeping with the reigning behaviorist ethos of 1950, Turing used the word 'think' to refer not to internal mental states, but to measurable outward actions. Turing rejected the objection that a machine could not be said to think because it wouldn't feel like anything to be a machine, pointing out that, epistemically speaking, one has no way of knowing for certain that anyone other than oneself experiences feelings.”

Someone who has thought deeply about the implications of thinking machines is Elon Musk, the billionaire investor and founder of Tesla and SpaceX, who has now made two significant donations toward monitoring AI development.

At the start of 2015, he put $10 million toward the Future of Life Institute (FLI), a “volunteer-run research and outreach organization working to mitigate existential risks facing humanity.”

And here at year’s end, closely following news of a new Turing-passing program, Mr. Musk, along with founders of PayPal and Y Combinator, among others, committed $1 billion to the new OpenAI foundation.

“Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” the OpenAI team said, in a statement announcing the large capital gift. “Since our research is free from financial obligations, we can better focus on a positive human impact.”

