Artificial intelligence passes the Turing test of penmanship

The program can recognize handwritten drawings after only viewing the figures a few times, and also passed a basic Turing test.

Students study the ancient language Sanskrit at the Massachusetts Institute of Technology (MIT) under the instruction of Pallamraju Dugairala in this file photo from May 9, 2007.

Joanne Ciccarello

December 13, 2015

The learning gap between humans and machines is closing.

Sanskrit, Tibetan, Gujarati, and Glagolitic were among the 50 handwritten alphabets researchers used to test a computer program that proved as good as, or better than, humans at recognizing the characters – a cognitive step for machines, and a sign that coders may be able to build more sophisticated artificial intelligence (AI) in the future.

The program, developed by three researchers whose findings were published last week in Science, can recognize handwritten drawings after only viewing the figures a few times and also passed a basic Turing test.


“For the first time, we think we have a machine system that can learn a large class of visual concepts in ways that are hard to distinguish from human learners,” study coauthor Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology, said in a call with reporters.

The program – which the researchers call the Bayesian program learning framework, or BPL – in some instances performed even better than people.
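According to the Science paper, BPL represents each character as a simple probabilistic "program" that composes stroke primitives and their spatial relations, rather than treating the image as a bag of pixels. The toy sketch below only illustrates that compositional idea; the primitive names, fields, and sampling scheme are made up for illustration and are not the paper's model.

```python
import random

# Stand-in stroke primitives; purely illustrative, not BPL's actual inventory.
PRIMITIVES = ["line", "arc", "hook", "loop"]

def sample_character_program(rng, max_strokes=4):
    """Sample a toy character 'program': an ordered list of strokes,
    each pairing a primitive type with a start position on the canvas.
    In BPL proper, such programs are learned from and fitted to images;
    here we only show the generative, part-based structure."""
    n_strokes = rng.randint(1, max_strokes)
    program = []
    for _ in range(n_strokes):
        program.append({
            "primitive": rng.choice(PRIMITIVES),
            "start": (round(rng.random(), 2), round(rng.random(), 2)),
        })
    return program

rng = random.Random(0)
print(sample_character_program(rng))
```

Because the character is a short structured program rather than a pixel template, generating a new example of the same concept amounts to re-running the program with small variations – which is what lets such a model produce fresh, human-plausible drawings from very few observations.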

In one test, both humans and the computer program were shown an image of a new character once and then asked to pick another example of that same character from a set of 20. The program was up to the challenge.

People performed well, with an average error rate of 4.5%, but BPL beat out humans with an error rate of 3.3%.
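Scoring such a 20-way one-shot task is straightforward: each trial counts as an error when the pick does not match the true character, and the error rate is the fraction of misses. The sketch below shows only that bookkeeping, with made-up trial data; it says nothing about how BPL or the human participants actually made their choices.

```python
def one_shot_error_rate(predictions, answers):
    """Fraction of one-shot trials answered incorrectly.
    predictions[i] and answers[i] are the chosen and correct
    indices (0-19) within trial i's set of 20 candidates."""
    assert len(predictions) == len(answers), "one prediction per trial"
    misses = sum(p != a for p, a in zip(predictions, answers))
    return misses / len(answers)

# Hypothetical results for ten trials; the last pick is wrong.
answers     = [3, 17, 8, 12, 0, 5, 19, 2, 11, 7]
predictions = [3, 17, 8, 12, 0, 5, 19, 2, 11, 6]
print(one_shot_error_rate(predictions, answers))  # 0.1
```

On this measure, the reported 3.3% for BPL versus 4.5% for people means the program missed roughly one trial in thirty while humans missed about one in twenty-two.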

On a Turing test – a thought experiment devised in 1950 by British computer scientist Alan Turing to compare a machine's ability to think to that of a human – BPL consistently performed well. People could not tell the difference between figures drawn by the computer and those drawn by human participants.


The researchers are hopeful that such performance may have applications for other systems that rely on symbols, like gestures, sign language, and the written word, and could even help teachers better understand how young students – new to language – learn.

Dr. Turing predicted that when computers are able to pass his test, "the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

As The Christian Science Monitor previously reported, “In keeping with the reigning behaviorist ethos of 1950, Turing used the word 'think' to refer not to internal mental states, but to measurable outward actions. Turing rejected the objection that a machine could not be said to think because it wouldn't feel like anything to be a machine, pointing out that, epistemically speaking, one has no way of knowing for certain that anyone other than oneself experiences feelings.”

Someone who has thought deeply about the implications of thinking machines is Elon Musk, the billionaire investor and founder of Tesla and SpaceX, who has now made two significant financial commitments to monitoring AI development.

At the start of 2015, he put $10 million toward the Future of Life Institute (FLI), a “volunteer-run research and outreach organization working to mitigate existential risks facing humanity.”

And here at year’s end, closely following news of a new Turing-passing program, Mr. Musk, along with founders from PayPal and Y Combinator, among others, committed $1 billion to the new OpenAI foundation.

“Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” the OpenAI team said, in a statement announcing the large capital gift. “Since our research is free from financial obligations, we can better focus on a positive human impact.”