What will artificial intelligence look like in 15 years?

As the conversation about artificial intelligence grows louder, public perception of its eventual integration into everyday life has shifted from general fears to more specific questions about implementation.

Shizuo Kambayashi/AP: In this July 17, 2016, file photo, shoppers talk to SoftBank Corp.'s companion robot Pepper, equipped with a "heart" designed not only to recognize human emotions but to react with simulations of anger, joy, and irritation, at a store in Tokyo.

Whether they are assisting your doctor in surgery, driving your car, analyzing crime patterns, or cleaning and securing your home, artificial intelligence (AI) will play a big role in urban living by 2030. But to maximize the benefits of an AI-wired city tomorrow, experts and the public need to have a frank conversation today, according to the first report from Stanford University's One Hundred Year Study on Artificial Intelligence, released last week.

"As a society, we are now at a crucial juncture in determining how to deploy AI-based technologies in ways that promote, not hinder, democratic values such as freedom, equality, and transparency," the panel states in its report, which analyzes the role AI will play in the typical North American city in 2030, focusing on eight domains: transportation, home robots, healthcare, education, entertainment, low-resource communities, public safety and security, and employment and the workplace.

"Policies should be evaluated as to whether they foster democratic values and equitable sharing of AI’s benefits, or concentrate power and benefits in the hands of a fortunate few."

The report outlines not only the ways AI could be used throughout everyday life, but also how public opinion surrounding the implementation of AI has changed, and how it is likely to keep changing.

"In each domain there is a high potential for artificial intelligence technologies to improve the quality of life in the typical north American city by the year 2030, but in each case there are barriers to overcome, and [in] some more than others," Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts, tells The Christian Science Monitor.

With autonomous cars imminent – but still getting into accidents – and new Federal Aviation Administration (FAA) rules governing drone use, we have already encountered examples of both the barriers and the solutions: some are technological, like meeting safety standards, but others are social.

"There are definitely people who bring up fears that are spurred on by science fiction literature and movies, but then we also get a lot of interest in what we are doing," Dr. Stone tells the Monitor. "AI tends to be very polarizing: some people tend to be very excited about it, others are very fearful, and sometimes the same people have both of those different attitudes."

Often, doubts about AI's integration into society take on a dystopian tone. However, more mundane topics like job loss and inequality have come to dominate the conversation, Stone says. 

"Throughout history technological advances have affected the workplace," Stone tells the Monitor. "In the perceivable future, most jobs will not be replaced by AI technologies, but will be augmented or changed. The healthcare advances are not going to replace doctors, but they may change the skills that the doctors need or how doctors spend their time."

The question of exacerbating existing inequalities is more difficult. The report recommends starting a discussion now about how the additional wealth created by AI can be shared equitably and fairly – a point often overshadowed by worries about robots displacing human workers. AI methods can help plan equitable food distribution, for example, or spread health and safety information.

"Care must also be taken to prevent AI systems from reproducing discriminatory behavior, such as machine learning that identifies people through illegal racial indicators, or through highly-correlated surrogate factors, such as zip codes," the panel notes. "But if deployed with great care, greater reliance on AI may well result in a reduction in discrimination overall, since AI programs are inherently more easily audited than humans."

But AI's potential to help all communities, not just wealthier ones, also needs to be part of the conversation. Talking about it builds trust in new technologies, and the relationships that come with that trust help ensure AI is implemented in the most helpful way possible down the road. Building that trust has been challenging while AI debates seemed theoretical, but that is quickly changing.

Stone says that there are different pathways to trust. Some are technological solutions that would prove a machine's reliability, such as a driver's-license-style test that AI systems must pass before being deployed for public use. However, Stone believes that something as simple as increased exposure to AI will ultimately be most successful.

"People need to have first hand experience with them," Stone tells the Monitor. "I trust the various applications on my computer not because someone certified them, but because I have used them hundreds of times and I have seen the same behavior over and over again. Once people start seeing autonomous cars on the road and get experience with them stopping at the right time and starting at the right time, that will add to the trust." 
