Lessons from the latest Google self-driving car crash

After a van ran a red light and hit an autonomous Google car, some are wondering if the two types of vehicles are ready to coexist.

A Google self-driving car goes on a test drive in Mountain View, Calif., in 2014.

Eric Risberg/AP/File

September 26, 2016

A collision late last week between a self-driving Google car and a human-operated vehicle has raised the question of how well human and computer drivers can share the road.

On Friday, a Lexus guided by Google's autonomous driving system was entering an intersection in Mountain View, Calif., when it was hit from the side by a van running a red light. While no injuries were reported, the collision is believed to be the worst yet involving an autonomous Google car.

"Human error plays a role in 94 percent of these crashes, which is why we're developing fully self-driving technology to make our roads safer," a Google spokesperson told ABC News in San Francisco.


This isn't the first high-profile crash involving a self-driving vehicle. Earlier this year, two semi-autonomous Tesla cars were involved in crashes, one of them fatal. In those incidents, the crashes stemmed from system blind spots and failures that could potentially be remedied by further innovation. In the Google crash, however, the collision was caused by human error on the part of the other driver. That has led many to question how safe computer-driven vehicles can be while they share the road with human drivers, who are more likely to take risks and violate traffic laws.

Officials and manufacturers alike have responded to uncertainties surrounding the technology with both innovation and regulation. Tesla has expanded the safety features of its vehicles, including an improvement earlier this month that allows a car to navigate using radar in addition to cameras, and last week the federal government rolled out a list of 15 benchmarks that autonomous cars must meet before hitting the streets.

But other safety concerns fall outside those measures, including the role that human error still plays in crashes.

In Friday's California crash, the Google vehicle waited six seconds after the light had turned green before proceeding into the intersection. Despite the damage the van inflicted on the car's passenger side door, neither of the two Google employees in the car sustained serious injuries.

Of some two dozen crashes involving Google cars in the past several years, only one occurred because of a system error, Google officials have said, and many involved a distracted driver rear-ending one of the Google vehicles.  


While the computer-operated vehicles may have a better track record, they will still have to share the road with less-than-perfect human drivers. Experts estimate that replacing every vehicle on the road with an autonomous one could take until 2060, and until then, plenty of errors can be expected from humans behind the wheel.

"The clear theme is human error and inattention," Chris Urmson, the former head of Google's self-driving program, wrote in a blogpost last year. "We'll take all this as a signal that we're starting to compare favorably with human drivers. Our self-driving cars can pay attention to hundreds of objects at once, 360 degrees in all directions, and they never get tired, irritable or distracted."