Uber's been let off the hook by prosecutors in a case involving one of the company's autonomous test cars that fatally struck a pedestrian in Tempe, Arizona last year, reports Reuters. However, the backup driver behind the wheel will likely be referred to local police for further investigation, and could potentially face charges of vehicular manslaughter for the accidental killing of Elaine Herzberg.

The news should serve to advance our thinking on one of the biggest questions we've had for years about self-driving vehicles: who is at fault when a self-driving car gets in an accident? We might be a bit closer to the answer than we were a few years ago, but we don't yet have a complete rulebook for deciding who to blame in such instances.

The notion of accountability in such incidents has been discussed in articles dating back to 2013. The following year, John Villasenor wrote in The Atlantic that "manufacturers of autonomous vehicle technologies aren't really so different from manufacturers in other areas. They have the same basic obligations to offer products that are safe and that work as described during the marketing and sales process, and they have the same set of legal exposures if they fail to do so."

Describing how we deal with accidents involving conventional cars, Villasenor pointed to vehicular issues that could be traced back to automakers' negligence or manufacturing defects – in which case the company that produced the car could be held responsible.

Now, consider The Economist's breakdown of what went wrong in the Elaine Herzberg case, based on the report from the US National Transportation Safety Board (NTSB).

You can read the entire story here; it boils down to a system design failure, in which the car's perception module got confused and didn't correctly classify the object in front of it as a pedestrian. As a result, the automated braking system didn't prevent the collision, and instead required the human operator behind the wheel to hit the brakes.
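To make that failure mode concrete, here's a minimal, purely illustrative sketch of the handoff logic described above. All names and the confidence threshold are hypothetical – this is not Uber's actual software, just a toy model of a system that only auto-brakes when its classifier is confident, and otherwise defers to the human safety driver:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "bicycle", "vehicle", "unknown"
    confidence: float   # classifier confidence in [0, 1]

def plan_braking(detection: Detection, threshold: float = 0.8) -> str:
    """Decide who brakes: the automated system or the human operator."""
    known_obstacles = {"pedestrian", "bicycle", "vehicle"}
    # If perception confidently recognizes an obstacle, brake automatically.
    if detection.label in known_obstacles and detection.confidence >= threshold:
        return "auto_brake"
    # Otherwise defer to the human safety driver -- the weak link the
    # NTSB report highlights: a misclassified object never triggers
    # automatic braking, and everything hinges on human attention.
    return "alert_human"
```

In this toy model, a pedestrian the classifier fails to recognize falls straight through to `alert_human` – which is exactly where a distracted driver becomes fatal.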

The driver, Rafaela Vasquez, was allegedly watching a TV show on her phone at the time, and failed to notice Herzberg in time to stop the car from barreling into her. While it seems like this is entirely on Vasquez, consider the fact that the Yavapai County Attorney's Office, which reviewed the case, did not explain why it didn't find Uber liable for the accident.

It's worth noting that this case – in which the vehicle made a mistake, required the human driver to take action, and the driver failed to do so – involved a test vehicle, which means the technology hasn't yet been proven to work without fault, and hasn't been made available for use by the general public.

Now, consider a real-world scenario. Based on events as recent as yesterday, we know that if you tell car owners their vehicles can drive themselves, some of them will put that technology to the test by risking their lives. There have been numerous instances, dating back at least as far as mid-2016, of people sleeping in the driver's seat of their Teslas while in motion on public roads.

So should we worry about driverless cars getting us into trouble en masse when they become widely available? Probably not. The accidents we've seen thus far have, naturally, involved test vehicles, which are close to what's known as Level 5 autonomy – but not quite.

Ideally, humans shouldn't have to lift a finger when riding in a self-driving vehicle. And not to defend Vasquez, but they also shouldn't have to hit the brakes at a moment's notice when an autonomous system fails to do so at the very last second – that's not how people learn to drive. That leaves room for error, and in a way, it defeats the purpose of driverless vehicle technology.

Level 5 autonomy implies that vehicles should support fully automated driving, and not require human action during the course of a trip. To attain that sort of hands-off reliability, you need sophisticated sensors for vehicles to detect obstacles, as well as automated systems trained on millions of miles of test roads to fine-tune how these vehicles will react in every possible scenario.


These vehicles could also benefit from V2X systems (vehicle-to-everything communications, an umbrella term for systems that let vehicles talk to other vehicles, traffic infrastructure, toll booths, and more) to receive information about potential dangers and driving conditions beyond their line of sight. While all these technologies are in the works and being tested in earnest, we simply aren't there yet.
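As a rough illustration of the V2X idea, here's a toy hazard-broadcast message in Python. The message fields and names are invented for this sketch; real V2X deployments use standardized message sets (such as SAE J2735) over dedicated radio links, not ad-hoc JSON:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class HazardAlert:
    """A hypothetical vehicle-to-everything hazard broadcast."""
    sender_id: str     # identifier of the broadcasting vehicle or roadside unit
    hazard_type: str   # e.g. "pedestrian_crossing", "ice", "stopped_vehicle"
    lat: float         # hazard location, latitude
    lon: float         # hazard location, longitude

def encode_alert(alert: HazardAlert) -> str:
    # Serialize the alert for broadcast to nearby vehicles and infrastructure.
    return json.dumps(asdict(alert))

def decode_alert(payload: str) -> HazardAlert:
    # Reconstruct the alert on the receiving side.
    return HazardAlert(**json.loads(payload))
```

The point is the shape of the exchange: a roadside unit or another car can warn a vehicle about a hazard it cannot yet see with its own sensors.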

Before self-driving cars take over our roads, we've got a lot more work to do in terms of perfecting the technologies that will drive them, drawing up and enforcing safety standards for autonomous vehicles, and setting realistic expectations for how this mode of transportation will work. Even then, we won't be able to protect ourselves from our own mistakes.
