I’m still waiting for the first accident involving an “autonomous vehicle.” Not because I want this particular set of technologies to fail, but because it will set off a discussion that is currently playing out only on the sidelines, in special interest groups and industry consortia, and which really needs a broader focus and much broader input.

The question at the heart of the first accident with a self-driving car is going to be: who is to blame?

Who is to blame?

Was the accident triggered by a malfunctioning algorithm that controls the car? Bugs are everywhere and unavoidable, after all, and who is to say that the software driving your car should be without fault? Or was the accident caused by the traffic infrastructure sending the wrong signals? Maybe the traffic light signalled red, but the electromagnetic signal the car relies on was misconfigured.

Maybe there was interference on the radio-frequency spectrum. Maybe even intentional interference. Did the car manufacturer integrate the software that drives the car correctly? And what about attendant data? Did the GPS unit report wrong information, causing the car to misjudge the velocity it was travelling at? What about the driver? Did they force the car into some kind of out-of-bounds situation that it just wasn’t able to cope with?
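To make that attribution problem concrete, here is a minimal sketch (in Python, with invented function names and thresholds, not anybody’s real fusion code) of the kind of plausibility check a car might run between its GPS-reported speed and its wheel odometry:

```python
# Hypothetical sketch: cross-check the GPS-reported speed against
# wheel odometry before trusting either. All names and thresholds
# here are invented for illustration.

def plausible_speed(gps_speed_ms: float, odometry_speed_ms: float,
                    tolerance_ms: float = 2.0) -> bool:
    """Return True if the two independent speed sources roughly agree."""
    return abs(gps_speed_ms - odometry_speed_ms) <= tolerance_ms

def fused_speed(gps_speed_ms: float, odometry_speed_ms: float) -> float:
    """Pick a speed estimate, falling back to odometry on disagreement.

    If the sources disagree, we have exactly the liability question from
    above: was the GPS unit wrong, the wheel sensor, or the fusion logic?
    """
    if plausible_speed(gps_speed_ms, odometry_speed_ms):
        return (gps_speed_ms + odometry_speed_ms) / 2.0
    # Disagreement: log it, because this record is what an investigation
    # would later need in order to assign blame.
    print(f"sensor disagreement: gps={gps_speed_ms} odo={odometry_speed_ms}")
    return odometry_speed_ms

if __name__ == "__main__":
    print(fused_speed(13.9, 14.1))  # sources agree -> averaged estimate
    print(fused_speed(27.0, 14.1))  # GPS glitch -> fall back, leave a trace
```

Even this toy version shows the shape of the problem: the moment two inputs disagree, someone has to decide which one to trust, and that decision is itself a place where blame can land.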

So really, who is to blame?

Early warning

The ensuing discussion around that question will be instrumental in shining a light on the coming world of the Internet of Things. Because that self-driving car accident might be an isolated incident, but it is also an early indicator of a much broader trend: networked and pervasive actuators.

The roll-out of distributed sensor networks is just the first step of the Internet of Things — the part where the Internet learns about the world. Ultimately, though, we want stuff to automatically act upon the information that was gathered, and that means rolling out actuators. That might just be automatic door-locks that open when someone authorised is geographically near, but it might just as well be traffic signalling and guidance systems, and autonomous vehicles.
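A minimal sketch of that door-lock example, assuming a hypothetical Lock API and a crude distance check (every name and the 50 m radius are invented for illustration):

```python
import math

# Hypothetical sketch of an actuator acting on location data. A real
# system would speak to actual lock hardware and a real positioning
# service; this just shows the shape of the decision.

class Lock:
    def __init__(self) -> None:
        self.open = False

    def unlock(self) -> None:
        self.open = True
        print("door unlocked")

def distance_m(a: tuple, b: tuple) -> float:
    """Crude equirectangular distance in metres between (lat, lon) pairs."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6_371_000  # mean Earth radius

def maybe_unlock(lock: Lock, door_pos: tuple, user_pos: tuple,
                 user_authorised: bool, radius_m: float = 50.0) -> None:
    # The actuator decision: note how many inputs (the position fix,
    # the authorisation state) it silently trusts.
    if user_authorised and distance_m(door_pos, user_pos) <= radius_m:
        lock.unlock()

maybe_unlock(Lock(), (52.52, 13.405), (52.5201, 13.4051), user_authorised=True)
```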

Lawyers love this

However, the big question around that roll-out is one that lawyers love: the question of liability.

Because when—and it’s going to be when, not if—bad stuff happens, it’s no longer easy to determine who fucked up.

As hinted at earlier, there are plenty of moving parts that determine how an actuator will react in a given situation, and of course, the more complex the actuator’s interactions with the world, the more complex the dependencies and conditions that actuator relies on.

Google’s self-driving car reportedly generates around 1 GB of data per second.

But what ensures that all that data is accurate? And what happens when you network those cars, so that each car relies on data generated by other cars? There are plenty of attendant considerations: can you ensure data is coming from where it claims to be coming from? How do you ensure it’s an accurate reflection of reality? How do you make it tamper-proof, but at the same time interoperable? Open, even?
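At least one piece of that puzzle, making a reading tamper-evident and tying it to a claimed origin, is well-understood cryptography. Here is a minimal sketch using a shared-secret MAC from the Python standard library; a real fleet of mutually distrusting cars would need asymmetric signatures and key infrastructure, which is exactly where tamper-proofing and interoperability start to pull against each other:

```python
import hmac, hashlib, json

# Sketch: a car attaches a MAC to each reading so a receiver sharing the
# key can detect tampering and check the claimed origin. The key and the
# message format are invented for illustration; HMAC only proves origin
# among parties that already share a secret.

SHARED_KEY = b"demo-key-do-not-use-in-production"

def sign_reading(reading: dict) -> dict:
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"reading": reading, "mac": tag}

def verify_reading(message: dict) -> bool:
    payload = json.dumps(message["reading"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

msg = sign_reading({"car_id": "car-42", "speed_ms": 13.9, "ts": 1700000000})
assert verify_reading(msg)            # untouched message verifies
msg["reading"]["speed_ms"] = 27.0     # tampered in transit...
assert not verify_reading(msg)        # ...and the MAC catches it
```

Note what this does not buy you: a valid MAC says the reading wasn’t altered and came from a key holder, but it says nothing about whether the reading accurately reflects reality.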

Beyond Privacy

We’re currently talking a lot about the privacy implications of the Internet of Things, and in the current political climate, that is a conversation it is about time we had. We should, however, not lose sight of the bigger picture.

We’ve already created systems that are no longer legible by humans. Kevin Slavin’s talk “How algorithms shape our world” has hopefully made the rounds far enough. All that information we collect or start collecting about the world – it wants to be acted on. Preferably in the real world. We need to think about liability chains and data provenance if we want this to be in any way manageable.
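One well-known building block for such liability chains is an append-only, hash-chained log: every record commits to its predecessor, so an investigator can later reconstruct, tamper-evidently, which component reported what, and when. A minimal sketch, with an invented record format:

```python
import hashlib, json

# Sketch of a hash-chained provenance log: each entry includes the hash
# of the previous one, so rewriting history breaks the chain. The record
# fields here are invented for illustration.

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(log: list, source: str, event: str) -> None:
    prev = entry_hash(log[-1]) if log else "genesis"
    log.append({"source": source, "event": event, "prev": prev})

def verify(log: list) -> bool:
    """Check that every entry still commits to its predecessor."""
    for i in range(1, len(log)):
        if log[i]["prev"] != entry_hash(log[i - 1]):
            return False
    return True

log: list = []
append(log, "gps-unit", "speed=27.0")
append(log, "fusion", "speed rejected, fell back to odometry")
append(log, "controller", "braking engaged")
assert verify(log)
log[0]["event"] = "speed=13.9"   # someone edits history after the fact...
assert not verify(log)           # ...and verification fails
```

It is only a building block, of course: it tells you who said what, not who is liable. But without that record, the question from the top of this piece has no answer at all.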

