Levels of Self-driving cars and Punishing robots

There are Levels to this Self-driving thing

Straight to business. Last week, I worked on a project on Autonomous Driving Systems (ADS), which we commonly refer to as self-driving cars. Did you know there are levels, like literal levels, when it comes to self-driving cars? SAE International defines six of them, from 0 to 5.

A Level 0 car has no automation; most cars on our roads fall into this category.

Level 1 cars have specific functions that assist the driver, like Lane Keeping Assistance (LKA) to keep the car in its lane and Adaptive Cruise Control (ACC) to regulate speed. Many Honda Civics, Jeeps, and BMWs ship with Level 1 features.

Level 2 is partial automation: the vehicle can accelerate, decelerate, steer, and brake on its own, but the driver must stay engaged and supervise at all times.

Level 3 is conditional automation: the car can operate itself under certain conditions and alerts the driver when they need to take over. The Audi A8 (with its Traffic Jam Pilot) was designed for Level 3; Tesla's Autopilot and General Motors' Super Cruise are often mentioned alongside it, though both are generally classified as Level 2 because the driver must remain attentive.

Level 4 is high automation: the car can carry out all driving functions and emergency responses and does not need a human in the driver's seat. There are no such cars on the road yet, but there are prototypes in development, like Aston Martin's Lagonda Vision, Renault's Symbioz, and Alphabet's Waymo.

Level 5 will involve no human driver at all and will fundamentally redesign the car as we know it, with no room for a driver's seat. That is still a development for the future.
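The six levels above boil down to a simple classification. Here is a minimal sketch in Python (the level names follow common SAE shorthand, but the capability comments and the `driver_required` helper are my own paraphrase, not official wording):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, as summarized above."""
    NO_AUTOMATION = 0           # human does all the driving
    DRIVER_ASSISTANCE = 1       # one assist feature, e.g. LKA or ACC
    PARTIAL_AUTOMATION = 2      # steers and accelerates/brakes together; driver supervises
    CONDITIONAL_AUTOMATION = 3  # drives itself but may ask the driver to take over
    HIGH_AUTOMATION = 4         # no driver needed within a defined operating domain
    FULL_AUTOMATION = 5         # no human driver at all

def driver_required(level: SAELevel) -> bool:
    """A human must be ready to drive at Levels 0 through 3."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(driver_required(SAELevel.PARTIAL_AUTOMATION))  # True
print(driver_required(SAELevel.HIGH_AUTOMATION))     # False
```

The dividing line the sketch highlights is the jump from Level 3 to Level 4: that is the point where the human stops being a fallback.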

Liability vs. Innovation

I read a paper titled 'Punishing Robots — Issues in Economics of Tort Liability and Innovation of Artificial Intelligence'. At points it got mathematical, but the crux of the author's argument was that increased liability risk doesn't necessarily hamper innovation, contrary to the popular assumption. Most manufacturers already factor in liability and insure against it when producing, so it rarely holds them back from innovating. What plays the stronger role is the potential reward: if the returns are significantly higher than any likely liability fallout, the product is worth investing in; if not, it may never see the light of day. The author also proposed that when self-driving cars go mainstream, there should be special driving licenses and retraining programs for their users.
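The paper's core economic point can be illustrated with a toy expected-value calculation. Everything below is hypothetical: the numbers, the `worth_launching` helper, and the single-number model of risk are mine, purely to show the shape of the argument, not anything from the paper itself:

```python
def worth_launching(expected_revenue: float,
                    liability_probability: float,
                    liability_cost: float,
                    insurance_premium: float) -> bool:
    """Toy model: a manufacturer launches if expected returns exceed
    the cost of liability risk. Insurance converts an uncertain
    liability into a fixed premium, which is one reason liability
    risk alone rarely blocks innovation."""
    expected_liability = liability_probability * liability_cost
    # insure whenever the premium is cheaper than the expected loss
    cost_of_risk = min(expected_liability, insurance_premium)
    return expected_revenue > cost_of_risk

# Hypothetical numbers: $500M expected revenue, a 2% chance of a
# $1B liability event, insurable for a $30M premium.
print(worth_launching(500e6, 0.02, 1e9, 30e6))  # True: rewards dominate
```

With these made-up numbers the expected liability is $20M against $500M in revenue, so the product launches; shrink the revenue below the cost of risk and it never sees the light of day, which is exactly the paper's point about rewards, not liability, driving the decision.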

What do you think about all this? I’d like to hear.

Analyst/Emerging Tech Lead, Tech Hive Advisory | AI Ethics & Governance Researcher