Driverless technology is the future of transportation, whether we like it or not. With much of the technology already available, one must ask why we aren’t making the final leap right now.
When it comes to defining driverless vehicles, there are six levels of automation, as defined by SAE International. Traditional vehicles, with no automated features at all, are level 0.
At the moment, there is a strong debate about how autonomous such vehicles should be. Many people argue for SAE level 3. This refers to conditional and situational automation, wherein the driver is expected to take control when the onboard system requests it. On the other hand, SAE level 5 technology refers to full automation where the human element is removed entirely.
Tesla’s current Autopilot and self-parking features sit at level 2, as these vehicles still require a human driver (although this is often due to legal challenges, as the company claims it has the hardware in place to support higher levels). The current debate, however, is whether level 5 is safe. After all, a car that drives itself doesn’t require a trained driver but, if something does happen, having an experienced and licensed driver on hand can ensure a human response when (or if) it is needed. It’s this debate between levels 3 and 5 (with level 4 simply being a more automated step up from level 3) that is a central issue in regulating autonomous vehicles.
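As an illustrative summary of the taxonomy above (the level names paraphrase SAE J3016; the code itself is just a sketch, not any manufacturer’s API, and the "driver required" rule follows this article’s framing that levels 3 and 4 still need a licensed human on hand):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six SAE J3016 driving-automation levels, 0 (none) to 5 (full)."""
    NO_AUTOMATION = 0          # traditional vehicle; the human does everything
    DRIVER_ASSISTANCE = 1      # a single assist, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2     # combined assists; human supervises (e.g. Autopilot)
    CONDITIONAL_AUTOMATION = 3 # car drives, but hands back control on request
    HIGH_AUTOMATION = 4        # a more automated step up from level 3
    FULL_AUTOMATION = 5        # the human element is removed entirely

def human_driver_required(level: SAELevel) -> bool:
    """Per this article's framing, every level below 5 still needs a licensed driver."""
    return level < SAELevel.FULL_AUTOMATION

print(human_driver_required(SAELevel.PARTIAL_AUTOMATION))  # True
print(human_driver_required(SAELevel.FULL_AUTOMATION))     # False
```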
Under the current ownership model, a car is usually owned and driven by one driver (and perhaps additional family members). After all, if you drive to and from work, your car is out of use for everyone else, right? This remains the case for cars at levels 3 and 4, as they can’t drive themselves away to serve anyone else.
But what if a level 5 car could take care of itself while you’re at work? In this situation, you do not need to exclusively own the vehicle. Many big manufacturers, such as BMW and Ford, are already exploring this area, with cars being ‘rented’ out rather than directly owned. Likewise, Tesla has suggested its own network, wherein your car can be hired when you’re not using it.
Of course, this has some practical issues. If your car is used by others, you might want to be careful about what valuables you keep in the vehicle, or about the condition in which the interior is returned. If you only drive at set times, the hire schedule can be built around yours. However, this requires the car to be highly autonomous, so that it can get from one user to another on its own. Existing ride-hailing networks, such as Uber and Lyft, are already investing in autonomous technology for this very purpose.
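The idea of fitting hires around an owner’s schedule can be sketched as a simple availability check (the commute windows below are invented for illustration; a real network would obviously need bookings, travel time and much more):

```python
from datetime import time

# Hypothetical owner schedule: the car is reserved for the owner's commute
# windows and available for hire the rest of the day.
owner_windows = [(time(8, 0), time(9, 0)), (time(17, 0), time(18, 0))]

def available_for_hire(t: time) -> bool:
    """True when t falls outside every owner-reserved window."""
    return all(not (start <= t <= end) for start, end in owner_windows)

print(available_for_hire(time(8, 30)))  # False: owner's morning commute
print(available_for_hire(time(12, 0)))  # True: midday, the car can earn its keep
```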
The question is whether drivers will be ready to accept this change. Knowing you always have a vehicle available provides a sense of security, but an efficient system of vehicle rental and autonomous usage could convince drivers to buy in.
Insurance and Liability
Next to the issue of ownership is, of course, the issue of liability. The owner of a car must insure it, so if we move to a fleet or rental-based model, manufacturers become more responsible for their vehicles. Fortunately, this is something many big companies seem prepared to accept: numerous car manufacturers, as well as Google, have already stated that they will accept liability for their cars while driving in autonomous mode.
But there is also the issue of the driver’s liability. A level 3 or 4 car still requires a driver at all times, for those instances when human input is needed. A level 5 vehicle doesn’t need someone in the driver’s seat so, in theory, could someone without a driver’s license own or rent such a vehicle?
Maintenance and Servicing
Vehicle owners are responsible for ensuring the car is in a roadworthy state. How does this work when the car is shared between users or directly owned by the network?
It’s worth mentioning here, of course, that such cars may still require a human element when it comes to maintenance. While a car can often detect faults – modern equipment can easily monitor tyre pressure, brake degradation, suspension and other areas, for instance – it’s not equipped to fully diagnose or make the repairs itself. That said, a level 5 vehicle could theoretically identify these issues and drive to an appropriate garage for repairs. A level 3 or 4 car could alert the driver, but it would still require a person to make the decision to service the vehicle.
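The split described above can be written as a toy dispatch rule (the fault names and messages are invented for illustration, not drawn from any real vehicle system):

```python
def on_fault_detected(level: int, fault: str) -> str:
    """Toy response to a detected fault, split by SAE automation level.

    A level 5 car can route itself to a garage; anything lower can only
    flag the problem and leave the servicing decision to a human.
    """
    if level >= 5:
        return f"navigating to nearest garage for: {fault}"
    return f"alerting driver: {fault} (service decision left to a human)"

print(on_fault_detected(5, "low tyre pressure"))
print(on_fault_detected(3, "brake degradation"))
```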
The same applies to EV charging. An autonomous car can easily find a charging station, but only a wireless charging infrastructure or human attendants at a plug-in station could enable the vehicle to charge itself. The issue here is, again, the ownership model: when a network or car manufacturer is responsible, they will have few objections to payments, but how comfortable are drivers with letting their car spend their money?
One of the biggest arguments against driverless cars is the question of what a robot or computer program will do when faced with a moral dilemma. A common example is a version of the trolley problem: what will a self-driving car choose when every available manoeuvre risks injuring a different group of people? If an outcome must be chosen, will it save the larger group, take the action with the highest chance of injuring no one, or save the driver?
Human drivers act on reflex, applying emergency braking as soon as they can. A machine can certainly react faster, but we have to accept that it will weigh different outcomes and, based on its programming, make a decision that a human might not have chosen. That said, a level 5 car could plausibly be trained to recognize and handle these situations: after all, not every driver is trained sufficiently to handle them either. The hardware exists; the challenge is developing software that performs reliably in life-threatening scenarios.
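To make "weigh different outcomes" concrete, here is a deliberately simplified sketch of one possible programming choice: score each candidate manoeuvre by its expected harm and pick the minimum. The manoeuvres, probabilities and group sizes are entirely hypothetical, and real systems are not known to work this way; the point is only that such a rule can select an action (here, swerving towards the larger group) that a reflex-braking human would not.

```python
# Hypothetical candidate manoeuvres, each with an estimated probability of
# injury and the number of people exposed. All figures are invented.
outcomes = {
    "emergency_brake": {"p_injury": 0.30, "people_at_risk": 2},
    "swerve_left":     {"p_injury": 0.10, "people_at_risk": 5},
    "swerve_right":    {"p_injury": 0.60, "people_at_risk": 1},
}

def expected_harm(o: dict) -> float:
    """Expected number of injuries = probability of injury * people exposed."""
    return o["p_injury"] * o["people_at_risk"]

# Choose the manoeuvre that minimises expected harm.
choice = min(outcomes, key=lambda name: expected_harm(outcomes[name]))
print(choice)  # swerve_left (0.5 expected injuries vs 0.6 for either alternative)
```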
Finally, there are some driving environments that smart, autonomous cars simply aren’t cut out for. A driverless car can easily read a city environment in good weather, as this involves clear, straight passageways with road markings (and traffic lights) that are easy to read.
However, the countryside or bad weather poses a bigger challenge. Many rural roads are completely unmarked, signs may be obscured, and mud or dirt may cover what road markings do exist. Heavy rain or snow can make autonomous driving nearly impossible. A human driver learns to adapt to these conditions (with varying degrees of success), but a machine can only operate according to its programming. If that programming is not prepared for imperfect conditions, the car becomes unsafe.
In terms of hardware, it appears that driverless cars are realistically possible and, as many already argue, inevitable. However, the software still has flaws and the legal battles are ever-raging. Perhaps most importantly, how much are drivers willing to trust a car they have little or no control over?