Without a doubt, one of the most fascinating pieces of technology people are talking about is the driverless car. The concept has been speculated about for years, and whilst driverless cars aren’t available to buy yet, they might not be as far away as you think.
Once confined to the realm of science fiction, driverless car technology is finally on the horizon, with large companies like Tesla investing in making the fantasy a reality. However, not all of the technology required to make this happen is new. Some components exist in a fledgling form, and some are already well-developed but need to be even more intuitive before the driverless car can be allowed to take on the hazards of the open road.
In this blog, we’ll be looking at the five main components that intersect to make up driverless car technology, as outlined in David Silver’s recent TED Talk.
Computer vision

Computer vision replaces the human eye in driverless cars and needs to be able to recognise everything we do, such as lane lines, other vehicles and obstructions in the road. High-quality cameras can already see further and more accurately than we can, but this technology needs to work with the car’s processor to react to what it is seeing, such as braking or switching lanes to avoid other vehicles.
Sensor fusion

Sensor fusion is the cohesive functioning of all of the sensors in the driverless car – such as laser range finders, bumper-mounted radars and ultrasonic sensors – so that it is able to understand and react to its surroundings. Some sensors work better in adverse weather, and others are better at measuring distance and velocity. In a situation where there is very little room for error, the data from all of these sensors needs to be integrated seamlessly.
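To make the idea concrete, here is a minimal sketch of how two noisy readings might be combined. The sensors, distances and variance values are all hypothetical, and real systems use far more elaborate techniques (typically Kalman filters), but weighting each reading by how much we trust it captures the core idea:

```python
# A minimal sensor-fusion sketch (illustration only): two hypothetical
# sensors both estimate the distance to the car ahead, and each reading
# is weighted by its confidence (the inverse of its variance).

def fuse(estimate_a, variance_a, estimate_b, variance_b):
    """Combine two noisy estimates into one, weighting by confidence."""
    weight_a = 1.0 / variance_a
    weight_b = 1.0 / variance_b
    fused_estimate = (weight_a * estimate_a + weight_b * estimate_b) / (weight_a + weight_b)
    fused_variance = 1.0 / (weight_a + weight_b)  # fused reading is more certain than either alone
    return fused_estimate, fused_variance

# Radar reads 20.5 m and is trusted more (low variance);
# an ultrasonic sensor reads 19.8 m but is noisier at this range.
distance, variance = fuse(20.5, 0.04, 19.8, 0.25)
```

Note how the fused estimate sits close to the more trusted radar reading, and its variance is lower than either sensor’s alone – this is why combining sensors beats relying on any single one.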
GPS

GPS technology has been around for a long time and is accurate enough for walking, or for driving when we’re at the wheel ourselves. However, when it comes to driverless car technology, GPS needs to be much more accurate. The GPS we use on our phones is accurate to within 1-2 metres, but that margin of error is too large for driverless cars to be viable, so the GPS they use is accurate to within 1-2 centimetres, achieved through sophisticated algorithms and triangulation.
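The positioning idea itself can be sketched in a simplified form. Real GPS works in three dimensions with signal timings from at least four satellites, but a flat, two-dimensional toy version shows how a position falls out of known distances to known points:

```python
# A simplified 2D illustration of positioning from known distances
# (real GPS solves the 3D equivalent using satellite signal timings).
# Subtracting the circle equations (x-xi)^2 + (y-yi)^2 = ri^2 pairwise
# leaves two linear equations in x and y, solved here by Cramer's rule.

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Find the point at distances r1, r2, r3 from points p1, p2, p3."""
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Reference points at (0,0), (10,0) and (0,10); the measured distances
# below place the receiver at approximately (3.0, 4.0).
position = trilaterate((0, 0), 5.0, (10, 0), 65 ** 0.5, (0, 10), 45 ** 0.5)
```

In practice, the centimetre-level accuracy the article describes comes from correcting these measurements against fixed ground stations, not from the geometry alone.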
Path planning

Path planning combines the route-finding of our current sat navs with real-time adjustments: changing lanes, avoiding obstructions and navigating around other vehicles through manoeuvring, acceleration and deceleration. Our current navigation systems show us how to get from point A to point B, including the lanes we need to be in and the turns we need to take. However, we still have to make all the necessary adjustments to avoid other vehicles and obey road signs, even if we’re travelling at a set speed. Driverless car technology needs to mimic that behaviour without us at the wheel.
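A toy version of the planning step can be sketched as a search over a small grid. This is an illustration only – production planners work in continuous space and account for vehicle dynamics – but it shows the essential loop: find the shortest viable route, and re-plan whenever an obstruction appears:

```python
# A minimal path-planning sketch: 0 is free road, 1 is an obstruction.
# Breadth-first search finds the shortest route of grid cells from
# start to goal; re-running it whenever the map changes gives a crude
# form of the real-time re-planning described above.

from collections import deque

def plan_path(grid, start, goal):
    """Return the shortest list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in visited:
                visited.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # no route exists

road = [
    [0, 0, 0],
    [1, 1, 0],  # an obstruction blocking two lanes
    [0, 0, 0],
]
route = plan_path(road, (0, 0), (2, 0))  # detours around the blockage
```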
Control

Control is where everything comes together. If all of the other components provide data in the same way our brain reads and responds to external stimuli whilst we follow a route, then control is our body acting on that information. Turning the wheel, accelerating and braking all need to be executed by the car in response to the data it is given by its sensors and its GPS in order to follow its trajectory safely.
This execution isn’t so simple. Just as there are good drivers and bad drivers, driverless car technology needs to be sophisticated enough to know which adjustments to make in different conditions, such as adverse weather, sharp bends and poor road surfaces.
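One widely used building block for this kind of adjustment is a PID controller, sketched below. The gain values and the drift scenario are made up for illustration; real vehicle controllers are tuned carefully and layered with models of the car’s dynamics:

```python
# A minimal control sketch: a PID controller steers to reduce
# "cross-track error" - how far the car has drifted from its planned
# trajectory. Gains (kp, ki, kd) here are arbitrary example values.

class PIDController:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0        # accumulates persistent error
        self.previous_error = 0.0  # used to estimate the error's trend

    def steer(self, error, dt):
        """Return a steering correction for the current cross-track error."""
        self.integral += error * dt
        derivative = (error - self.previous_error) / dt
        self.previous_error = error
        # Steer against the error: proportional + integral + derivative terms.
        return -(self.kp * error + self.ki * self.integral + self.kd * derivative)

controller = PIDController(kp=0.5, ki=0.01, kd=0.1)
# The car has drifted 0.4 m to the right of its trajectory; a negative
# correction steers it back to the left.
correction = controller.steer(0.4, dt=0.1)
```

The proportional term reacts to the current drift, the integral term cancels persistent bias (such as a crosswind), and the derivative term damps the response so the car doesn’t oscillate around its lane – a small taste of why “good driver” behaviour is hard to encode.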
What does the future hold?
We are well on our way to putting the auto in automobile, but much of the software still needs to be developed before driverless cars hit the road. According to BMW, experts have defined five levels in the evolution of autonomous driving. These are:
1 – Driver Assistance – where the vehicle’s systems support the driver but do not take control. This includes features like parking assistance and navigation systems.
2 – Partly Automated Driving – where the systems can take control, but the driver still operates the vehicle. This includes features like cruise control and steering assistance.
3 – Highly Automated Driving – where the driver can stop driving for extended periods of time and the vehicle will take over.
4 – Fully Automated Driving – where the vehicle drives independently most of the time, but the driver must remain able to drive.
5 – Full Automation – where the people in the vehicle are only passengers, and do not need to be able to drive as the vehicle assumes all driving functions.
According to this categorisation, we are comfortably at level 2 – Partly Automated Driving – at least in newer car models. The technology for level 3 is still being fine-tuned.
Driverless cars will be available in the near future, but it might be another 10 years or so, and even then, they will probably not be vehicles of ‘full automation’. We also don’t yet know how driverless cars will integrate with manually driven cars on the road, how they will react to unpredictable driver behaviour, and how they will comply with traffic laws.
However, with huge companies like Tesla intent on developing driverless car technology, these uncertainties will eventually be addressed, and it won’t be too long before we see the first driverless cars on the motorways.