In my first column, titled “Why We Can’t Live Without Autonomous Vehicles,” I shared a number of reasons why semi-autonomous and autonomous vehicles are being developed and why they will ultimately save us from ourselves. In this column I’ll explain the basics of how autonomous technology works and how it is being further developed.
There are three main components to every autonomous vehicle: cameras, radar, and lasers, the last of which is commonly referred to as lidar, short for Light Detection and Ranging (or, by some accounts, a portmanteau of “light” and “radar”).
The cameras function as the eyes, with two or even three cameras mounted to the front and rear of each vehicle. Two cameras mounted a short distance apart allow the vehicle to judge distance, much as our own eyes do.
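That two-camera distance trick can be sketched with the standard stereo-vision formula: depth equals the camera’s focal length times the baseline (the spacing between the two cameras) divided by the disparity (how far an object appears to shift between the two images). The camera parameters below are illustrative, not taken from any real vehicle:

```python
# Stereo depth sketch: depth = focal_length * baseline / disparity.
# All numbers here are made up for illustration.

def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Return the distance (meters) to an object seen by both cameras."""
    if disparity_px <= 0:
        raise ValueError("object must appear shifted between the two images")
    return focal_length_px * baseline_m / disparity_px

# An object shifted 40 pixels between images, with a 700-pixel focal
# length and cameras mounted 0.3 m apart:
depth = stereo_depth(focal_length_px=700, baseline_m=0.3, disparity_px=40)
print(round(depth, 2))  # 5.25 (meters)
```

Note how a nearer object produces a larger shift (disparity), and therefore a smaller computed depth, exactly the cue our own eyes rely on.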
Radar works by sending out pulses of high-frequency electromagnetic waves that reflect off objects and return to the source. Radar helps collect information about the presence, direction, distance, and speed of an object.
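The distance part of that measurement is a simple time-of-flight calculation: the pulse travels out and back at the speed of light, so the range is half the round-trip time multiplied by c. A minimal sketch:

```python
# Radar time-of-flight sketch: range = (speed of light * round-trip time) / 2.
# The division by 2 accounts for the pulse traveling out AND back.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def radar_range(round_trip_s):
    """Distance (meters) to the object that reflected the pulse."""
    return C * round_trip_s / 2

# A pulse that returns after 1 microsecond indicates an object ~150 m away:
print(round(radar_range(1e-6), 1))  # 149.9
```

Speed is measured separately, via the Doppler shift of the returned wave, which is why radar is particularly good at reporting how fast an object is closing in.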
Lidar, which you’re probably the least familiar with, acts much like radar but uses pulses of laser light instead of radio waves to measure distance by illuminating a target. Lidar systems can take more than 1 million separate readings per second.
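Each of those million-plus readings is a range plus the direction the laser beam was pointing; converting each one to x, y, z coordinates is what builds the “point cloud” the vehicle sees. A hedged sketch, with an illustrative angle convention:

```python
import math

# Lidar point-cloud sketch: each laser return is a range plus the beam's
# azimuth (horizontal angle) and elevation (vertical angle). Converting
# from those spherical coordinates to Cartesian x, y, z gives one 3-D point.

def lidar_point(range_m, azimuth_deg, elevation_deg):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left/right
    z = range_m * math.sin(el)                 # up/down
    return (x, y, z)

# One return: an object 20 m away, dead ahead, level with the sensor:
print(lidar_point(20.0, 0.0, 0.0))  # (20.0, 0.0, 0.0)
```

Repeat that conversion a million times a second, sweeping the beam around the vehicle, and the result is a dense 3-D snapshot of everything nearby.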
Each of these detection technologies has different ranges and fields of view, and each serves a different, yet redundant, purpose. When the data collected by these individual components is combined, a detailed 3-D map is created that allows the vehicle to navigate. This data allows the vehicle to steer itself down a road or highway, change lanes, and adjust speed in response to objects ahead or even traffic.
Another element of the autonomous vehicle is the computer inside it. The vehicle you drive today has about the same functionality and memory as the computer you have at home or at work. For autonomous vehicles, the computers need to be hundreds or even thousands of times more powerful to do all the processing and to merge all the information coming from the cameras, radar, and lidar. How much data, you ask? Intel has estimated that “by 2020, the average autonomous car will process upwards of 4,000 gigabytes of data per day, while the average internet user will process 1.5 gigabytes,” meaning that one single autonomous car will produce the same amount of data in a day as 2,666 internet users. Now do you understand why everyone is talking about investing in cloud computing?
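If you want to check Intel’s math, the 2,666 figure is simply the per-car daily volume divided by the per-user daily volume:

```python
# Sanity check on Intel's estimate: 4,000 GB per car per day versus
# 1.5 GB per average internet user per day.
car_gb_per_day = 4000
user_gb_per_day = 1.5

equivalent_users = car_gb_per_day / user_gb_per_day
print(int(equivalent_users))  # 2666 -- one car's daily data in "users"
```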
All of the data processed by these sensors is then fed back via the cloud, in real time, to a central hub so that each of the other autonomous vehicles can learn from it. This is called ‘fleet learning,’ and it is being perfected by the folks at Tesla. The data collected from each individual vehicle is aggregated into maps that let the central hub see the precise paths the cars take and don’t take. This information is then shared across the entire fleet of vehicles, so when one vehicle in the fleet travels down a particular road, they all, in effect, have traveled down that same road. These cars are constantly learning, recording, and sharing where they do and don’t actually travel.
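The aggregation step can be pictured as a toy sketch: each vehicle reports the road segments it actually drove, and the hub tallies those reports into a shared map the whole fleet can consult. The segment names below are made up, and real fleet learning is of course far richer than counting:

```python
from collections import Counter

# Toy fleet-learning sketch: vehicles report driven road segments;
# the hub aggregates them into traversal counts shared with the fleet.

def aggregate_paths(reports):
    """reports: list of lists of road-segment IDs, one list per vehicle."""
    counts = Counter()
    for path in reports:
        counts.update(path)
    return counts

fleet_reports = [
    ["I-80_exit12", "main_st"],   # vehicle 1's trip
    ["I-80_exit12", "oak_ave"],   # vehicle 2's trip
    ["main_st"],                  # vehicle 3's trip
]
shared_map = aggregate_paths(fleet_reports)
print(shared_map["I-80_exit12"])  # 2 -- two vehicles have driven it,
                                  # but every vehicle now "knows" it
```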
Fleet learning dramatically accelerates the learning curve and capabilities of each individual vehicle. As Elon Musk, the CEO of Tesla, said last month, “As more real-world miles accumulate and the software logic accounts for increasingly rare events, the probability of injury will keep decreasing.”
Besides sharing real-time information through the cloud, autonomous vehicles will also have vehicle-to-vehicle (V2V) communication capabilities. Transponders on each vehicle will broadcast its speed, heading, and braking status to anyone or anything within a range of approximately 300 meters. V2V technology has already advanced to the point where it can see around corners and convey a driver’s intent. V2V communication alone might be the single most important tool for solving traffic congestion. Don’t believe me? Watch this quick video.
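Conceptually, a V2V broadcast is just a small, repeated status message plus a range check on the receiving end. The sketch below is a simplification with made-up field names, not the actual V2V message format (real systems use standardized messages such as the Basic Safety Message over dedicated short-range radio):

```python
import math
from dataclasses import dataclass

# V2V sketch: each transponder broadcasts speed, heading, and braking
# status; a receiver considers messages from senders within ~300 m.
# Field names are illustrative, not from any real V2V standard.

@dataclass
class V2VMessage:
    x_m: float          # sender position (shared local coordinates)
    y_m: float
    speed_mps: float
    heading_deg: float
    braking: bool

def in_range(msg, my_x, my_y, max_range_m=300.0):
    """Is the sender close enough for this message to matter?"""
    return math.hypot(msg.x_m - my_x, msg.y_m - my_y) <= max_range_m

# A hard-braking car ~269 m away -- inside the 300 m broadcast radius:
msg = V2VMessage(x_m=250.0, y_m=100.0, speed_mps=20.0,
                 heading_deg=90.0, braking=True)
print(in_range(msg, my_x=0.0, my_y=0.0))  # True
```

Because the message carries intent (braking status, heading) rather than raw sensor returns, a receiving car can react to a hazard it cannot yet see, which is the “around corners” advantage.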
Lastly, you often hear autonomous cars classed by levels ranging from Level 0 to Level 4. Here is a quick explanation as provided by the National Highway Traffic Safety Administration:
- Automation Levels
- No-Automation (Level 0): The driver is in complete and sole control of the primary vehicle controls – brake, steering, throttle, and motive power – at all times.
- Function-specific Automation (Level 1): Automation at this level involves one or more specific control functions. Examples include electronic stability control or pre-charged brakes, where the vehicle automatically assists with braking to enable the driver to regain control of the vehicle or stop faster than possible by acting alone.
- Combined Function Automation (Level 2): This level involves automation of at least two primary control functions designed to work in unison to relieve the driver of control of those functions. An example of combined functions enabling a Level 2 system is adaptive cruise control in combination with lane centering.
- Limited Self-Driving Automation (Level 3): (“hands off”) Vehicles at this level of automation enable the driver to cede full control of all safety-critical functions under certain traffic or environmental conditions and in those conditions to rely heavily on the vehicle to monitor for changes in those conditions requiring transition back to driver control. The driver is expected to be available for occasional control, but with sufficiently comfortable transition time. The Google car is an example of limited self-driving automation.
- Full Self-Driving Automation (Level 4): (“brain off”) The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles, and such a design may not even include a steering wheel or brake pedal.
Now that you’ve passed Autonomous Vehicles 101, in my next column we’ll expand greatly into the who, what, when, where and why behind the first Level 3 and Level 4 autonomous vehicles you’ll likely see on the road. Hint: Commercial vehicles, 3-5 years, everywhere, too profitable not to.
I’ll also dive into issues in local and state transportation policy, funding, and major projects, and where autonomous vehicles can and will deliver solutions. Here in 2016 we celebrate the 60th anniversary of President Eisenhower’s Federal-Aid Highway Act, and since its inception in 1956, the nation’s focus has been all about building more and more roads. That era, defined by our access to, and quantity of, roads is over, and a new era focused on the quality of our existing roads is now upon us.