Buckle Up for New Roads Ahead: Product Liability and Autonomous Vehicles

Adam Fogarty, M.S., P.E.
SEA, Ltd., Rolling Meadows

As technology marches forward, many of the tasks that people were once burdened with are being addressed by sensors, circuit boards, and processors. The rate of information transfer from Point A to Point B is ever accelerating, allowing for incredible paradigm shifts to occur across several industries.

One such shift is the push toward self-driving vehicles, a goal that has garnered much attention recently because hardware and software advancements have made it far more attainable. One such advancement is the implementation of Advanced Driver Assistance Systems (ADAS), which perform tasks such as adaptive cruise control, lane keep assist, forward collision warning, and blind spot monitoring.

Since a variety of sensors and processors are used to control ADAS features, manufacturers are able to apply similar hardware and object recognition techniques toward the goal of self-driving. Although each manufacturer constructs and implements its ADAS features differently, certain sensors and devices are commonly used. For a vehicle to drive itself, it must first be able to detect the world around it. To do this, devices such as ultrasonic sensors, RADARs, LIDARs, and cameras are being used in unique ways.

Ultrasonic sensors emit high-frequency sound pulses and listen for the echoes that return from nearby objects. Because sound waves are the method of detection, the color of a detected object is irrelevant, and the absence of light is not a hindrance. Such sensors are short range and can be small and lightweight. Therefore, they are commonly built into the front and rear bumpers of vehicles for close-range object detection. This can allow for park-assist features that warn drivers when they are too close to large objects, such as a nearby car or wall.
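To make the principle concrete, the sketch below (in Python, with hypothetical names and values) shows how an echo's round-trip time could be converted into a distance estimate, assuming sound travels at roughly 343 meters per second in air.

```python
# Illustrative sketch: converting an ultrasonic echo's round-trip time
# into a distance estimate. Function names and values are hypothetical.

SPEED_OF_SOUND_M_PER_S = 343.0  # approximate speed of sound in air at 20 C

def ultrasonic_distance_m(echo_round_trip_s: float) -> float:
    """Distance to the reflecting object, in meters.

    The pulse travels to the object and back, so the one-way
    distance is half of (speed x round-trip time).
    """
    return SPEED_OF_SOUND_M_PER_S * echo_round_trip_s / 2.0

# Example: an echo returning after 6 milliseconds implies an object
# roughly 1 meter away -- typical of a park-assist warning range.
print(round(ultrasonic_distance_m(0.006), 2))  # ~1.03 m
```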

RADAR sensors use electromagnetic (radio) waves for object detection. Both short-range and long-range RADAR systems are available, with effective ranges beyond 500 feet, unlike the much shorter reach of ultrasonic sensors. Like ultrasonic sensors, however, forward-facing RADAR units are typically mounted at the front of the vehicle. Their longer range of detection can allow for features such as adaptive cruise control, forward collision warning, and emergency braking.

LIDAR systems use a rotating, pulsed laser for object detection. Since a laser is used as the detection method, LIDAR has a long effective range, and can operate in the absence of daylight. LIDAR systems can measure the distance of objects with precision and accuracy and can produce a high-resolution image of the environment surrounding the unit. However, since the laser needs to rotate and sweep across the landscape, time is required for a complete image to form. Traditionally, it has been difficult to equip vehicles with LIDAR systems due to their size and cost. However, newer generations of LIDAR systems reduce the cost, size, and latency of information collection.
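As a simplified illustration of how a sweep of laser returns becomes a picture of the surroundings, the hypothetical Python sketch below converts each (bearing, range) return from a single horizontal sweep into x/y coordinates around the sensor; real LIDAR units produce far denser, three-dimensional point clouds.

```python
import math

def lidar_returns_to_points(returns):
    """Convert (bearing_deg, range_m) laser returns from one sweep
    into (x, y) coordinates relative to the sensor."""
    points = []
    for bearing_deg, range_m in returns:
        bearing_rad = math.radians(bearing_deg)
        x = range_m * math.cos(bearing_rad)
        y = range_m * math.sin(bearing_rad)
        points.append((round(x, 2), round(y, 2)))
    return points

# One simplified sweep: three returns at different bearings.
sweep = [(0.0, 12.5), (45.0, 8.2), (90.0, 3.1)]
print(lidar_returns_to_points(sweep))
```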

Cameras are found on consumer products across the globe, and their popularity has driven rapid advances: they are smaller, cheaper, and higher resolution than ever before. Cameras can be mounted on several different parts of a vehicle, provided they have an unobstructed view. With several cameras facing different directions, a full 360-degree vision envelope can allow a vehicle to see in all directions at all times. Additionally, cameras can operate in different parts of the spectrum, such as infrared, for more robust nighttime operation. However, one challenge is that a single camera does not directly measure the distance to the objects it observes.

By using combinations of ultrasonic sensors, RADARs, LIDARs, and cameras (among other components), vehicles can detect much of their surroundings in real time. With a variety of object detection methods, the strengths of one system can make up for the weaknesses of another. For instance, since most cameras rely on visible light to observe an object, nighttime driving or fog may become a challenge for the camera system. In that case, if a different sensor (such as a RADAR) is facing the same direction at the same time, the RADAR may remain effective where the camera is not.
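The hypothetical sketch below illustrates this idea of overlap in the simplest possible terms: two distance readings for the same object, each with an assumed confidence value, are blended, and a blocked or low-confidence reading (for example, a camera in fog) is simply discounted. Actual production fusion logic is far more sophisticated.

```python
def fuse_readings(camera, radar, min_confidence=0.5):
    """Combine overlapping camera and RADAR distance estimates.

    Each reading is a (distance_m, confidence) pair, or None when the
    sensor is blocked or cannot see the object.
    """
    usable = [r for r in (camera, radar) if r is not None and r[1] >= min_confidence]
    if not usable:
        return None  # no trustworthy reading this cycle
    # Confidence-weighted average of whatever remains usable.
    total_weight = sum(conf for _, conf in usable)
    return sum(dist * conf for dist, conf in usable) / total_weight

# Clear daytime: both sensors agree closely, so the result splits the difference.
print(fuse_readings((42.0, 0.9), (41.5, 0.8)))
# Dense fog: the low-confidence camera estimate is dropped; the RADAR carries the result.
print(fuse_readings((60.0, 0.2), (41.5, 0.8)))
```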

Each sensor collects data that must be processed and analyzed in order for a vehicle to recognize the environment. For instance, a camera records images in the form of a matrix of pixels, each of which carries a value that corresponds with a specific color. Therefore, rather than seeing objects directly, software systems are instead given long arrays of values. One technique used to decipher and analyze these arrays is known as a neural network.
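As a tiny illustration of what the software actually receives, consider the toy "image" below: a handful of brightness values rather than a picture. A real camera frame is the same idea at millions of pixels, usually with separate red, green, and blue values for each one.

```python
# A tiny grayscale "image": each number is a pixel brightness (0 = black, 255 = white).
image = [
    [ 12,  15,  11, 200],
    [ 14,  13, 198, 205],
    [ 16, 195, 202, 207],
    [190, 199, 204, 210],
]

# The recognition software never "sees" the bright region in the corner;
# it only receives this flat list of numbers and must infer patterns from them.
flat_values = [pixel for row in image for pixel in row]
print(flat_values)
```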

Neural networks are a software strategy inspired by the workings of the human brain. Rather than being instructed directly, they are instead trained using immense amounts of data. For this training to take place, an “answer sheet” must be created. Since humans are naturally able to recognize objects, they are used to create each answer sheet. For instance, a person can be shown a group of images, then they can select which images contain pictures of objects, such as cars, bicycles, road signs, lane lines, etc.

Once an answer sheet is created, neural network training can take place. A neural network begins as a blank slate attempting to answer a question. The structure of the network is a series of nodes connected to one another, similar to the network of neurons in a human brain. When the network is shown the correct answer to the question it is attempting to answer, the pathways through the network that lead to that correct answer are strengthened. As this process is repeated, the network becomes more effective at answering the type of question it was trained to answer.
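The drastically simplified Python sketch below captures the idea under stated assumptions: a single "node" with two adjustable weights starts as a blank slate, is repeatedly shown labeled examples from a tiny, hypothetical answer sheet, and nudges its weights toward the correct answer each time. Real driving networks involve millions of nodes and far more elaborate training, but the strengthening loop is the same in spirit.

```python
import math
import random

random.seed(0)

# Toy "answer sheet": each example pairs two hypothetical features
# (say, an object's width and height in the image) with a human-supplied
# label (1 = car, 0 = not a car).
answer_sheet = [
    ((0.9, 0.4), 1), ((0.8, 0.5), 1), ((0.85, 0.45), 1),
    ((0.2, 0.9), 0), ((0.3, 0.8), 0), ((0.25, 0.85), 0),
]

# The "network" here is a single node: two weights and a bias, all starting at zero.
weights, bias = [0.0, 0.0], 0.0
learning_rate = 0.5

def predict(features):
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # confidence between 0 and 1

# Training: show an example, compare the guess to the answer sheet,
# and nudge ("strengthen") the weights toward the correct answer.
for _ in range(1000):
    features, label = random.choice(answer_sheet)
    error = label - predict(features)
    weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
    bias += learning_rate * error

print(round(predict((0.9, 0.4)), 2))  # should be close to 1 (car)
print(round(predict((0.2, 0.9)), 2))  # should be close to 0 (not a car)
```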

If a neural network is being trained to recognize objects for the purpose of driving a vehicle, it is important to remember that the real world is full of unique circumstances which may fall outside of the network's training data. No two situations are the same, and random encounters are not uncommon. Therefore, when training a neural network for self-driving, the training answers must be as diverse as the real world. For example, if a network is shown nothing but straight, empty roadways, it will only ever see the world as a series of straight, empty roads. Instead, diverse, realistic data incorporating twists, turns, hills, angles, trucks, cars, bicycles, pedestrians, and so on should be included.

Once a network can recognize objects, it is critical to assign distance values to them. If a system has ranging sensors, such as LIDAR and RADAR, these can be used to measure the distance to recognized objects. However, it is also possible for a system of cameras to calculate, rather than measure, distances to recognized objects.

One method for this is known as binocular (stereo) vision. In short, two camera perspectives of the same object are compared, and the slight shift in the object's position between the two views is used to calculate its distance. This is how humans judge the distances of objects they see, since the two eyes view objects from slightly different perspectives.
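A standard geometric relationship underlies this: the distance is proportional to the camera spacing and focal length, and inversely proportional to the pixel shift (disparity) between the two views. The hypothetical sketch below applies that relationship with assumed, illustrative values.

```python
def stereo_distance_m(focal_length_px, baseline_m, disparity_px):
    """Distance from the camera pair to the object, in meters.

    focal_length_px: camera focal length expressed in pixels
    baseline_m: spacing between the two cameras
    disparity_px: how far the object shifts between the two images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("object must appear shifted between the two views")
    return focal_length_px * baseline_m / disparity_px

# Example: a 1000-pixel focal length, cameras 30 cm apart, and a 12-pixel shift
# imply an object roughly 25 meters ahead.
print(stereo_distance_m(1000.0, 0.30, 12.0))  # 25.0
```

Note that the closer the object, the larger the disparity, which is why stereo distance estimates tend to be most precise at short range.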

Another distance determination method relies on a single camera perspective, but requires relative motion between the camera and the object for its distance to be approximated. The same strategy is used by some animals to judge the distance of predators in nature.
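One heavily simplified way to express this motion-parallax idea: for a stationary object off to the side of a moving vehicle, the faster the object appears to sweep across the camera's view, the closer it must be. The sketch below applies that relationship with assumed, illustrative numbers; real systems handle far more general geometry.

```python
import math

def parallax_distance_m(vehicle_speed_m_s, angular_rate_deg_s):
    """Rough distance to a stationary object roughly perpendicular to the
    direction of travel: distance = speed / angular rate of apparent motion."""
    angular_rate_rad_s = math.radians(angular_rate_deg_s)
    return vehicle_speed_m_s / angular_rate_rad_s

# Example: at 20 m/s, a roadside sign drifting across the view at about
# 57 degrees per second (~1 radian/s) is roughly 20 meters away.
print(round(parallax_distance_m(20.0, 57.3), 1))
```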

Once a vehicle can recognize its surroundings, it must also know where it is currently located. A Global Positioning System (GPS) receiver can directly measure the location of a vehicle. That location can also be traced over time to build a record of the pathways vehicles typically travel, and those previous pathways can then be used to predict where a vehicle should be traveling.

Additionally, if a particular neighborhood or roadway has been well documented by the GPS tracking of other vehicles (or by LIDAR-equipped test vehicles), it may be an acceptable area of travel for a self-driving car. GPS can therefore be used to restrict autonomous operation to those well-mapped areas, a technique known as "Geofencing."
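A minimal sketch of the idea, assuming a hypothetical rectangular latitude/longitude boundary (a real geofence would likely be a detailed polygon or map region), might look like this:

```python
# Hypothetical "approved" operating area expressed as a latitude/longitude box.
GEOFENCE = {
    "lat_min": 42.03, "lat_max": 42.09,
    "lon_min": -88.05, "lon_max": -87.98,
}

def inside_geofence(lat, lon, fence=GEOFENCE):
    """Return True if the GPS fix falls inside the approved area."""
    return (fence["lat_min"] <= lat <= fence["lat_max"]
            and fence["lon_min"] <= lon <= fence["lon_max"])

# A vehicle might allow autonomous operation only while this check passes.
print(inside_geofence(42.06, -88.01))   # inside the mapped area -> True
print(inside_geofence(41.88, -87.63))   # outside the mapped area -> False
```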

No system is impervious to disturbance, and the hazards of driving are broad and diverse. One type of disturbance comes in the form of sensor blockage. Whether it be snow, mud, or even large insects, it is possible for RADARs, LIDARs, or cameras to get blocked momentarily. Therefore, it is critical for an autonomous vehicle to be well equipped with a diverse array of overlapping sensors, so one sensor can make up for another in times of need. More robust sensor suites with stronger overlap can eventually circumvent the challenge of sensor blockage.

In any vehicle accident, it is important to understand if a vehicle malfunction contributed to or caused the accident. Similarly, in accidents involving ADAS equipped vehicles, an evaluation of the many systems, sensors, and components will be important in understanding crash causation. Therefore, it is beneficial if a vehicle records electronic data from its sensors and specifies whether or not the ADAS technology was in control of the vehicle at the time of the crash.
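As a purely illustrative example of the kind of record that would be useful in such an evaluation, the sketch below defines one hypothetical data snapshot; the parameters actually recorded, and their formats, vary by manufacturer and are not specified here.

```python
from dataclasses import dataclass

@dataclass
class CrashDataRecord:
    """One hypothetical snapshot from a vehicle's electronic data recording."""
    timestamp_s: float       # seconds relative to the trigger event (negative = before)
    speed_m_s: float         # vehicle speed
    adas_engaged: bool       # was the driver-assistance system controlling the vehicle?
    brake_applied: bool      # was the service brake applied?
    nearest_object_m: float  # closest object reported by the sensor suite

# A snapshot from half a second before a hypothetical impact.
snapshot = CrashDataRecord(-0.5, 18.2, True, False, 6.4)
print(snapshot)
```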

Full self-driving technology will not be implemented all at once. Instead, its capabilities are being added piece by piece as ADAS features such as adaptive cruise control, blind spot monitoring, forward collision warning, active lane keep (which keeps a vehicle from drifting out of its lane), and limited self-driving operation that functions only while the driver is engaged and paying attention. As these features evolve, drivers can become more familiar with self-driving techniques.

However, it cannot be assumed that pedestrians or other drivers are familiar with the specific features of a given vehicle. With this in mind, it is difficult to anticipate how pedestrians and other drivers will react to the actions of an autonomous vehicle. For instance, if an autonomous vehicle and a human-operated vehicle are attempting to park in the same spot, there may be miscommunications that occur as one vehicle tries to anticipate the decisions of the other.

Regardless, many companies are pursuing the goal of self-driving, an effort that will increasingly be shaped by artificial intelligence. With redundant sensors and ever-improving processing times, autonomous vehicles have immense potential and have already demonstrated safe operation under certain conditions. As more vehicle automation takes place, more data can be fed into neural networks, creating ever-improving vehicle systems that may ultimately surpass the capabilities of human-operated vehicles. The legal, ethical, and technical challenges will be pervasive, but there are certainly many safety advantages to consider as autonomous vehicles become more prevalent in the future. - (SEA Ltd.)