An ADAS vision system must handle imaging, high-speed serial communications, and downstream processing functions, which demands new, highly integrated architectures.
By Paul Pickering, Contributing Editor
According to the National Safety Council (NSC), almost 19,000 people died in motor-vehicle crashes in the U.S. in the first half of 2017—a death rate of around 1.2 deaths per 100 million vehicle miles. The causes include speeding, driving while impaired, and texting while driving. Factoring in the 2.14 million people injured, the total costs—including medical expenses, lost wages and productivity, and property damage—exceeded $191 billion over the same period.
To help improve road safety and satisfy increasingly stringent government regulations, automakers are adding a range of diverse technologies to their new models that help drivers to avoid accidents, both at high speeds and when backing up or parking (Fig. 1). These systems can be grouped into the category of advanced driver-assistance systems (ADAS). Beyond increasing safety, ADAS applications also improve comfort, convenience, and energy efficiency.
Typical ADAS features include blind-spot and lane-departure warning, forward collision and rear cross-traffic warning, automatic emergency braking, lane-keep assist, and adaptive cruise control.
The Society of Automotive Engineers (SAE) has defined six levels of vehicle automation (Fig. 2). Level 0 has no automation, but driver assistance and ADAS play a key role as automation increases, culminating in Level 5—fully autonomous vehicles.
The Role of Vision Systems in ADAS
ADAS requires many functional blocks: sensors that capture information about the surrounding environment; integrated circuits (ICs) for communication; high-performance microprocessors (MPUs) or digital signal processors (DSPs) to analyze the sensor data; and microcontrollers (MCUs) to activate and control mechanical functions.
Machine vision and image processing is an important component in a set of complementary sensing technologies that includes ultrasonic, radar, and LIDAR. Each one has its place in a comprehensive ADAS design.
The vision component within ADAS is well-suited for detecting objects such as vehicles, pedestrians, or stationary obstacles, or detecting and reading traffic signs. Since an unexpected object or sign can appear at any time, the vision system must operate over the whole performance envelope of the vehicle, from the parking garage to the highway.
A number of tasks are best handled by other sensing technologies. Rain, snow, or fog pose problems for a vision system. So does a dirty or muddy environment, where the camera lens might become obscured.
Increasing Data Rates Require New Vision-System Architectures
ADAS requires a high level of performance from the vision system at all SAE levels beyond zero.
In an autonomous vehicle, the image quality must be high enough to decipher road signs and differentiate between objects and their surroundings. The bar is being raised for human-driven vehicles, too, with the advent of camera monitoring systems (CMS) that replace rear- and side-view mirrors with image sensors located around the vehicle. CMS benefits include eliminating blind spots and reducing glare. Removing the side mirrors also improves fuel efficiency by lowering air resistance.
Both of these applications use image sensors that can produce color images of 1 or 2 megapixels (MP) with update rates up to 60 frames per second (fps). A high-dynamic-range (HDR) sensor may be required to produce an image that matches the quality of an optical mirror. Such a sensor often outputs multiple images for the same frame with different exposure values; these must then be combined to build up a composite HDR image. Compared to earlier generations, such new sensor architectures require significantly higher computational performance from image-signal-processing (ISP) units and application processors.
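A quick back-of-envelope calculation shows why these sensors stress the downstream links. The resolution, bit depth, and exposure count below are illustrative assumptions, not figures from any particular sensor's datasheet:

```python
# Back-of-envelope link load for a ~2-MP color sensor at 60 fps.
# All figures here are illustrative assumptions.
pixels = 1920 * 1080      # ~2-MP active array
bits_per_pixel = 12       # RAW12 output, common for HDR-capable sensors
fps = 60

raw_bps = pixels * bits_per_pixel * fps
print(f"Raw pixel data: {raw_bps / 1e9:.2f} Gb/s")

# Multi-exposure HDR capture multiplies the data each frame carries
# before the exposures are fused downstream.
hdr_exposures = 2
print(f"{hdr_exposures}-exposure HDR: {raw_bps * hdr_exposures / 1e9:.2f} Gb/s")
```

Even before protocol overhead, a single such camera approaches 1.5 Gb/s, and multi-exposure HDR capture multiplies that further—well beyond what earlier parallel sensor interfaces were designed to carry.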
ADAS Vision System Architecture
As shown in Fig. 1, assembling a 360-degree image requires transmitting data from an array of remote image sensors or cameras to a central core over a high-speed communication network.
This range of tasks demands a highly integrated design in which each functional block is optimized to maximize throughput. Figure 3 shows the block diagram of an ADAS vision system with four cameras. The figure also shows the direction of data, control, and power for an FPD-Link III implementation.
The design contains the following functional blocks: a serializer, a deserializer hub, an image signal processor, and an applications processor. Let’s review the operation of each of these blocks.
The serializer transmits the serialized image data, receives node power, and exchanges command information with the central unit.
Although some image sensors output data in parallel form, the most widely used sensor interface is the MIPI Alliance’s CSI-2. This standard can support a broad range of high-performance applications, including high-resolution photography and 1080p, 4K, or 8K video.
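Each CSI-2 long packet begins with a data identifier (DI) byte carrying a 2-bit virtual-channel number and a 6-bit data type, which is how multiple cameras and formats can share one link. A minimal sketch of the DI packing (the RAW12 data-type code shown follows the published CSI-2 convention; consult the specification for your sensor's actual format):

```python
def make_data_identifier(virtual_channel: int, data_type: int) -> int:
    """Pack a CSI-2 data identifier byte: VC in bits 7:6, DT in bits 5:0."""
    assert 0 <= virtual_channel <= 3, "CSI-2 DI byte carries a 2-bit VC"
    assert 0 <= data_type <= 0x3F, "data type is a 6-bit field"
    return (virtual_channel << 6) | data_type

DT_RAW12 = 0x2C  # CSI-2 data-type code for RAW12 pixel data

di = make_data_identifier(virtual_channel=2, data_type=DT_RAW12)
print(hex(di))  # 0xac
```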
The serializer interfaces to the main control units using FPD-Link (short for flat-panel display link), a popular choice for networking high-speed digital video. FPD-Link III, the latest version, replaces the low-voltage differential-signaling (LVDS) technology used in earlier generations with current-mode logic (CML). This enables it to transmit data at rates greater than 3 Gb/s over cables of 10 meters or longer.
FPD-Link III is a power-over-coax (PoC) implementation. It bundles power, high-speed video, and low-speed bidirectional communication together on one coaxial cable. The bidirectional channel transfers control signals between source and destination. This reduces cost by eliminating separate cables for power and control.
The DS90UB953-Q1, for example, can support 2-Mpixel/60-fps or 4-Mpixel/30-fps CSI-2 cameras. The transceiver delivers a 4.16-Gb/s forward (video) channel, combined with an ultra-low-latency, 50-Mb/s bidirectional control channel. The DS90UB933-Q1 FPD-Link III serializer can accommodate a 1-Mpixel/60-fps or 2-Mpixel/30-fps camera with a 10- or 12-bit parallel interface running at 100 MHz. Figure 4 compares the sensor interfaces of the two devices.
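The two supported camera modes are bandwidth-equivalent: 2 Mpixels at 60 fps and 4 Mpixels at 30 fps produce the same pixel rate. A rough link-budget sketch, assuming RAW12 data and an illustrative 25% allowance for blanking and framing overhead (not a datasheet figure):

```python
def required_gbps(megapixels: float, bits_per_pixel: int, fps: int,
                  overhead: float = 1.25) -> float:
    """Approximate serial-link load in Gb/s. The overhead factor is an
    assumed allowance for blanking and protocol framing."""
    return megapixels * 1e6 * bits_per_pixel * fps * overhead / 1e9

FORWARD_CHANNEL_GBPS = 4.16  # DS90UB953-Q1 forward (video) channel

for mp, fps in [(2, 60), (4, 30)]:
    load = required_gbps(mp, 12, fps)
    fits = "fits" if load <= FORWARD_CHANNEL_GBPS else "exceeds link"
    print(f"{mp} MP @ {fps} fps: {load:.2f} Gb/s ({fits})")
```

Both modes land well inside the 4.16-Gb/s forward channel under these assumptions, which is why the device quotes them interchangeably.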
The deserializer hub converts the FPD-Link III data streams from several serializers back into CSI-2 format; multiplexes them into one or more output data streams; supplies power to the serializers; provides frame synchronization via a master oscillator; and relays bidirectional command information. A deserializer performs the same functions for a single FPD-Link III data stream.
Figure 5 shows the DS90UB964-Q1 deserializer hub interfacing with four independent serializers and multiplexing the data into a single CSI-2 byte stream, as the CSI-2 standard allows. The device features an adaptive equalizer that corrects for degradation of signal quality due to cable transmission losses.
When coupled with compatible serializers such as the DS90UB913A-Q1 or DS90UB933-Q1, the DS90UB964-Q1 receives and aggregates data from up to four 1-Mpixel image sensors supporting 720p/800p/960p resolution at 30 or 60 fps.
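The aggregation scheme can be modeled in miniature: each camera's packets are tagged with a distinct virtual-channel ID on the shared stream, and the receiver routes them back into per-camera queues. This toy model (hypothetical function names, strings standing in for packets) only illustrates the tagging idea; the real hub interleaves per line or per packet in hardware:

```python
from collections import defaultdict

def aggregate(camera_frames: dict) -> list:
    """Tag each camera's packets with its virtual-channel ID and place
    them on one shared stream (toy model of the hub's multiplexed output)."""
    stream = []
    for vc, frames in camera_frames.items():
        for frame in frames:
            stream.append((vc, frame))
    return stream

def demultiplex(stream: list) -> dict:
    """Route packets back to per-camera queues by virtual channel."""
    cameras = defaultdict(list)
    for vc, payload in stream:
        cameras[vc].append(payload)
    return dict(cameras)

# Four cameras, each contributing frames to one CSI-2 output stream.
stream = aggregate({0: ["f0a", "f0b"], 1: ["f1a"], 2: ["f2a"], 3: ["f3a"]})
print(demultiplex(stream))
```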
The device also includes a second CSI-2 output port to provide additional bandwidth or a replicated output. A replicated output can feed the raw image data to a data logger for offline analysis, and the aggregated output from the deserializer is a convenient source for it.
Texas Instruments offers a broad portfolio of FPD-Link devices for ADAS vision applications. Figure 6 shows a breakdown of the different products and their feature sets.
Image Signal Processor
The image signal processor (ISP) processes the video data streams to improve picture quality. For example, the ISP combines images captured with different exposures into single frames and applies local tone mapping to produce HDR images that are pleasing to the eye, filling dark areas from long-exposure pixels and bright areas from medium- or short-exposure pixels. As a result, the video stream has an extended dynamic range. Other ISP pixel-processing functions include defect correction, noise filtering, gamma correction, and geometric distortion correction.
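The exposure-combining step can be sketched in a few lines. This naive per-pixel merge is illustrative only (a real ISP blends smoothly, compensates for motion, and tone-maps the result); it substitutes scaled short-exposure pixels wherever the long exposure has clipped:

```python
def fuse_exposures(long_exp, short_exp, gain=16, sat=4095):
    """Naive HDR merge of two 12-bit exposures: keep the long-exposure
    pixel unless it has saturated, in which case substitute the
    short-exposure pixel scaled by the assumed exposure ratio (gain)."""
    fused = []
    for lp, sp in zip(long_exp, short_exp):
        if lp >= sat:                # long exposure clipped in bright areas
            fused.append(sp * gain)  # recover detail from short exposure
        else:
            fused.append(lp)         # dark areas resolved by long exposure
    return fused

# Dark pixel kept from the long exposure; clipped bright pixel
# recovered from the short exposure.
print(fuse_exposures([120, 4095], [8, 900]))  # -> [120, 14400]
```

The fused values exceed the 12-bit input range, which is why the result must then be tone-mapped back into the display's dynamic range.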
As data rates increase, designers are separating the ISP block from the image sensor to limit the camera’s power dissipation and heat generation, and help improve image quality. The ISP then becomes a standalone device or is integrated into the vision processor. With advanced HDR capabilities, ISPs can process multiple camera streams concurrently, reducing the number of devices needed in the system.
Applications Processor
The final block in the vision processing chain is the applications processor, which analyzes and interprets the formatted data.
Texas Instruments has a large number of options to help in this task. The Jacinto TDAx system-on-chip (SoC) family, for example, is a scalable portfolio of devices for ADAS and other imaging applications, including night vision, multi-range radar, and sensor fusion systems.
TDAx family members (Fig. 7) include devices with both fixed and floating-point digital signal processors (DSPs), ARM Cortex-A15 or dual Cortex-M4 cores, and embedded-vision-engine (EVE) coprocessors optimized for vision processing.
There’s also a complete set of development tools for the ARM cores, DSP, and EVE, including C compilers, a DSP assembly optimizer, and a debugging interface.
Advanced driver-assistance systems will assume increasing importance in the coming years. The global ADAS market is growing at a CAGR of 23% and is forecast to reach $60 billion by 2020, with vision systems making up a key segment.
An ADAS vision system requires a diverse mix of components to handle imaging, high-speed serial communications, and downstream processing functions. Increasing requirements and higher speeds are driving new, highly integrated vision architectures. Texas Instruments’ product portfolio and applications assistance help simplify the design of these complex systems.