The Adaptive Compute Acceleration Platform (ACAP) will utilize low-latency, power-efficient FPGA/SoC solutions.
By Murray Slovick, Contributing Editor
Xilinx and Daimler AG, parent company of Mercedes-Benz, are collaborating on an in-car system that uses Xilinx technology (FPGAs, system-on-chip (SoC) devices, and acceleration software) for artificial-intelligence (AI) processing in automotive applications.
“Through this strategic collaboration, Xilinx is providing technology that will enable us to deliver very low latency and power-efficient solutions for vehicle systems which must operate in thermally constrained environments,” says Georges Massing, Director, User Interaction & Software, Daimler AG.
As part of the strategic collaboration, deep-learning experts from the Mercedes-Benz Research and Development centers in Sindelfingen, Germany, and Bangalore, India, will be implementing their AI algorithms on what’s described by Willard Tu, Senior Director for Automotive at Xilinx, as a “highly adaptable automotive platform” that offers “a high level of flexibility for innovation in deploying neural networks for intelligent vehicle systems.”
A neural network is a computer system modeled loosely on the human brain and nervous system. Data flows through a series of layers: at each layer, input values are multiplied by learned weights, summed, and passed on to the next layer, typically after an activation function is applied.
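The layer computation described above can be sketched in a few lines of plain Python. The function and variable names (`forward_layer`, `relu`) are illustrative, not drawn from any Xilinx toolchain:

```python
# Minimal sketch of one neural-network layer: each neuron multiplies the
# inputs by its weights, sums the products, adds a bias, and applies an
# activation function before the result moves on to the next layer.

def relu(x):
    """Common activation: pass positives through, clamp negatives to zero."""
    return x if x > 0.0 else 0.0

def forward_layer(inputs, weights, biases):
    """Compute one layer's outputs from the previous layer's values."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(i * w for i, w in zip(inputs, neuron_weights))
        outputs.append(relu(total + bias))
    return outputs

# Two input values feeding a layer of three neurons.
inputs = [0.5, 1.0]
weights = [[0.2, 0.8], [-0.5, 0.3], [1.0, 1.0]]
biases = [0.1, 0.0, 0.2]
print(forward_layer(inputs, weights, biases))
```

In a deep network this step is repeated for every layer, which is why the workload reduces largely to multiply-accumulate operations — exactly the kind of arithmetic that FPGA DSP blocks accelerate well.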
Automakers like an FPGA platform’s scalability and adaptability over time, such as the ability to upgrade hardware features over the air (OTA). Earlier this year, Xilinx launched its Adaptive Compute Acceleration Platform (ACAP), a highly integrated multicore heterogeneous compute platform that can be changed at the hardware level to adapt to the needs of different applications and workloads. This adaptation can be performed dynamically during operation, delivering levels of performance and performance-per-watt that the company says are unmatched by CPUs or GPUs.
Xilinx says its ACAP is well-suited to accelerating applications that have emerged in the era of big data and artificial intelligence, including video transcoding, databases, data compression, search, AI inference, machine vision, computational storage, and network acceleration.
Software and hardware developers can design ACAP-based products for end-point, edge, and cloud applications. The first ACAP product family, codenamed “Everest,” will be developed in TSMC 7-nm process technology.
ACAP has at its core a new generation of FPGA fabric with distributed memory and hardware-programmable DSP blocks, a multicore SoC, and one or more software-programmable, yet hardware-adaptable, compute engines, all connected through a network on chip (NoC). It also features integrated, programmable I/O functionality ranging from hardware-programmable memory controllers, advanced SerDes technology, and leading-edge RF-ADCs/DACs to integrated high-bandwidth memory (HBM), depending on the device variant.
According to Xilinx, software developers will be able to target ACAP-based systems using languages and frameworks such as C/C++, OpenCL, and Python. An ACAP can also be programmed at the RTL level using FPGA tools.
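The software-centric flow described above typically follows a host/kernel offload pattern: a host program hands data to an accelerated kernel and reads back the results. The following Python sketch illustrates that pattern only in shape; `multiply_add_kernel` and `run_on_accelerator` are hypothetical stand-ins, not Xilinx APIs, and in a real flow the kernel would be compiled for the device and invoked through a framework such as OpenCL:

```python
# Hypothetical host-side offload pattern. The names below are illustrative
# stand-ins; a real ACAP flow would enqueue a compiled hardware kernel via
# OpenCL or a vendor runtime rather than call a Python function.

def multiply_add_kernel(data, scale, offset):
    """Stand-in for a hardware kernel: elementwise multiply-add."""
    return [x * scale + offset for x in data]

def run_on_accelerator(kernel, *args):
    """Host wrapper: in a real flow this would transfer buffers to the
    device, launch the kernel, and wait for completion; here it simply
    calls the stand-in on the CPU."""
    return kernel(*args)

result = run_on_accelerator(multiply_add_kernel, [1.0, 2.0, 3.0], 2.0, 0.5)
print(result)  # → [2.5, 4.5, 6.5]
```

The appeal of this model is that the host code stays the same whether the kernel runs in software or on reconfigurable hardware, which is what lets developers iterate in C/C++, OpenCL, or Python without dropping down to RTL.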
Xilinx expects “Everest” to achieve a 20X performance improvement on deep neural networks compared with today’s latest 16-nm Virtex VU9P FPGA. ACAP has been under development for four years, with an accumulated R&D investment of more than one billion dollars (USD). More than 1,500 Xilinx hardware and software engineers are currently designing ACAP and Everest. Software tools have been delivered to key customers; Everest will tape out in 2018, with customer shipments beginning in 2019.