In the bid to take technology to the next level, AI is being incorporated into real-time systems, whether to solve existing problems such as real-time assessment and monitoring in the defence sector or to enable innovations such as self-driving cars, collaborative robotics and ultrasonic imaging.
However, such systems must also pre- and post-process input data, not just run AI inference. Fixed-architecture devices like GPUs are great at AI inference, but their pre- and post-processing performance does not keep pace. Building a custom data path is also an option for bypassing external memory access. For such real-time systems, an ACAP (adaptive compute acceleration platform) like the Xilinx Versal AI Edge allows custom data paths to be created that accelerate both AI and non-AI processing functions, offering the efficiency benefits of custom ASICs without expensive design cycles.
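To make the data-path idea concrete, here is a minimal sketch in plain Python (no vendor APIs) of how pre-processing, AI inference and post-processing can be chained as streaming stages so that intermediate results flow directly from one stage to the next instead of round-tripping through external memory. All stage names and functions below are hypothetical placeholders, not part of any Xilinx toolchain.

```python
# Conceptual sketch only: a streaming pipeline in which each stage feeds
# the next directly, mimicking a custom on-chip data path. The stages and
# their contents are invented for illustration.

from typing import Callable, List


def build_pipeline(stages: List[Callable]) -> Callable:
    """Compose stages so each output feeds the next stage directly."""
    def run(frame):
        for stage in stages:
            frame = stage(frame)
        return frame
    return run


# Hypothetical stages for a camera-based perception pipeline.
def preprocess(frame):            # e.g. resize / normalise on adaptable logic
    return [pixel / 255.0 for pixel in frame]

def infer(tensor):                # e.g. CNN inference on the AI Engines
    return {"score": sum(tensor) / len(tensor)}

def postprocess(result):          # e.g. thresholding / tracking on the CPU
    return result["score"] > 0.5


pipeline = build_pipeline([preprocess, infer, postprocess])
print(pipeline([12, 240, 128, 64]))   # one "frame" flowing end to end
```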
The image below represents the tasks performed by each element of Versal AI Edge in an automated-driving application.
Specifications of the Core
Xilinx Versal AI Edge is an SoC with scalar engines, adaptable engines, intelligent engines and adaptive hardware. The scalar engines are responsible for actuator control, cyber-security, safety control and user interfaces. They comprise a dual-core Arm Cortex-A72 application processing unit, a dual-core Arm Cortex-R5F real-time processor and 256 KB of on-chip memory with ECC. An additional feature is the accelerator RAM, 4 MB of on-chip memory that is accessible to all compute engines and removes the need for external memory for AI inference.
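A rough, purely illustrative calculation shows why 4 MB of on-chip accelerator RAM can be enough to keep a small edge model entirely on chip, which is the scenario in which external DDR access can be avoided. The layer sizes below are invented for the example and do not describe any real network or Xilinx reference design.

```python
# Back-of-envelope arithmetic only: does a small INT8 network's weight and
# activation footprint fit inside 4 MB of on-chip accelerator RAM?
# Layer dimensions are hypothetical.

ACCEL_RAM_BYTES = 4 * 1024 * 1024          # 4 MB accelerator RAM

# (weight bytes, largest activation buffer in bytes) per layer.
layers = [
    (32 * 3 * 3 * 3,   32 * 112 * 112),    # conv1
    (64 * 3 * 3 * 32,  64 * 56 * 56),      # conv2
    (128 * 3 * 3 * 64, 128 * 28 * 28),     # conv3
    (128 * 10,         10),                # fully connected head
]

total_weights = sum(w for w, _ in layers)
peak_activation = max(a for _, a in layers)
footprint = total_weights + peak_activation

print(f"weights:         {total_weights / 1024:.0f} KiB")
print(f"peak activation: {peak_activation / 1024:.0f} KiB")
print(f"fits in 4 MB?    {footprint <= ACCEL_RAM_BYTES}")
```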
The adaptable engines, consisting of system logic cells and LUTs, accelerate pre- and post-processing of data across the pipeline. They offer both parallelism and determinism, can integrate any sensor, connect to any interface and flexibly handle any workload. They use DFX technology, which allows functionality to be swapped in milliseconds by loading new functions from ROM chips external to the ACAP.
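The following sketch is a conceptual model of that functionality swap, not the real DFX tool flow: a reconfigurable region keeps serving requests while its function is replaced at runtime. The region and workload names are hypothetical.

```python
# Conceptual model only: swapping the function of a reconfigurable region
# at runtime while the rest of the system keeps running. In hardware this
# would involve loading a partial configuration image; here it is just a
# function swap.

class ReconfigurableRegion:
    """Stands in for a partition of the adaptable engine."""

    def __init__(self, name):
        self.name = name
        self.function = None

    def load(self, function):
        self.function = function
        print(f"{self.name}: loaded '{function.__name__}'")

    def process(self, data):
        return self.function(data)


def lane_detection(frame):     # hypothetical daytime workload
    return f"lanes({frame})"

def radar_clustering(scan):    # hypothetical workload swapped in later
    return f"clusters({scan})"


region = ReconfigurableRegion("pr_region_0")
region.load(lane_detection)
print(region.process("frame_001"))

region.load(radar_clustering)      # functionality swap at runtime
print(region.process("scan_042"))
```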
The company explains that the “Versal AI Edge series delivers over 4 times AI performance per unit watt vs leading GPUs for power- and thermally-constrained edge applications, accelerating the whole application from the sensor to AI to real-time control”.
The intelligent engines, consisting of AI Engine-ML, AI Engine and DSP Engine tiles, enable dynamic execution and predictive maintenance. The AI Engines are based on a scalable array of vector processors with distributed memory, while the DSP Engines are based on the proven slice architecture of previous-generation Zynq adaptive SoCs, with integrated floating-point support. They handle data analytics and model predictive control; overall, the intelligent engines are responsible for perception, object classification and path planning.
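The programming model behind such a tiled vector-processor array can be pictured with a small NumPy sketch: a matrix multiply is split into blocks, each block standing in for work assigned to one tile operating on data in its local memory, with partial results accumulated. This is plain NumPy for illustration, not the actual AI Engine tool flow.

```python
# Illustrative sketch of block-tiled computation, modelling how work and
# data can be partitioned across an array of tiles with local memory.

import numpy as np

TILE = 4                                   # hypothetical tile block size

def tiled_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    m, k = a.shape
    _, n = b.shape
    out = np.zeros((m, n))
    # Each (i, j, p) block models work assigned to one tile, operating
    # only on the sub-blocks it holds locally.
    for i in range(0, m, TILE):
        for j in range(0, n, TILE):
            for p in range(0, k, TILE):
                out[i:i+TILE, j:j+TILE] += (
                    a[i:i+TILE, p:p+TILE] @ b[p:p+TILE, j:j+TILE]
                )
    return out

a = np.random.rand(8, 8)
b = np.random.rand(8, 8)
assert np.allclose(tiled_matmul(a, b), a @ b)   # matches a direct matmul
```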
Versal AI Edge also has programmable I/O, so sensors, memory, network connectivity or other functions can be configured on the same device pins as needed, within the available pin budget. The devices are available in versions ranging from VE2002 to VE2802.
Summary of Versal AI Edge series
One of the main advantages of the Versal AI Edge series is the ability to accelerate both AI and non-AI processing functions by allowing custom data paths to be built. It can also implement the latest AI models quickly and efficiently because the hardware can be easily reconfigured. Its applications therefore include automated driving, collaborative robotics, UAVs and multi-mission payloads, and ultrasonic imaging.
For more details, please visit Versal AI Edge Series.

Hrushikesh Vazurkar is an alumnus of the CSE Department at VNIT Nagpur. He is passionate about competitive programming, web development and AI.