DRIVE PX can fuse data from up to 12 cameras, as well as lidar, radar, and ultrasonic sensors. This allows algorithms to accurately understand the full 360-degree environment around the car and produce a robust representation that includes both static and dynamic objects. Using deep neural networks (DNNs) for the detection and classification of objects dramatically increases the accuracy of the resulting fused sensor data.
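The association step at the heart of such fusion can be illustrated with a toy example: detections from different sensors are projected into a common vehicle frame, paired by proximity, and combined. This is a minimal sketch of the idea (the `Detection` struct, gating distance, and position averaging are illustrative assumptions), not the DRIVE PX API:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical detection in a common vehicle frame (meters).
struct Detection { double x, y; };

// Naive fusion: pair each camera detection with the nearest radar
// detection inside a gating distance and average the positions.
// Production fusion stacks use tracking filters (e.g. Kalman filters);
// this only illustrates the association step.
std::vector<Detection> fuse(const std::vector<Detection>& cam,
                            const std::vector<Detection>& radar,
                            double gate = 2.0) {
    std::vector<Detection> fused;
    for (const auto& c : cam) {
        const Detection* best = nullptr;
        double bestDist = gate;
        for (const auto& r : radar) {
            double d = std::hypot(c.x - r.x, c.y - r.y);
            if (d < bestDist) { bestDist = d; best = &r; }
        }
        if (best)
            fused.push_back({(c.x + best->x) / 2.0, (c.y + best->y) / 2.0});
    }
    return fused;
}
```

A camera detection at (10.0, 0.0) and a radar return at (10.4, 0.2) fall within the gate and fuse into a single object at their midpoint.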
DRIVE PX platforms are built around deep learning and include a powerful framework (Caffe) to run DNN models designed and trained on NVIDIA DIGITS™. DRIVE PX also includes an advanced computer vision (CV) library and primitives. Together, these technologies deliver robust object detection and tracking.
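As a small illustration of the classification side of such a pipeline, a network's raw output scores are typically converted into class probabilities with a softmax before the top class is selected. The sketch below shows only that generic post-processing step; it is not Caffe or DRIVE PX code:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Convert raw DNN output scores (logits) into class probabilities
// with a numerically stable softmax (subtracting the max logit
// before exponentiating avoids overflow).
std::vector<double> softmax(const std::vector<double>& logits) {
    double m = *std::max_element(logits.begin(), logits.end());
    std::vector<double> p(logits.size());
    double sum = 0.0;
    for (std::size_t i = 0; i < logits.size(); ++i) {
        p[i] = std::exp(logits[i] - m);
        sum += p[i];
    }
    for (double& v : p) v /= sum;
    return p;
}

// Index of the most probable class.
std::size_t argmax(const std::vector<double>& p) {
    return std::max_element(p.begin(), p.end()) - p.begin();
}
```

For example, scores of {1.0, 3.0, 0.5} over three hypothetical classes resolve to the second class as the most probable.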
DRIVE PX features a surround view solution that delivers a seamless 360-degree view around the car. It captures, processes, and stitches together multiple HD camera inputs. DRIVE PX then applies sophisticated structure-from-motion (SFM) and advanced stitching for better image rendering and reduced “ghosting” — for example, where a line on the pavement would otherwise appear in two places at once.
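The seam-blending part of stitching can be sketched in isolation: pixels in the overlap between two adjacent, already-warped camera strips are mixed with a linear feathering ramp so the seam fades rather than doubles. The function below is an illustrative toy operating on one row of grayscale values, not the DRIVE PX stitching pipeline:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Feathered (linear-ramp) blending of the overlap between two
// adjacent camera strips: the left image's weight falls from 1 to 0
// across the overlap while the right image's weight rises from 0 to 1.
// Real surround-view stitching also warps images using calibration
// and structure-from-motion data; this shows only the seam blend.
std::vector<double> blendOverlap(const std::vector<double>& left,
                                 const std::vector<double>& right) {
    assert(left.size() == right.size());
    std::size_t n = left.size();
    std::vector<double> out(n);
    for (std::size_t i = 0; i < n; ++i) {
        double w = (n > 1) ? static_cast<double>(i) / (n - 1) : 0.5;
        out[i] = (1.0 - w) * left[i] + w * right[i];
    }
    return out;
}
```

Blending a bright strip (all 100) with a dark strip (all 0) over a three-pixel overlap yields {100, 50, 0}: a smooth transition instead of a hard seam.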
The DRIVE PX software stack includes:
- Linux kernel 4.4.38 with RT_PREEMPT patches
- Support for 64-bit user space and runtime libraries
- NvMedia APIs for hardware-accelerated multimedia and camera input processing
- NVIDIA CUDA 8.0 parallel computing platform
- Graphics APIs:
- OpenGL 4.5
- OpenGL ES 3.2
- EGL 1.5 with EGLStream extensions
- Ubuntu 16.04 LTS target Root File System (RFS)
Powered by NVIDIA’s fastest SoCs and leveraging the same GPU architecture as the world’s most powerful supercomputers, DRIVE PX enables self-driving applications to be developed faster and with greater accuracy.
Key features of the platform include:
- Dual NVIDIA Tegra® Parker processors delivering a combined 24 trillion deep learning operations per second
- Interfaces for up to 12 cameras, radar, lidar, GPS/IMU and ultrasonic sensors
- Rich middleware for graphics, computer vision, and deep learning
- Periodic software/OS updates
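Taking the quoted 24 trillion deep learning operations per second at face value, a back-of-envelope calculation shows the per-frame compute budget if that throughput were split evenly across 12 cameras. The 30 fps frame rate is an assumption for illustration only, not a platform specification:

```cpp
#include <cassert>

// Back-of-envelope budget: divide the platform's aggregate deep
// learning throughput evenly across the cameras, then by an assumed
// frame rate, to estimate the operations available per camera frame.
long long opsPerFrame(double teraOpsTotal, int cameras, int fps) {
    double perCameraOpsPerSec = teraOpsTotal * 1e12 / cameras;
    return static_cast<long long>(perCameraOpsPerSec / fps);
}
```

At 24 trillion ops/s, 12 cameras, and 30 fps, each camera frame would have roughly 6.7 × 10^10 operations available — comfortably enough for a per-frame DNN inference pass.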