
One goal of machine learning during the autonomous vehicle (AV) development phase should be to specify sensor suites that deliver actionable data at the right time and at the right level of complexity, enabling timely and efficient driving decisions.

A previous article argued that AV players (such as Waymo, Uber, Aurora, Cruise, Argo, and Yandex) chose to control and own LiDAR sensor technology to ensure tighter coupling with the AI software stack.

The basic idea is to use camera and pixel architectures that detect changes in light intensity above a threshold (an event) and provide only this data to the compute stack for further processing.
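As a rough illustration of this thresholding idea, the sketch below simulates event generation in plain Python; the contrast threshold and frame interface are illustrative assumptions, not any specific vendor's pipeline:

```python
import numpy as np

# Assumed contrast threshold in log-intensity units (illustrative value).
THRESHOLD = 0.2

def generate_events(prev_log_frame: np.ndarray, curr_log_frame: np.ndarray):
    """Return (row, col, polarity) tuples for pixels whose log-intensity
    changed by more than THRESHOLD between two frames."""
    diff = curr_log_frame - prev_log_frame
    rows, cols = np.nonzero(np.abs(diff) > THRESHOLD)
    polarities = np.sign(diff[rows, cols]).astype(int)  # +1 brighter, -1 darker
    return list(zip(rows.tolist(), cols.tolist(), polarities.tolist()))

# Usage: only the changed pixels reach the compute stack, not full frames.
prev = np.log1p(np.random.rand(4, 4))
curr = prev.copy()
curr[1, 2] += 0.5  # simulate a large brightness change at one pixel
print(generate_events(prev, curr))  # -> [(1, 2, 1)]
```

The payoff is bandwidth: a static scene produces almost no events, so the compute stack processes sparse changes rather than full frames.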

One LiDAR company promotes IDAR™ (Intelligent Detection and Ranging), which uses Time of Flight (ToF) techniques to extract depth and intensity information from the scene.
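The depth measurement itself follows from simple physics: a laser pulse's round-trip travel time converts directly to range. The snippet below is a minimal illustration of that standard calculation, not vendor code:

```python
# Range = (speed of light * round-trip time) / 2, since the pulse
# travels out to the target and back.
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_time_s: float) -> float:
    """Distance to a target from a laser pulse's round-trip time."""
    return C * round_trip_time_s / 2.0

# A return arriving after ~667 ns corresponds to a target ~100 m away.
print(f"{tof_range_m(667e-9):.1f} m")  # ~100.0 m
```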

The sensor's scanning decisions are guided by information from the LiDAR itself or from other sensors, such as a high-resolution camera, combined with intelligence (the "I" in IDAR™).
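To make the idea concrete, here is a hypothetical sketch of camera-guided scan prioritization; every type, field, and parameter in it is an illustrative assumption, not the vendor's actual interface:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    azimuth_deg: float   # bearing of the camera-detected object
    confidence: float    # detector confidence in [0, 1]

def prioritize_sectors(detections, n_sectors=36):
    """Assign each 10-degree sector a scan priority from camera detections."""
    priorities = [1.0] * n_sectors  # baseline: uniform scan priority
    for d in detections:
        sector = int(d.azimuth_deg % 360 // (360 / n_sectors))
        priorities[sector] += d.confidence  # boost sectors with detections
    return priorities

# A pedestrian seen near 45 degrees boosts that sector's revisit rate.
print(prioritize_sectors([Detection(azimuth_deg=45.0, confidence=0.9)])[4])
# -> 1.9
```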

Outsight’s real-time software works for processing past and current data raw point clouds intelligently at the edge to generate a semantic understanding of the scene.

The semantic information that supports this short-term decision making rests on an on-chip SLAM (Simultaneous Localization and Mapping) approach, which uses past and present raw point-cloud data to produce relevant, actionable point clouds and object detections.
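As a toy illustration of fusing past and present scans (a deliberate simplification, not Outsight's actual on-chip algorithm), the sketch below transforms a scan into a shared world frame using a SLAM-estimated pose and flags points that the accumulated map cannot explain as likely moving objects:

```python
import numpy as np

def to_world(points_xyz: np.ndarray, rotation: np.ndarray, translation: np.ndarray):
    """Apply a rigid-body pose (as estimated by SLAM) to an (N, 3) scan."""
    return points_xyz @ rotation.T + translation

def flag_dynamic(map_points: np.ndarray, scan_world: np.ndarray, radius=0.5):
    """Mark scan points farther than `radius` from every accumulated map point."""
    # Brute-force nearest-neighbor check; real systems use spatial indexing.
    dists = np.linalg.norm(scan_world[:, None, :] - map_points[None, :, :], axis=2)
    return dists.min(axis=1) > radius

# Usage: a point absent from the accumulated map is flagged as dynamic.
map_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
scan = np.array([[1.0, 0.05, 0.0], [5.0, 5.0, 0.0]])  # identity pose here
print(flag_dynamic(map_pts, to_world(scan, np.eye(3), np.zeros(3))))
# -> [False  True]
```

Separating the static map from dynamic points is what lets the accumulated history sharpen object detection rather than blur it.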
