Teraki provides leading software for AI-based sensor data pre-processing at the edge. Stemming from quantum computing research, the company was founded in 2015 and is headquartered in Berlin, with offices in Japan. Teraki works with leading automotive and robotics OEMs to lower the cost of more reliable L2-L4 functionalities.
Teraki's lightweight software selects the relevant information from large volumes of sensor data (video, radar, and lidar/3D point cloud) at the edge up to 10x more efficiently, demonstrably leading to 20% safer and more reliable autonomous vehicles (cars, robots, forklifts) without relying on expensive GPUs for that performance increase. Teraki SDKs run on low-power, automotive production-grade hardware that meets ASIL B and ASIL D requirements. By running on Synopsys' ARC EV and ARC VPX DSP processors, Teraki improves the quality of data ingested by downstream machine learning accelerators, reducing the amount of data entering the AI chipset and lowering the overall computational power required to process sensor data.
Edge Sensor Processing SDKs
A smarter, more efficient selection of information on optimized chip architectures brings higher accuracy for AI models while reducing the computational power required to process sensor data, ultimately delivering lower latencies for autonomous driving applications.
Teraki embedded device SDKs cover a wide range of sensor data: video, radar, and lidar.
- Average SDK RAM requirement is less than 1 MB, with a 10x increase in CPU efficiency.
- High performance at low processing power (typically an embedded ASIL CPU).
- Compared with state-of-the-art products, this delivers 10x to 150x faster inference and cuts training time from 3 days to as little as 10 minutes.
Teraki radar SDKs overcome the computational challenges of radar signals through cutting-edge AI, leading to higher accuracy, improved detection rates, and safer autonomous driving applications.
- Detects 20% more objects with 20% better classification (F1 scores).
- Lightweight (3 KB) model able to run on low-power hardware with lower Ethernet bandwidth requirements (100-500 Mbps instead of 1 Gbps).
- Real-time processing performance (10-20 fps) on Synopsys ARC processors.
Teraki’s processing pipeline delivers up to 20% less degradation than standard processing pipelines (including FFT, quantization, and neural networks) and requires only CPUs, not GPUs. Customers can train ROI (Region of Interest) and TOI (Time of Interest) models in camera SoCs, classify objects such as sky, cars, people, and faces in real time, and also identify events (e.g., lane changes). The video acquisition rate can be adapted dynamically.
- Maintains high video quality while saving 5x bandwidth and 6.5x RAM compared to ffmpeg.
- Runs on 8-bit integer HD input, with all calculations in 8-bit integer arithmetic at 20 fps using 0.5 TOPS.
- AI models deliver a 10-30% improvement in per-frame object detection and classification rates.
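A pre-processing chain of the kind mentioned above (an FFT followed by quantization before data reaches a neural network) can be sketched generically as follows. This is an illustrative NumPy sketch, not Teraki's SDK API; the function names and the 8-bit scaling scheme are assumptions.

```python
import numpy as np

def fft_magnitude(samples: np.ndarray) -> np.ndarray:
    """Transform raw time-domain sensor samples into a frequency-domain magnitude spectrum."""
    return np.abs(np.fft.rfft(samples))

def quantize_uint8(spectrum: np.ndarray) -> np.ndarray:
    """Quantize a float spectrum to 8-bit integers, as used for low-power integer inference."""
    scale = spectrum.max() or 1.0  # avoid division by zero on an all-zero frame
    return np.round(spectrum / scale * 255).astype(np.uint8)

# Simulated raw frame: 1024 time-domain samples
rng = np.random.default_rng(0)
frame = rng.standard_normal(1024)

features = quantize_uint8(fft_magnitude(frame))
print(features.dtype, features.shape)  # uint8 (513,)
```

The 8-bit feature vector is what a downstream accelerator would consume; keeping the whole chain in integer arithmetic is what allows real-time rates on embedded CPUs rather than GPUs.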
3D Point Cloud SDK
Teraki algorithms reduce 3D point cloud data while performing lightweight segmentation. The 3D point cloud SDKs support sensor fusion and SLAM.
- 10x-15x faster than open-source alternatives (e.g., PCL + NN).
- Lightweight: runs in real time on a limited edge CPU.
- Accuracy preserved with 96.5% IoU at 500 points per object.
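Point cloud reduction of the kind described above is commonly implemented as voxel-grid downsampling, where each occupied voxel is replaced by the centroid of its points. The sketch below is a generic NumPy illustration under that assumption, not Teraki's proprietary algorithm; the voxel size is an assumed parameter.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Reduce a 3D point cloud by keeping one centroid per occupied voxel."""
    # Assign each point an integer voxel index
    voxels = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel: 'inverse' maps each point to its voxel's row
    _, inverse = np.unique(voxels, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    # Sum the points in each voxel, then divide by the per-voxel counts
    sums = np.zeros((n_voxels, 3))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse, minlength=n_voxels)
    return sums / counts[:, None]

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 10, size=(100_000, 3))  # dense synthetic cloud
reduced = voxel_downsample(cloud, voxel_size=1.0)
print(cloud.shape[0], "->", reduced.shape[0])  # 100000 -> 1000
```

Here 100,000 points in a 10 m cube collapse to at most 1,000 centroids (one per 1 m voxel), which is the kind of reduction that lets segmentation and SLAM run in real time on a limited edge CPU.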
ARC-specific Support Details
Teraki edge SDKs are compatible with the Synopsys ARC® EV and ARC VPX DSP Processors.
Learn more about how Teraki and Synopsys work together.