
Helm.ai Introduces WorldGen-1, a First-of-Its-Kind Multi-Sensor Generative AI Foundation Model for Autonomous Driving

REDWOOD CITY, Calif.--(BUSINESS WIRE)--Helm.ai, a leading provider of AI software for high-end ADAS, Level 4 autonomous driving, and robotics, today announced the launch of a multi-sensor generative AI foundation model for simulating the entire autonomous vehicle stack. WorldGen-1 synthesizes highly realistic sensor and perception data across multiple modalities and perspectives simultaneously, extrapolates sensor data from one modality to another, and predicts the behavior of the ego-vehicle and other agents in the driving environment. These AI-based simulation capabilities streamline the development and validation of autonomous driving systems.

Leveraging innovations in generative DNN architectures and Deep Teaching, a highly efficient unsupervised training technology, WorldGen-1 is trained on thousands of hours of diverse driving data covering every layer of the autonomous driving stack, including vision, perception, lidar, and odometry.

WorldGen-1 simultaneously generates highly realistic sensor data for surround-view cameras, semantic segmentation at the perception layer, lidar front-view, lidar bird’s-eye-view, and the ego-vehicle path in physical coordinates. By generating sensor, perception, and path data consistently across the entire AV stack, WorldGen-1 accurately replicates potential real-world situations from the perspective of the self-driving vehicle. This comprehensive sensor simulation capability enables the generation of high-fidelity multi-sensor labeled data to resolve and validate a myriad of challenging corner cases.
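To make the multi-modal output concrete, the sketch below shows one hypothetical way a jointly generated frame could be represented in code. The class, field names, array shapes, and the generate_frame() stub are illustrative assumptions, not Helm.ai's actual interface; the stub only returns placeholder data with mutually consistent shapes.

```python
# Illustrative sketch only: a hypothetical container for one jointly
# generated multi-sensor frame. Shapes, field names, and the
# generate_frame() stub are assumptions, not Helm.ai's actual API.
from dataclasses import dataclass
import numpy as np

@dataclass
class GeneratedFrame:
    surround_cameras: np.ndarray       # (num_cams, H, W, 3) RGB images
    semantic_segmentation: np.ndarray  # (num_cams, H, W) per-pixel class IDs
    lidar_front_view: np.ndarray       # (H_lidar, W_lidar) range image
    lidar_bev: np.ndarray              # (H_bev, W_bev) bird's-eye-view grid
    ego_path: np.ndarray               # (T, 2) future (x, y) in meters

def generate_frame(num_cams: int = 6, seed: int = 0) -> GeneratedFrame:
    """Stand-in for a generative model call: returns random placeholder
    data with mutually consistent shapes across modalities."""
    rng = np.random.default_rng(seed)
    return GeneratedFrame(
        surround_cameras=rng.integers(0, 256, (num_cams, 256, 512, 3), dtype=np.uint8),
        semantic_segmentation=rng.integers(0, 20, (num_cams, 256, 512), dtype=np.uint8),
        lidar_front_view=rng.uniform(0.0, 80.0, (64, 1024)).astype(np.float32),
        lidar_bev=rng.uniform(0.0, 1.0, (200, 200)).astype(np.float32),
        ego_path=np.cumsum(rng.normal(0.5, 0.1, (50, 2)), axis=0).astype(np.float32),
    )

frame = generate_frame()
print(frame.surround_cameras.shape, frame.ego_path.shape)
```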

Furthermore, WorldGen-1 can extrapolate from real camera data to multiple other modalities, including semantic segmentation, lidar front-view, lidar bird’s-eye-view, and the path of the ego-vehicle. This capability allows existing camera-only datasets to be augmented into synthetic multi-sensor datasets, increasing their richness and reducing data collection costs.
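The sketch below illustrates what such an augmentation loop could look like over a camera-only dataset. The extrapolate_modalities() function, the target-modality names, and all shapes are hypothetical placeholders standing in for a learned camera-to-modality mapping, not Helm.ai's actual API.

```python
# Illustrative sketch only: augmenting a camera-only dataset into a
# multi-sensor dataset via modality extrapolation. The function and the
# target-modality names are hypothetical, not Helm.ai's actual API.
from typing import Dict, List
import numpy as np

TARGET_MODALITIES = ["semantic_segmentation", "lidar_front_view", "lidar_bev", "ego_path"]

def extrapolate_modalities(camera_frame: np.ndarray,
                           targets: List[str]) -> Dict[str, np.ndarray]:
    """Stand-in for a learned camera-to-modality mapping: returns
    placeholder arrays keyed by target modality name."""
    rng = np.random.default_rng(int(camera_frame.sum()) % (2**32))
    shapes = {
        "semantic_segmentation": (camera_frame.shape[0], camera_frame.shape[1]),
        "lidar_front_view": (64, 1024),
        "lidar_bev": (200, 200),
        "ego_path": (50, 2),
    }
    return {name: rng.random(shapes[name]).astype(np.float32) for name in targets}

# Augment a toy "camera-only" dataset of three frames into multi-sensor samples.
camera_only_dataset = [np.zeros((256, 512, 3), dtype=np.uint8) for _ in range(3)]
multi_sensor_dataset = [
    {"camera": frame, **extrapolate_modalities(frame, TARGET_MODALITIES)}
    for frame in camera_only_dataset
]
print(sorted(multi_sensor_dataset[0].keys()))
```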

Beyond sensor simulation and extrapolation, WorldGen-1 can predict, based on an observed input sequence, the behaviors of pedestrians, vehicles, and the ego-vehicle in relation to the surrounding environment, generating realistic temporal sequences up to minutes in length. This enables AI generation of a wide range of potential scenarios, including rare corner cases. WorldGen-1 can model multiple potential outcomes from the same observed input, demonstrating advanced multi-agent prediction and planning capabilities. Its understanding of the driving environment and its predictive capability make it a valuable tool for intent prediction and path planning, both as a means of development and validation and as the core technology for making real-time driving decisions.
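For illustration, the sketch below samples several possible futures for the ego-vehicle and surrounding agents from an observed trajectory history. The constant-velocity-plus-noise rollout is a toy stand-in for a learned generative prediction model; it is not Helm.ai's method, and the function name and shapes are assumptions.

```python
# Illustrative sketch only: sampling several plausible future rollouts from
# an observed trajectory history. The simple constant-velocity-plus-noise
# rollout stands in for a learned generative model; not Helm.ai's method.
import numpy as np

def sample_futures(observed: np.ndarray, horizon: int, num_samples: int,
                   dt: float = 0.1, seed: int = 0) -> np.ndarray:
    """observed: (num_agents, T_obs, 2) past (x, y) positions.
    Returns (num_samples, num_agents, horizon, 2) sampled future positions."""
    rng = np.random.default_rng(seed)
    last_pos = observed[:, -1, :]                              # (A, 2)
    velocity = (observed[:, -1, :] - observed[:, -2, :]) / dt  # (A, 2)
    futures = np.empty((num_samples, observed.shape[0], horizon, 2))
    for s in range(num_samples):
        pos, vel = last_pos.copy(), velocity.copy()
        for t in range(horizon):
            vel = vel + rng.normal(0.0, 0.2, vel.shape) * dt  # random acceleration noise
            pos = pos + vel * dt
            futures[s, :, t, :] = pos
    return futures

# Two agents observed for 20 steps; sample 5 possible 3-second futures.
history = np.cumsum(np.full((2, 20, 2), 0.5), axis=1)
rollouts = sample_futures(history, horizon=30, num_samples=5)
print(rollouts.shape)  # (5, 2, 30, 2)
```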

“Combining innovation in generative AI architectures with our Deep Teaching technology yields a highly scalable and capital-efficient form of generative AI. With WorldGen-1, we’re taking another step towards closing the sim-to-real gap for autonomous driving, which is the key to streamlining and unifying the development and validation of high-end ADAS and L4 systems. We’re providing automakers with a tool to accelerate development, improve safety, and dramatically reduce the gap between simulation and real-world testing,” said Helm.ai’s CEO and Co-Founder, Vladislav Voroninski.

“Generating data from WorldGen-1 is like creating a vast collection of diverse digital siblings of real-world driving environments at the level of richness of the full AV sensor stack, replete with smart agents that think and predict like humans, enabling us to tackle the most complex challenges in autonomous driving,” added Voroninski.

About Helm.ai

Helm.ai is developing the next generation of AI software for high-end ADAS, Level 4 autonomous driving, and robotic automation. Founded in 2016 and headquartered in Redwood City, CA, the company has re-envisioned the approach to AI software development, aiming to make truly scalable autonomous driving a reality. For more information on Helm.ai, including its products, SDK, and open career opportunities, visit https://www.helm.ai/ or find Helm.ai on LinkedIn.

Contacts

Media Contact
press@helm.ai

Helm.ai

