Helm.ai Breaks the "Data Wall": Achieves Vision-Only Zero-Shot Autonomous Steering with Just 1,000 Hours of Driving Data

New "Factored Embodied AI" framework enables production grade autonomous steering in complex urban environments with orders of magnitude less data than industry standards.

REDWOOD CITY, Calif.--(BUSINESS WIRE)--Helm.ai, a leader in AI software for ADAS, L4 autonomous driving, and robotics automation, today unveiled Factored Embodied AI, a new architectural framework designed to break the “Data Wall” currently stalling the autonomous vehicle industry.

While the industry races to build massive black-box “end-to-end” models that require petabytes of data to learn driving physics from scratch, Helm.ai has demonstrated a scalable alternative. Today, the company released a benchmark demonstration of its vision-only AI Driver steering through the complex streets of Torrance, CA, with zero-shot success—handling lane keeping, lane changes, and turns at urban intersections without ever having seen those specific streets before. See the AI Driver's zero-shot capabilities in action during a continuous 20-minute drive without steering disengagement here: www.helm.ai/zeroshot-autonomous-steering.

Critically, this autonomous steering capability was achieved by training the AI using simulation and only 1,000 hours of real-world driving data—a fraction of the data required by monolithic end-to-end approaches.

"The autonomous driving industry is hitting a point of diminishing returns. As models get better, the data required to improve them becomes exponentially rarer and more expensive to collect," said Vladislav Voroninski, CEO and Founder of Helm.ai. "We are breaking this 'Data Wall' by factoring the driving task. Instead of trying to learn physics from raw, noisy pixels, our Geometric Reasoning Engine extracts the clean 3D structure of the world first. This allows us to train the vehicle's decision-making logic in simulation with unprecedented efficiency, mimicking how a human teenager learns to drive in weeks rather than years."

The new architecture breaks the industry's efficiency barrier through several key technological advancements:

  • Bridging the Simulator Gap: Unlike traditional models that struggle to transfer simulation training to the real world because of visual differences, Helm.ai’s architecture trains in “Semantic Space,” a simplified view of the world that focuses on geometry and logic rather than graphics. By simulating the structure of the road rather than the pixels, the company can train on effectively unlimited simulated data that works immediately in the real world.
  • The 1,000-Hour Benchmark: Leveraging this geometric simulation, Helm.ai’s planner achieved robust, zero-shot urban autonomous steering after fine-tuning on only 1,000 hours of real-world driving data, offering a capital-efficient path to fully autonomous driving. (A toy sketch of this two-stage training schedule follows this list.)
  • Behavioral Modeling: To tackle acceleration, braking, and complex interactions, Helm.ai is leveraging its World Model capabilities to predict the intent of pedestrians and other vehicles, enabling safe navigation through dense traffic.
  • Universal Perception: To validate the robustness of its perception layer, Helm.ai deployed its automotive software in an open-pit mine. With extreme data efficiency, the system correctly identified drivable surfaces and obstacles, demonstrating that the architecture can adapt to robotics environments well beyond public roads.
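The 1,000-hour claim above amounts to a two-stage training schedule: pretrain the planner on effectively unlimited simulated semantic scenes, then fine-tune on a comparatively tiny real-world corpus. The toy below illustrates only the mechanics of that schedule; the one-feature linear model, dataset sizes, and learning rates are invented for illustration and bear no relation to Helm.ai's actual system.

```python
import random

# Toy illustration of a two-stage schedule: pretrain on abundant simulated
# (scene -> steering) pairs, then fine-tune on a small real-world set. The
# "scene" is a single lateral-offset feature and the model is linear,
# purely to keep the mechanics visible.

def make_dataset(n, noise):
    # Feature: lateral offset from lane center (m);
    # label: corrective steering (rad), with Gaussian noise.
    data = []
    for _ in range(n):
        offset = random.uniform(-1.5, 1.5)
        data.append((offset, -0.2 * offset + random.gauss(0.0, noise)))
    return data

def train(weight, data, lr, epochs):
    # Plain SGD on squared error for the one-parameter model y = w * x.
    for _ in range(epochs):
        for x, y in data:
            weight -= lr * 2.0 * (weight * x - y) * x
    return weight

random.seed(0)
sim_data = make_dataset(100_000, noise=0.01)  # "unlimited" simulated scenes
real_data = make_dataset(500, noise=0.05)     # small real-world corpus

w = train(0.0, sim_data, lr=1e-3, epochs=1)   # stage 1: simulation pretraining
w = train(w, real_data, lr=1e-4, epochs=5)    # stage 2: real-world fine-tune
print(f"learned steering gain: {w:.3f} (target -0.2)")
```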

This architecture offers a critical strategic advantage for automakers. While competitors rely on massive existing fleets to collect training data, Helm.ai’s approach empowers automakers to deploy capabilities ranging from ADAS through L4 using their existing development fleets, bypassing a prohibitive data barrier to entry.

"We are moving from the era of brute force data collection to the era of Data Efficiency," added Voroninski. "Whether on a highway in LA or a haul road in a mine, the laws of geometry remain constant. Our architecture solves this universal geometry once, allowing us to deploy autonomy everywhere."

About Helm.ai

Helm.ai develops AI software for L2/L3 ADAS, L4 autonomous driving, and robotics automation. Founded in 2016, the company delivers full-stack driving software for on-car deployment and simulation tools powered by Deep Teaching™ and generative AI. Helm.ai partners with global automakers on production-bound programs.

Contacts

Media Contact
press@helm.ai

Helm.ai

