Helm.ai Introduces VidGen-2: Generative AI for Higher-Resolution, More Realistic Multi-Camera Video for Autonomous Driving

REDWOOD CITY, Calif.--(BUSINESS WIRE)--Helm.ai, a leading provider of advanced AI software for high-end ADAS, autonomous driving, and robotics automation, today announced the launch of VidGen-2, its next-generation generative AI model for producing highly realistic driving video sequences. VidGen-2 offers 2X the resolution of its predecessor, VidGen-1, improved realism at 30 frames per second, and multi-camera support at 2X the per-camera resolution, providing automakers with a scalable and cost-effective solution for autonomous driving development and validation.

Trained on thousands of hours of diverse driving footage using NVIDIA H100 Tensor Core GPUs, VidGen-2 leverages Helm.ai’s innovative generative deep neural network (DNN) architectures and Deep Teaching™, an efficient unsupervised training method. It generates highly realistic video sequences at 696 x 696 resolution, double that of VidGen-1, at frame rates from 5 to 30 fps. The model also delivers enhanced video quality at 640 x 384 resolution and 30 fps, producing smoother, more detailed simulations. VidGen-2 can generate videos without an input prompt, or from a single image or input video used as the prompt.

VidGen-2 also supports multi-camera views, generating footage from three cameras at 640 x 384 resolution each. The model ensures self-consistency across all camera perspectives, providing accurate simulation for various sensor configurations.

VidGen-2 generates driving-scene videos across multiple geographies, camera types, and vehicle perspectives. It not only produces highly realistic appearances and temporally consistent object motion, but also learns and reproduces human-like driving behaviors, simulating the motion of the ego vehicle and surrounding agents in accordance with traffic rules. The model creates a wide range of scenarios, including highway and urban driving, multiple vehicle types, pedestrians, cyclists, intersections, turns, weather conditions, and lighting variations. In multi-camera mode, scenes are generated consistently across all perspectives.

VidGen-2 gives automakers a significant scalability advantage over traditional non-AI simulators by enabling rapid asset generation and imbuing agents in simulations with sophisticated, real-life behaviors. Helm.ai’s approach not only reduces development time and cost but also closes the “sim-to-real” gap, offering a highly realistic and efficient solution that broadens the scope of simulation-based training and validation.

“The latest enhancements in VidGen-2 are designed to meet the complex needs of automakers developing autonomous driving technologies,” said Vladislav Voroninski, Helm.ai’s CEO and founder. “These advancements enable us to generate highly realistic driving scenarios while ensuring compatibility with a wide variety of automotive sensor stacks. The improvements made in VidGen-2 will also support advancements in our other foundation models, accelerating future developments across autonomous driving and robotics automation.”

About Helm.ai

Helm.ai develops next-generation AI software for ADAS, autonomous driving, and robotics automation. Founded in 2016 and headquartered in Redwood City, CA, the company reimagines AI software development to make scalable autonomous driving a reality. Helm.ai offers full-stack real-time AI solutions, including deep neural networks for highway and urban driving, end-to-end autonomous systems, and development and validation tools powered by Deep Teaching™ and generative AI. The company collaborates with global automakers on production-bound projects. For more information on Helm.ai, including products, SDK, and career opportunities, visit https://helm.ai or follow Helm.ai on LinkedIn.
