Helm.ai unveils GenSim-2, a GenAI model designed to enhance autonomous driving datasets by creating and modifying realistic video scenarios.
Helm.ai has introduced GenSim-2, its latest GenAI model for enriching autonomous vehicle data. The updated model adds advanced video editing capabilities, including dynamic weather and illumination adjustments, object modifications, and multi-camera support. These enhancements give automakers scalable, cost-effective tools for addressing complex corner cases in autonomous driving.
GenSim-2 builds on its predecessor, GenSim-1, extending its capabilities with Helm.ai's Deep Teaching methodology and deep neural networks to generate highly realistic, customizable video data. By tailoring data to specific needs, automakers can develop robust autonomous systems more efficiently, bridging the gap between simulation and real-world conditions.
The model lets teams apply realistic weather conditions such as rain, fog, and snow to video data, along with varied lighting scenarios including day and night. It can also alter object appearances, such as road conditions and vehicle types, while maintaining consistency across multiple camera views. Helm.ai CEO Vladislav Voroninski said this level of video manipulation marks a significant advance in GenAI simulation technology.
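To make the idea of scenario-specific edits concrete, here is a minimal illustrative sketch of how such a request might be structured in code. Helm.ai has not published a public API, so every name here (`ScenarioEdit`, `apply_edit`, the parameter values) is hypothetical and invented for illustration; this is not GenSim-2's actual interface.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical illustration only: GenSim-2's real interface is not public.
# This sketches how the edits described above (weather, lighting, object
# appearance, multi-camera consistency) might be expressed as one request.

WEATHER = {"clear", "rain", "fog", "snow"}
LIGHTING = {"day", "dusk", "night"}


@dataclass
class ScenarioEdit:
    weather: str = "clear"
    lighting: str = "day"
    object_swaps: Dict[str, str] = field(default_factory=dict)  # e.g. {"sedan": "truck"}
    cameras: List[str] = field(default_factory=lambda: ["front"])

    def validate(self) -> None:
        if self.weather not in WEATHER:
            raise ValueError(f"unsupported weather: {self.weather}")
        if self.lighting not in LIGHTING:
            raise ValueError(f"unsupported lighting: {self.lighting}")


def apply_edit(source_clip: str, edit: ScenarioEdit) -> List[str]:
    """Stand-in for rendering one edited clip per camera view.

    A real system would run a generative model here; this stub just
    returns output names, applying the same edit to every camera so
    the views stay mutually consistent.
    """
    edit.validate()
    return [
        f"{source_clip}_{cam}_{edit.weather}_{edit.lighting}.mp4"
        for cam in edit.cameras
    ]


if __name__ == "__main__":
    # Turn a clear daytime clip into a foggy night scene on three cameras.
    edit = ScenarioEdit(
        weather="fog",
        lighting="night",
        object_swaps={"sedan": "delivery_truck"},
        cameras=["front", "left", "right"],
    )
    for name in apply_edit("highway_merge", edit):
        print(name)
```

The point of the sketch is the workflow, not the interface: a single edit specification reused across all camera views is one simple way to express the multi-camera consistency the announcement describes.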
GenSim-2 offers an alternative to traditional, resource-heavy data collection by generating scenario-specific video data on demand. This supports autonomous driving development and validation across diverse geographies and resolves rare scenarios efficiently. Alongside GenSim-2, Helm.ai has released other AI models, including VidGen-2, which improves realism in predictive video sequences, and WorldGen-1, which simulates comprehensive driving environments. Together, these tools give automakers high-fidelity data generation that accelerates development timelines and reduces costs.