
MyGo: Consistent and Controllable Multi-View Driving Video Generation
with Camera Control

Yining Yao1, Xi Guo2, Chenjing Ding2, Wei Wu1,2 (corresponding author)
Abstract

High-quality driving video generation is crucial for providing training data for autonomous driving models. However, current generative models rarely focus on enhancing camera motion control in multi-view tasks, which is essential for driving video generation. We therefore propose MyGo, an end-to-end framework for video generation that introduces the motion of onboard cameras as a condition to improve camera controllability and multi-view consistency. MyGo employs additional plug-in modules to inject camera parameters into a pre-trained video diffusion model, preserving as much of the pre-trained model's knowledge as possible. Furthermore, we apply epipolar constraints and neighboring-view information during the generation of each view to enhance spatial-temporal consistency. Experimental results show that MyGo achieves state-of-the-art results on both general camera-controlled video generation and multi-view driving video generation tasks, laying the foundation for more accurate environment simulation in autonomous driving. Project page: https://metadrivescape.github.io/papers_project/MyGo/page.html
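
To make the camera-parameter injection concrete, below is a minimal sketch of a plug-in module that conditions a frozen diffusion block on per-frame camera parameters through a zero-initialized residual (a ControlNet-style design choice). The class name CameraInjection, the 21-value flattened parameterization, and all dimensions are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

class CameraInjection(nn.Module):
    # Hypothetical plug-in: encodes per-frame camera parameters
    # (flattened 3x4 extrinsics + 3x3 intrinsics = 21 values) and adds
    # them to a frozen diffusion block's hidden states through a
    # zero-initialized projection, so training starts exactly at the
    # pre-trained model's behavior.
    def __init__(self, hidden_dim: int, cam_dim: int = 21):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(cam_dim, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.proj = nn.Linear(hidden_dim, hidden_dim)
        nn.init.zeros_(self.proj.weight)  # zero init: no effect at step 0
        nn.init.zeros_(self.proj.bias)

    def forward(self, hidden: torch.Tensor, cam: torch.Tensor) -> torch.Tensor:
        # hidden: (B, T, N, D) spatial tokens per frame
        # cam:    (B, T, 21)   per-frame camera parameters
        emb = self.proj(self.encoder(cam))  # (B, T, D)
        return hidden + emb.unsqueeze(2)    # broadcast over the N tokens

# Usage with illustrative shapes:
block = CameraInjection(hidden_dim=320)
h = torch.randn(2, 8, 64, 320)  # batch=2, frames=8, 64 tokens per frame
cam = torch.randn(2, 8, 21)
out = block(h, cam)             # same shape as h

Because the residual projection starts at zero, the augmented model initially reproduces the pre-trained model's outputs, and camera control is learned gradually without disrupting the pre-trained knowledge.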

Figure 1: Examples of generated multi-view video frames. MyGo generates multi-view videos precisely controlled by onboard camera parameters and road structural information while maintaining excellent temporal consistency and long-term cross-view spatial consistency.