AutoWeather4D: Autonomous Driving Video Weather Conversion via G-Buffer Dual-Pass Editing

1 Hong Kong University of Science and Technology (HKUST)
2 Xiamen University
3 Meituan-M17, Hong Kong

*Indicates Equal Contribution
Indicates Corresponding Author

AutoWeather4D: Multi-Dimensional Control for Weather and Time-of-Day Simulation in Autonomous Driving Videos.

Abstract

Weather variation is critical for robust testing and validation of autonomous driving systems. Recent progress in generative editing has enabled weather manipulation in driving videos, but existing methods often suffer from two key limitations. First, they exhibit physical inconsistency, such as unnatural light propagation and implausible dynamics of water or particles. Second, they offer limited control over fine-grained weather properties. To address these challenges, we propose AutoWeather4D, a novel framework built on a G-Buffer dual-pass editing mechanism: a Geometry Pass models interactions between weather elements and scene geometry, while a Light Pass adjusts illumination for day-night and weather-induced lighting changes. This dual-pass mechanism enables fine-grained control and generates physically plausible transitions. Experiments show that AutoWeather4D outperforms existing baselines in physical realism, avoiding "impossible videos", and in controllability, generating diverse weather variations that closely align with real-world physics. This work advances autonomous driving simulation by providing a reliable tool for testing system robustness under complex weather conditions.


Pipeline and Methodology

AutoWeather4D Pipeline

Overview of our framework. Our framework enables physically-based video editing for multi-weather and time-of-day synthesis from an input video. We first extract G-buffers from the input video: depth is derived from point clouds via feed-forward 4D reconstruction, while the other G-buffers (Normal, Metallic, Albedo, Roughness) are obtained through an Inverse Renderer. The core of our method is the G-Buffer Dual-Pass Editing mechanism: (1) Geometry Pass Editing modifies albedo, normal, and roughness to add weather-related elements (e.g., snow, rain, wet ground) following physical laws; (2) Light Pass Editing adjusts local illuminants (via detected local light sources) and environmental lighting (e.g., dawn, noon, blue hour) to align with time-of-day and weather-induced light changes. Finally, the outputs are fed into VideoRefiner (VidRefiner) to enhance the video's diversity and visual appeal, ensuring photorealistic and physically consistent results.
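The dual-pass flow above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the names (`GBuffer`, `geometry_pass`, `light_pass`), the snow material constants, and the reduction of each per-pixel buffer to a single scalar are all assumptions made for clarity.

```python
# Hypothetical sketch of G-Buffer dual-pass editing.
# Buffers are reduced to per-frame scalars; real buffers are per-pixel maps.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class GBuffer:
    depth: float      # from feed-forward 4D reconstruction
    normal: float     # from the inverse renderer
    albedo: float
    roughness: float
    metallic: float

def geometry_pass(g: GBuffer, snow_coverage: float) -> GBuffer:
    """Blend weather-related material changes into albedo and roughness.

    snow_coverage in [0, 1]: 0 = original scene, 1 = fully snow-covered.
    """
    snow_albedo, snow_roughness = 0.9, 0.8   # assumed snow material constants
    return replace(
        g,
        albedo=(1 - snow_coverage) * g.albedo + snow_coverage * snow_albedo,
        roughness=(1 - snow_coverage) * g.roughness + snow_coverage * snow_roughness,
    )

def light_pass(shaded: float, env_intensity: float, local_lights: float) -> float:
    """Rescale environment lighting and add detected local light sources."""
    return shaded * env_intensity + local_lights

# Usage: push a clear-day frame toward a half-covered snowy, dim-lit scene.
g = GBuffer(depth=12.0, normal=1.0, albedo=0.4, roughness=0.5, metallic=0.0)
g_snow = geometry_pass(g, snow_coverage=0.5)
frame = light_pass(shaded=g_snow.albedo, env_intensity=0.3, local_lights=0.05)
```

The key design point the sketch mirrors is the separation of concerns: the geometry pass only touches material buffers, while the light pass only rescales illumination, so weather and time-of-day can be controlled independently.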

Progressive Snow Effect Showcase

Explore how our method progressively adds snow effects, from the original video to full snow accumulation.

Progressive Rain Effect Showcase

Explore how our method progressively adds rain effects, from the original video to full rain with puddles.

Comparison with State-of-the-Art

Method | Env Light/Shadow Control | Extra Light Source | Weather Change | Feed-forward | 4D Dynamic Scene | Tuning-free | Open Source
Cosmos-Transfer2.5
WAN-FUN 2.2
Ditto
WeatherWeaver
WeatherDiffusion
SceneCrafter
RainyGS
WeatherEdit
ClimateNeRF
DiffusionRenderer
Ours

Comparison table notes: "Env Light/Shadow Control" denotes the ability to adjust the environment light direction and correct shadow effects; "Extra Light Source" refers to controlling newly added light sources; "Feed-forward" means no per-scene optimization; "Tuning-free" means no fine-tuning of model weights.

Update: This comparison table has been revised since the website was created. WeatherEdit has been open-sourced since the initial publication.

Acknowledgments

We would like to thank the authors of DiffusionRenderer, WAN-FUN, and π³ (PI3) for making their code publicly available. Our implementation builds upon their valuable contributions.