Recent advances in diffusion-based video generation have achieved remarkable visual realism but still struggle to obey basic physical laws such as gravity, inertia, and collision. Generated objects often move inconsistently across frames, exhibit implausible dynamics, or violate physical constraints, limiting the realism and reliability of AI-generated videos. We address this gap by introducing Physical Simulator In-the-Loop Video Generation (PSIVG), a novel framework that integrates a physical simulator into the video diffusion process. Starting from a template video generated by a pre-trained diffusion model, PSIVG reconstructs the 4D scene and foreground object meshes, initializes them within a physical simulator, and generates physically consistent trajectories. These simulated trajectories are then used to guide the video generator toward motion that is both physically consistent and spatio-temporally coherent. To further improve texture consistency during object movement, we propose a Test-Time Texture Consistency Optimization (TTCO) technique that adapts text and feature embeddings based on pixel correspondences from the simulator. Comprehensive experiments demonstrate that PSIVG produces videos that better adhere to real-world physics while preserving visual quality and diversity.
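To make the "simulator in the loop" idea concrete, the toy Python sketch below shows the kind of physically consistent trajectory the simulator contributes: a point mass integrated under gravity with inertia and a ground-plane collision, sampled once per video frame. This is an illustrative sketch only, not our actual pipeline; all names and parameters here (simulate_trajectory, FPS, RESTITUTION, and so on) are assumptions made for this example.

import numpy as np

# Illustrative constants; a real simulator and its settings would differ.
FPS = 24                                # frames per second of the video
DT = 1.0 / FPS                          # one simulation step per frame
GRAVITY = np.array([0.0, -9.81, 0.0])   # m/s^2, y-up world coordinates
RESTITUTION = 0.6                       # velocity fraction kept per bounce

def simulate_trajectory(p0, v0, num_frames):
    """Semi-implicit Euler integration of a point mass under gravity,
    with a simple collision response against the ground plane y = 0.
    Returns one center-of-mass position per video frame."""
    p = np.asarray(p0, dtype=float)
    v = np.asarray(v0, dtype=float)
    positions = []
    for _ in range(num_frames):
        v = v + GRAVITY * DT            # inertia plus gravity (velocity first)
        p = p + v * DT                  # then advance the position
        if p[1] < 0.0:                  # penetrated the ground plane
            p[1] = 0.0                  # project back onto the plane
            v[1] = -RESTITUTION * v[1]  # bounce with energy loss
        positions.append(p.copy())
    return np.stack(positions)          # shape: (num_frames, 3)

# Example: an object tossed from 1.5 m height with some lateral velocity.
traj = simulate_trajectory(p0=[0.0, 1.5, 0.0], v0=[0.8, 1.0, 0.0], num_frames=49)
print(traj.shape)                       # (49, 3)

In the full framework, such trajectories drive the reconstructed object meshes, and it is the simulator's rendered outputs, rather than raw positions, that guide the video generator.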
Overview of our Physical Simulator In-the-Loop Video Generation (PSIVG) framework. From an input prompt, a template video is first generated and then processed by our perception pipeline. The outputs of the perception pipeline are further processed before being passed into the physical simulator. The rendered outputs from the simulator then guide the video generation, which can be further improved with TTCO for better texture consistency.
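TTCO can be pictured as a small test-time optimization loop. The PyTorch sketch below shows one plausible form, under heavy assumptions: random feature maps stand in for the video model's internal features, a single learnable per-channel offset stands in for the adapted text/feature embedding, and corr plays the role of the simulator-provided pixel correspondences. None of these names come from our implementation.

import torch

torch.manual_seed(0)

# Stand-ins for quantities the real pipeline would provide (all hypothetical):
# feats_ref - per-pixel features of a reference frame, (C, H, W)
# feats_gen - features of the frame being regenerated, (C, H, W)
# corr      - (N, 4) integer correspondences (y_ref, x_ref, y_gen, x_gen)
#             from the simulator: the same surface point seen in both frames.
C, H, W, N = 16, 32, 32, 200
feats_ref = torch.randn(C, H, W)
feats_gen = torch.randn(C, H, W)
corr = torch.randint(0, H, (N, 4))      # toy correspondences (H == W here)

# A learnable residual on the conditioning embedding; a per-channel offset
# is the simplest stand-in for how the embedding modulates features.
embed = torch.zeros(C, requires_grad=True)
opt = torch.optim.Adam([embed], lr=1e-2)

for step in range(200):
    modulated = feats_gen + embed[:, None, None]   # adapt the generated frame
    f_ref = feats_ref[:, corr[:, 0], corr[:, 1]]   # (C, N) reference features
    f_gen = modulated[:, corr[:, 2], corr[:, 3]]   # (C, N) generated features
    loss = (f_gen - f_ref).pow(2).mean()           # texture-consistency loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(f"step {step:3d}  consistency loss {loss.item():.4f}")

In practice, the adapted embedding would then condition the diffusion model's subsequent denoising passes, encouraging corresponding pixels to keep the same appearance as the object moves.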
We compare PSIVG against representative baselines across diverse fixed-camera and moving-camera scenarios. As shown, our method generates videos that exhibit physically consistent and temporally coherent motion. In contrast, existing text-to-video models (e.g., CogVideoX [1], HunyuanVideo [2], PISA-Seg [3]) tend to produce visually appealing but physically implausible motion, such as objects floating in midair, fading away, or jumping around. Similarly, controllable video generation approaches (e.g., MotionClone [4], SG-I2V [5]) often struggle to follow the intended trajectory, especially its rotational component, and frequently fail to preserve the consistency of the object and background.
Baseline References.
Thanks for your interest in our work! You may also be interested in related and concurrent works in this area. If you find PSIVG useful, please consider citing:
@article{foo2026physical,
  title={Physical Simulator In-the-Loop Video Generation},
  author={Foo, Lin Geng and Huang, Mark He and Lattas, Alexandros and Moschoglou, Stylianos and Beeler, Thabo and Theobalt, Christian},
  journal={arXiv preprint arXiv:2603.06408},
  year={2026}
}