Spatial AI Research

Our work combines research in Gaussian splatting, video diffusion models, and spatial-temporal AI, building on state-of-the-art techniques to enable spatially grounded content generation.

Research Focus Areas

Interdisciplinary teams validate new methods with partner communities and XR labs before moving them into production.

4D Gaussian Splatting

Extending 3D Gaussian representations to model dynamic scenes with temporal consistency. We use deformation fields and temporal encoding to enable photorealistic 4D reconstruction from video inputs.
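
As a rough sketch of this idea, the PyTorch module below implements a simple deformation field: each Gaussian's canonical center and a timestamp are sinusoidally encoded and passed through a small MLP that predicts a per-Gaussian position offset. The module, its layer sizes, and the encoding choices are illustrative assumptions, not a description of our actual architecture.

    import torch
    import torch.nn as nn

    class DeformationField(nn.Module):
        # Illustrative deformation field (assumed architecture): maps a
        # Gaussian's canonical center plus a timestamp to a position offset,
        # letting a static set of 3D Gaussians track scene motion over time.
        def __init__(self, num_freqs: int = 6, hidden: int = 128):
            super().__init__()
            self.num_freqs = num_freqs
            in_dim = 4 * 2 * num_freqs  # sin/cos encoding of (x, y, z, t)
            self.mlp = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),  # per-Gaussian xyz offset
            )

        def encode(self, v: torch.Tensor) -> torch.Tensor:
            # Sinusoidal encoding at geometrically spaced frequencies.
            freqs = 2.0 ** torch.arange(self.num_freqs, device=v.device)
            angles = v[..., None] * freqs                  # (N, 4, num_freqs)
            return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)

        def forward(self, xyz: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
            # xyz: (N, 3) canonical centers; t: (N, 1) normalized timestamps.
            feats = self.encode(torch.cat([xyz, t], dim=-1))
            return xyz + self.mlp(feats)                   # deformed centers at t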

Video Diffusion Integration

Merging Gaussian splatting with latent video diffusion models (LVDMs) to enable spatially aware video generation. Our work focuses on anchoring generative models in explicit 3D space with controllable camera trajectories.
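
One minimal way to expose camera control to a video diffusion backbone, sketched below under assumed interfaces, is to embed each frame's camera pose into a conditioning token the denoiser attends to alongside its usual timestep embedding. The class and its wiring are hypothetical; real systems, including our own, differ in detail.

    import torch
    import torch.nn as nn

    class CameraConditioner(nn.Module):
        # Hypothetical camera conditioning: flattens each frame's 4x4
        # camera-to-world pose into an embedding token a video diffusion
        # denoiser can ingest via cross-attention. Names are assumptions.
        def __init__(self, embed_dim: int = 256):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Linear(16, embed_dim), nn.SiLU(),
                nn.Linear(embed_dim, embed_dim),
            )

        def forward(self, poses: torch.Tensor) -> torch.Tensor:
            # poses: (B, T, 4, 4), one camera pose per generated frame.
            b, t = poses.shape[:2]
            return self.proj(poses.reshape(b, t, 16))  # (B, T, embed_dim) tokens

A trajectory of T poses then yields T conditioning tokens, so editing the trajectory directly edits the generated camera path.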

Spatial-Temporal Grounding

Developing techniques to ground AI-generated content in coherent 3D scenes across time. We focus on multi-view consistency, temporal coherence, and explicit spatial control over generative processes.
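
A concrete way to encourage multi-view consistency, sketched below with assumed tensor shapes, is a reprojection loss: project the same 3D points into two views and penalize differences between the sampled colors. This is a standard constraint written from scratch for illustration, not our exact objective.

    import torch
    import torch.nn.functional as F

    def project(points, K, w2c):
        # Project world-space points (N, 3) to pixels (N, 2) using
        # intrinsics K (3, 3) and a world-to-camera transform w2c (4, 4).
        homo = torch.cat([points, torch.ones_like(points[:, :1])], dim=-1)
        cam = (w2c @ homo.T).T[:, :3]                 # camera-space coordinates
        pix = (K @ cam.T).T
        return pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)

    def multiview_consistency_loss(points, K, w2c_a, w2c_b, img_a, img_b):
        # The same 3D point should look alike from two viewpoints. Images
        # are (1, C, H, W); points are assumed visible in both views.
        h, w = img_a.shape[-2:]

        def sample(img, pix):
            grid = torch.empty_like(pix)
            grid[:, 0] = pix[:, 0] / (w - 1) * 2 - 1  # x to [-1, 1]
            grid[:, 1] = pix[:, 1] / (h - 1) * 2 - 1  # y to [-1, 1]
            return F.grid_sample(img, grid.view(1, -1, 1, 2), align_corners=True)

        colors_a = sample(img_a, project(points, K, w2c_a))
        colors_b = sample(img_b, project(points, K, w2c_b))
        return (colors_a - colors_b).abs().mean()     # L1 reprojection penalty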

Research Areas

01

4D Gaussian Splatting for Dynamic Scenes

Techniques for extending 3D Gaussian representations to model temporal dynamics. Includes deformation fields, temporal encoding, and real-time playback of dynamic 4D scenes.

02

Video Diffusion Meets Gaussian Splatting

Integrating Gaussian splatting with video diffusion models to enable spatially aware video generation. Focus on camera control, spatial-temporal consistency, and photorealistic quality.

03

Real-time Rendering of Gaussian Splats

Optimization techniques for interactive viewing of 3D Gaussian scenes at 100+ FPS. Includes GPU acceleration, level-of-detail rendering, and efficient splatting algorithms.
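
As a small illustration of one such optimization, the sketch below shows level-of-detail culling: estimate each Gaussian's projected screen-space radius with a first-order perspective approximation and skip splats smaller than a pixel threshold. The function and its threshold are assumptions for exposition, not a real renderer's API.

    import torch

    def lod_cull(means, scales, w2c, focal_px, min_radius_px=0.5):
        # Level-of-detail culling sketch: drop Gaussians whose projected
        # footprint is below a pixel threshold so distant geometry costs
        # less to rasterize. means: (N, 3), scales: (N, 3), w2c: (4, 4).
        homo = torch.cat([means, torch.ones_like(means[:, :1])], dim=-1)
        cam = (w2c @ homo.T).T[:, :3]                  # camera-space centers
        depth = cam[:, 2]
        # First-order perspective estimate: pixel radius ~ extent * f / z.
        radius_px = scales.max(dim=-1).values * focal_px / depth.clamp(min=1e-6)
        return (depth > 0.0) & (radius_px > min_radius_px)  # boolean keep mask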