Splat the Net: Radiance Fields with Splattable Neural Primitives

Xilong Zhou* Bao-Huy Nguyen* Loïc Magne Vladislav Golyanik
Thomas Leimkühler Christian Theobalt
*Co-first authors
Max Planck Institute for Informatics

(a) Overview of volumetric splattable neural primitives. Each primitive is spatially bounded by an ellipsoid, and its density is parameterized as a shallow neural network. (b) A real scene rendered using Gaussian primitives (left) and neural primitives (right). Our method achieves comparable PSNR to the Gaussian representation but with fewer primitives, highlighting the expressivity of neural primitives.

Abstract

Radiance fields have emerged as a predominant representation for modeling 3D scene appearance. Neural formulations such as Neural Radiance Fields provide high expressivity but require costly ray marching for rendering, whereas primitive-based methods such as 3D Gaussian Splatting offer real-time efficiency through splatting, yet at the expense of representational power. Inspired by advances in both directions, we introduce splattable neural primitives, a new volumetric representation that reconciles the expressivity of neural models with the efficiency of primitive-based splatting. Each primitive encodes a bounded neural density field parameterized by a shallow neural network. Our formulation admits an exact analytical solution for line integrals, enabling efficient computation of perspectively accurate splatting kernels. As a result, our representation supports integration along view rays without the need for costly ray marching. The primitives flexibly adapt to scene geometry and, being larger than prior analytic primitives, reduce the number required per scene. On novel-view synthesis benchmarks, our approach matches the quality and speed of 3D Gaussian Splatting while using 10× fewer primitives and 6× fewer parameters. These advantages arise directly from the representation itself, without reliance on complex control or adaptation frameworks.

Positioning of Method



Positioning of our work relative to hallmark radiance field representations. a) Overview of representations organized along two central design dimensions: Atomicity (horizontal axis), spanning from monolithic (left) to distributed (right) representations; Neurality (vertical axis), ranging from non-neural (bottom) to fully neural (top) approaches. Dot color indicates the supported rendering algorithm. b) Illustration of the rendering algorithms associated with each representation. Our method is the only neural, primitive-based model that supports efficient splatting for rendering—thereby eliminating the need for costly ray marching—while retaining the flexibility of a neural design.

Methodology



In our splattable neural representation, the density field of each primitive is represented as a one-layer multi-layer perceptron (MLP) bounded by an ellipsoid. a) Geometry of our representation for a single primitive: analytic splatting kernels are computed by closed-form integration of the neural density field (green shape) along view rays (blue line). b) Architecture of our neural density field, where the density σ is a function of the 3D spatial position x.
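To give intuition for why such closed-form integration is possible, here is a minimal toy sketch (not the paper's exact formulation): a density of the form σ(x) = exp(w·x + b), i.e. a single linear layer followed by an exponential, can be integrated exactly along a ray x(t) = o + t·d over an interval [t₀, t₁], since ∫ exp(w·(o + t·d) + b) dt = exp(w·o + b)·(e^{a t₁} − e^{a t₀})/a with a = w·d. The function names and parameter choices below are illustrative assumptions.

```python
import numpy as np

def analytic_line_integral(o, d, w, b, t0, t1):
    """Closed-form integral of sigma(x) = exp(w @ x + b) along the ray o + t*d
    over [t0, t1]. In the full method, [t0, t1] would come from the ray's
    intersection with the primitive's bounding ellipsoid (assumption)."""
    a = np.dot(w, d)
    c = np.exp(np.dot(w, o) + b)
    if abs(a) < 1e-12:  # ray perpendicular to w: integrand is constant
        return c * (t1 - t0)
    return c * (np.exp(a * t1) - np.exp(a * t0)) / a

# Sanity check against a midpoint-rule numerical integral.
o = np.array([0.1, -0.2, 0.3])
d = np.array([0.0, 0.0, 1.0])
w = np.array([0.5, -0.3, 0.8])
b = -1.0
t0, t1 = 0.2, 1.5

edges = np.linspace(t0, t1, 100001)
mids = 0.5 * (edges[:-1] + edges[1:])
dt = edges[1] - edges[0]
numeric = np.sum(np.exp((o + mids[:, None] * d) @ w + b)) * dt

analytic = analytic_line_integral(o, d, w, b, t0, t1)
assert abs(analytic - numeric) < 1e-6
```

The key design point is that the exactness of the integral removes the need for per-ray sampling: the splatting kernel for each primitive is evaluated once per ray rather than marched through.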

Expressivity Demonstration

Leveraging a flexible neural density field and analytically exact integration, our method faithfully reproduces complex geometries with a small number of primitives. To demonstrate this, we optimize varying numbers of neural and Gaussian primitives to approximate the density fields of several 3D geometries from multiple views. We observe that a few neural primitives suffice to represent complex and diverse geometries, such as the teapot’s curved handle, the smooth cut in the leaf, and the triangular leaf petiole. In contrast, Gaussian primitives are limited by their symmetric ellipsoidal shape and soft boundaries, making them unsuitable for accurately representing complex solid structures.

Click the video box to play or pause, and drag the sliders to compare them.

Synthetic Examples

We compare our method with 3DGS on the Synthetic NeRF dataset across varying memory budgets. Specifically, we resample the original meshes to target vertex counts and use them to initialize primitive positions for optimization, omitting primitive densification. We also include an “unlimited” setting, in which training follows the standard densification procedure with no primitive budget. Our method outperforms 3DGS under limited memory budgets and achieves performance comparable to 3DGS when no memory constraints are imposed.
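The exact resampling procedure is not detailed here; as a hypothetical stand-in, a common way to draw a target number of initialization points from a mesh is area-weighted uniform surface sampling. The function name and example geometry below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sample_mesh_surface(vertices, faces, n_points, seed=None):
    """Draw n_points uniformly (by area) on a triangle mesh surface."""
    rng = np.random.default_rng(seed)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Triangle areas from the cross product of two edge vectors.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    tri = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates via the square-root trick.
    r1, r2 = rng.random(n_points), rng.random(n_points)
    s = np.sqrt(r1)
    return ((1 - s)[:, None] * v0[tri]
            + (s * (1 - r2))[:, None] * v1[tri]
            + (s * r2)[:, None] * v2[tri])

# Example: 1000 candidate primitive centers on a unit square (two triangles).
verts = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2], [0, 2, 3]])
pts = sample_mesh_surface(verts, faces, 1000, seed=0)
```

Any such sampler yields positions at which primitives can be placed before optimization, matching the "no densification" protocol in which the primitive count is fixed up front.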


Real Examples

We present numerical comparisons on three real-scene datasets. For each method, we indicate whether it is splatting-based (Spl.) and/or neural (Neu.), and report image quality (PSNR↑, SSIM↑, LPIPS↓), rendering speed (in FPS), and memory usage (in MB).



Our method achieves high-fidelity reconstructions with image quality and runtime comparable to state-of-the-art splatting-based approaches with analytic primitives, while generally requiring substantially less memory. Compared to monolithic neural representations, our neural splatting-based representation is more than an order of magnitude faster. While T-3DGS attains a similar trade-off, its control mechanisms are orthogonal to our contribution, which focuses on the representation itself; "taming" our neural primitives can be expected to yield significant gains as well. Visually, our reconstructions are on par with state-of-the-art methods.

Drag the sliders to compare them.

We visualize both RGB renderings as well as color-coded primitives in real scenes.


BibTeX

@article{zhou2025splat,
  title={Splat the Net: Radiance Fields with Splattable Neural Primitives},
  author={Zhou, Xilong and Nguyen, Bao-Huy and Magne, Lo{\"\i}c and Golyanik, Vladislav and Leimk{\"u}hler, Thomas and Theobalt, Christian},
  journal={arXiv preprint arXiv:2510.08491},
  year={2025}
}