
Recent advances in radiance field reconstruction, such as 3D Gaussian splatting, enable real-time rendering with high visual fidelity on powerful graphics hardware. However, efficient online transmission and rendering across diverse platforms require drastic model simplification. DiffSoup represents radiance fields as a soup (i.e., a highly unstructured set) of a small number of triangles with neural textures and binary opacity. We show that this binary opacity is directly differentiable via stochastic opacity masking, enabling stable training without resorting to smooth, semi-transparent rasterization. DiffSoup can be rasterized with standard depth testing, enabling seamless integration into traditional graphics pipelines and interactive rendering on consumer-grade laptops and mobile devices.
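The core idea behind stochastic opacity masking can be illustrated with a minimal sketch: each triangle carries a continuous opacity logit, the forward pass samples a hard 0/1 mask from it, and the backward pass treats the sample as if it were the continuous probability (a straight-through-style estimator). The function name, the use of a sigmoid parameterization, and the straight-through gradient are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_binary_opacity(logits, rng):
    """Sketch of stochastic opacity masking (illustrative, not the paper's exact method).

    Forward: sample a hard binary mask, Bernoulli(p) with p = sigmoid(logits).
    Backward (conceptually): treat the mask as p, so gradients flow through
    the continuous opacity, d(mask)/d(logits) ~= p * (1 - p).
    """
    p = 1.0 / (1.0 + np.exp(-logits))                 # continuous opacity in (0, 1)
    mask = (rng.random(p.shape) < p).astype(p.dtype)  # hard 0/1 opacity sample
    return mask, p

# The hard samples are unbiased: their mean converges to the continuous
# opacity p, which is what makes the stochastic relaxation trainable.
logits = np.array([-2.0, 0.0, 3.0])
samples = np.stack(
    [stochastic_binary_opacity(logits, rng)[0] for _ in range(20000)]
)
p = 1.0 / (1.0 + np.exp(-logits))
print(bool(np.abs(samples.mean(axis=0) - p).max() < 0.02))
```

Because every rendered triangle is fully opaque or fully absent, the final model needs no alpha blending or depth sorting, which is what allows plain depth-tested rasterization at inference time.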