Lifting Architectural Constraints of Injective Flows

14 citations · ranked #1140 of 2297 papers in ICLR 2024

Abstract

Normalizing Flows explicitly maximize a full-dimensional likelihood on the training data. However, real data is typically only supported on a lower-dimensional manifold, leading the model to expend significant compute on modeling noise. Injective Flows fix this by jointly learning a manifold and the distribution on it. So far, they have been limited by restrictive architectures and/or high computational cost. We lift both constraints by a new efficient estimator for the maximum likelihood loss, compatible with free-form bottleneck architectures. We further show that naively learning both the data manifold and the distribution on it can lead to divergent solutions, and use this insight to motivate a stable maximum likelihood training objective. We perform extensive experiments on toy, tabular and image data, demonstrating the competitive performance of the resulting model.
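The abstract does not spell out the estimator, but one standard way to make a maximum likelihood loss tractable for a free-form encoder/decoder pair is Hutchinson-style stochastic trace estimation of the log-determinant gradient, using a stop-gradient so only the gradient (not the value) of that term is matched. The sketch below illustrates this idea in PyTorch; `f`, `g`, `beta`, the dimensions, and the exact loss weighting are all hypothetical, and this is not claimed to be the authors' exact estimator.

```python
import torch
import torch.nn as nn

# Hypothetical free-form encoder f: R^D -> R^d and decoder g: R^d -> R^D
# (plain bottleneck networks, no invertibility constraints).
D, d = 64, 8
f = nn.Sequential(nn.Linear(D, 128), nn.SiLU(), nn.Linear(128, d))
g = nn.Sequential(nn.Linear(d, 128), nn.SiLU(), nn.Linear(128, D))

def surrogate_ml_loss(x: torch.Tensor, beta: float = 10.0) -> torch.Tensor:
    """Sketch of a surrogate maximum likelihood loss for a bottleneck model.

    On the learned manifold, -log p(x) = 0.5*||z||^2 + 0.5*log det(J^T J)
    up to a constant, with J = g'(z). The gradient of the log-det term,
    tr(J^+ dJ/dtheta), is estimated with a single Hutchinson probe v:
    one JVP through the decoder gives J v, one VJP through the encoder
    approximates v^T J^+ (since f approximates g^{-1} near the manifold),
    and a stop-gradient makes the surrogate's gradient, not its value, correct.
    """
    z = f(x)
    x_hat = g(z)

    v = torch.randn_like(z)  # Hutchinson probe in latent space
    _, Jv = torch.autograd.functional.jvp(g, z, v, create_graph=True)
    _, vJplus = torch.autograd.functional.vjp(f, x_hat, v)
    # Value of this term is arbitrary; its gradient estimates tr(J^+ dJ).
    logdet_sur = (vJplus.detach() * Jv).sum(dim=1)

    nll_z = 0.5 * (z ** 2).sum(dim=1)      # standard normal latent prior
    recon = ((x - x_hat) ** 2).sum(dim=1)  # keeps f and g a consistent pair
    return (nll_z + logdet_sur + beta * recon).mean()

# Usage: loss = surrogate_ml_loss(torch.randn(32, D)); loss.backward()
```

Note the cost: one extra JVP and one extra VJP per batch, independent of the data dimension, which is what makes such an estimator compatible with free-form architectures.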

Citation History

Jan 28, 2026: 0
Feb 13, 2026: 14 (+14)