Abstract
Training-free guided generation is a widely used and powerful technique that allows the end user to exert further control over the generative process of flow/diffusion models. Generally speaking, two families of techniques have emerged for gradient-based guidance: posterior guidance (i.e., guidance via projecting the current sample to the target distribution via the target prediction model) and end-to-end guidance (i.e., guidance by backpropagating through the entire ODE solve). In this work, we show that these two seemingly separate families can actually be unified by viewing posterior guidance as a greedy strategy of end-to-end guidance. We explore the theoretical connections between these two families and provide an in-depth theoretical analysis of both techniques relative to the continuous ideal gradients. Motivated by this analysis, we then present a method for interpolating between these two families, enabling a trade-off between compute and the accuracy of the guidance gradients. Finally, we validate this work on several image inverse problems and on property-guided molecular generation.
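To make the contrast concrete, the following is a minimal sketch, not the paper's method, of the two gradient families on a hypothetical 1D rectified-flow toy model whose clean-sample prediction is linear (`x1_pred`, parameters `a`, `b`, the target value, and the step counts are all illustrative assumptions). Posterior guidance differentiates the loss through a one-step posterior prediction from the current sample; end-to-end guidance differentiates through the full Euler ODE solve, which the linear toy lets us do with an analytic Jacobian instead of autodiff.

```python
# Toy 1D "flow model": the network's clean-sample prediction is linear,
# x1_hat(x, t) = a * x + b  (hypothetical stand-in for a trained model).
a, b = 0.9, 0.1

def x1_pred(x, t):
    return a * x + b

def velocity(x, t):
    # Rectified-flow style velocity built from the posterior prediction.
    return (x1_pred(x, t) - x) / (1.0 - t)

def euler_solve(x0, t0=0.0, t1=0.99, n=10):
    # Integrate dx/dt = v(x, t) with Euler steps and track d x_final / d x0
    # analytically (possible here only because the toy model is linear).
    x, t, jac = x0, t0, 1.0
    dt = (t1 - t0) / n
    for _ in range(n):
        v = velocity(x, t)
        dvdx = (a - 1.0) / (1.0 - t)   # d v / d x for the linear toy model
        x = x + dt * v
        jac = jac * (1.0 + dt * dvdx)  # chain rule through one Euler step
        t += dt
    return x, jac

# Guidance loss: pull the generated sample toward an (assumed) target value.
target = 2.0
def loss_grad(x_final):
    # dL/dx for L = 0.5 * (x - target)^2
    return x_final - target

x_t = 0.5  # current sample mid-trajectory

# Posterior ("greedy") guidance: jump straight to the posterior prediction
# x1_hat and take the loss gradient through that single prediction step.
g_posterior = loss_grad(x1_pred(x_t, 0.0)) * a

# End-to-end guidance: backpropagate through the entire ODE solve
# (here, multiply by the accumulated Jacobian of the Euler trajectory).
x_final, jac = euler_solve(x_t)
g_end2end = loss_grad(x_final) * jac
```

Both gradients point the sample toward the target, but they generally differ in magnitude: the posterior gradient is cheap (one model evaluation) while the end-to-end gradient accounts for how the guidance perturbation propagates through every remaining solver step.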