Gradient Alignment in Physics-informed Neural Networks: A Second-Order Optimization Perspective

arXiv:2502.00604 · 38 citations · Ranked #143 of 5,858 papers in NeurIPS 2025

Abstract

Multi-task learning through composite loss functions is fundamental to modern deep learning, yet optimizing competing objectives remains challenging. We present new theoretical and practical approaches for addressing directional conflicts between loss terms, demonstrating their effectiveness in physics-informed neural networks (PINNs), where such conflicts are particularly difficult to resolve. Through theoretical analysis, we show how these conflicts limit first-order methods and how second-order optimization naturally resolves them through implicit gradient alignment. We prove that SOAP, a recently proposed quasi-Newton method, efficiently approximates the Hessian preconditioner, enabling breakthrough performance in PINNs: state-of-the-art results on 10 challenging PDE benchmarks, including the first successful application to turbulent flows at Reynolds numbers up to 10,000, with 2-10x accuracy improvements over existing methods. We also introduce a novel gradient alignment score that generalizes cosine similarity to multiple gradients, providing a practical tool for analyzing optimization dynamics. Our findings establish frameworks for understanding and resolving gradient conflicts, with broad implications for optimization beyond scientific computing.
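The abstract does not spell out the paper's exact gradient alignment score, only that it generalizes cosine similarity to more than two gradients. As a minimal illustrative sketch, one natural generalization is the mean pairwise cosine similarity over the per-loss-term gradients; the function name and formula below are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def gradient_alignment_score(grads):
    """Toy alignment score for a list of per-loss-term gradient vectors.

    Illustrative assumption: mean pairwise cosine similarity. For two
    gradients this reduces to ordinary cosine similarity; +1 means all
    gradients point the same way, negative values indicate conflict.
    """
    # Normalize each gradient to a unit vector (epsilon guards against zero norm).
    g = np.stack([gi / (np.linalg.norm(gi) + 1e-12) for gi in grads])
    sims = g @ g.T                            # pairwise cosine similarities
    n = len(grads)
    off_diag = sims[~np.eye(n, dtype=bool)]   # drop the self-similarity diagonal
    return off_diag.mean()

# Example: a PDE-residual gradient and a boundary-condition gradient in conflict.
g_pde = np.array([1.0, 0.0, 0.5])
g_bc = np.array([-1.0, 0.2, 0.4])
print(gradient_alignment_score([g_pde, g_bc]))  # negative value => directional conflict
```

A score like this can be logged during training to track how preconditioning (e.g., a second-order method such as SOAP) changes the degree of conflict between loss terms over time.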

Citation History

Jan 26, 2026: 0
Jan 26, 2026: 32 (+32)
Feb 3, 2026: 36 (+4)
Feb 13, 2026: 38 (+2)