Gauss Newton Method for Solving Variational Problems of PDEs with Neural Network Discretizations


Bibliographic Details
Published in: Journal of Scientific Computing, Vol. 100, No. 1, p. 17
Main Authors: Hao, Wenrui; Hong, Qingguo; Jin, Xianlin
Format: Journal Article
Language: English
Published: New York: Springer US, 01.07.2024 (Springer Nature B.V.)
ISSN: 0885-7474
eISSN: 1573-7691
DOI: 10.1007/s10915-024-02535-z

More Information
Summary: The numerical solution of differential equations using machine learning-based approaches has gained significant popularity. Neural network-based discretization has emerged as a powerful tool for solving differential equations by parameterizing a set of functions. Various approaches, such as the deep Ritz method and physics-informed neural networks, have been developed for numerical solutions. Training algorithms, including gradient descent and greedy algorithms, have been proposed to solve the resulting optimization problems. In this paper, we focus on the variational formulation of the problem and propose a Gauss–Newton method for computing the numerical solution. We provide a comprehensive analysis of the superlinear convergence properties of this method, along with a discussion on semi-regular zeros of the vanishing gradient. Numerical examples are presented to demonstrate the efficiency of the proposed Gauss–Newton method.
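
As a rough illustration of the kind of update the abstract describes, below is a minimal Python/JAX sketch of a damped Gauss–Newton iteration for a neural-network discretization of a 1D Poisson problem, posed as a nonlinear least-squares problem in the network parameters. This is not the authors' algorithm: the paper works with the variational (deep Ritz) formulation, whereas this sketch uses a strong-form residual purely to make the Gauss–Newton step (solve (J^T J + lambda I) delta = -J^T r, then update theta) concrete. The network architecture, collocation points, and damping value are illustrative assumptions.

    # Illustrative sketch only (not the authors' implementation): damped
    # Gauss-Newton for a small network u_theta fitted to -u'' = f on (0,1),
    # u(0) = u(1) = 0, with exact solution sin(pi x).
    import jax
    import jax.numpy as jnp
    from jax.flatten_util import ravel_pytree

    WIDTH = 8  # illustrative network width

    def init_params(key):
        k1, k2 = jax.random.split(key)
        return {
            "W1": 0.5 * jax.random.normal(k1, (WIDTH,)),
            "b1": jnp.zeros(WIDTH),
            "W2": 0.5 * jax.random.normal(k2, (WIDTH,)),
            "b2": jnp.zeros(()),
        }

    def u(params, x):
        # scalar input x -> scalar network output u_theta(x)
        h = jnp.tanh(x * params["W1"] + params["b1"])
        return jnp.dot(h, params["W2"]) + params["b2"]

    def residuals(flat_params, unravel, xs):
        # interior PDE residuals plus two boundary residuals, stacked into one vector
        params = unravel(flat_params)
        u_xx = jax.vmap(jax.grad(jax.grad(lambda x: u(params, x))))(xs)
        f = jnp.pi**2 * jnp.sin(jnp.pi * xs)
        interior = -u_xx - f
        boundary = jnp.array([u(params, 0.0), u(params, 1.0)])
        return jnp.concatenate([interior, boundary])

    def gauss_newton_step(flat_params, unravel, xs, damping=1e-3):
        res_fn = lambda p: residuals(p, unravel, xs)
        r = res_fn(flat_params)
        J = jax.jacfwd(res_fn)(flat_params)      # Jacobian of residuals w.r.t. parameters
        A = J.T @ J + damping * jnp.eye(flat_params.size)
        delta = jnp.linalg.solve(A, -(J.T @ r))  # damped normal equations
        return flat_params + delta

    key = jax.random.PRNGKey(0)
    flat, unravel = ravel_pytree(init_params(key))
    xs = jnp.linspace(0.05, 0.95, 32)            # interior collocation points
    for _ in range(20):
        flat = gauss_newton_step(flat, unravel, xs)
    print("residual norm:", float(jnp.linalg.norm(residuals(flat, unravel, xs))))

The damping term is a Levenberg–Marquardt-style regularization of the normal equations, one common way to handle the rank deficiency of J^T J for over-parameterized networks. The paper itself analyzes superlinear convergence and semi-regular zeros of the vanishing gradient in the variational setting; the sketch above only shows the basic structure of a Gauss–Newton update.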