Communications in Mathematical Sciences

Volume 21 (2023)

Number 6

Sobolev training for physics-informed neural networks

Pages: 1679 – 1705

DOI: https://dx.doi.org/10.4310/CMS.2023.v21.n6.a11

Authors

Hwijae Son (Department of Artificial Intelligence Software, Hanbat National University, Daejeon, Republic of Korea)

Jin Woo Jang (Department of Mathematics, Pohang University of Science and Technology, Pohang, Republic of Korea)

Woo Jin Han (Department of Mathematics, Pohang University of Science and Technology, Pohang, Republic of Korea)

Hyung Ju Hwang (Department of Mathematics, Pohang University of Science and Technology, Pohang, Republic of Korea)

Abstract

Physics-Informed Neural Networks (PINNs) are a promising application of deep learning. The smooth architecture of a fully connected neural network is well suited to approximating solutions of PDEs; the corresponding loss function can be designed intuitively and guarantees convergence for various classes of PDEs. However, the high computational cost of training neural networks has been considered a weakness of this approach. This paper proposes Sobolev-PINNs, a novel loss function for training PINNs that makes training substantially more efficient. Inspired by recent studies that incorporate derivative information into the training of neural networks, we develop a loss function that guides a neural network to reduce the error in the corresponding Sobolev space. Surprisingly, a simple modification of the loss function makes the training process similar to Sobolev Training, even though PINNs are not fully supervised learning tasks. We provide several theoretical justifications that the proposed loss functions upper-bound the error in the corresponding Sobolev spaces for the viscous Burgers equation and the kinetic Fokker–Planck equation. We also present simulation results showing that, compared with the traditional $L^2$ loss function, the proposed loss function guides the neural network to significantly faster convergence. Moreover, we provide empirical evidence that the proposed loss function, together with iterative sampling techniques, performs better in solving high-dimensional PDEs.
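The core idea described in the abstract, penalizing error in a Sobolev norm rather than only in $L^2$, can be illustrated with a toy sketch. This is not the paper's actual PINN loss (which acts on PDE residuals via automatic differentiation); it is a simple supervised analogue of an $H^1$-type loss, using finite differences on a known one-dimensional target, with all function names chosen here for illustration:

```python
import numpy as np

def h1_loss(u_pred, u_true, x):
    """Toy H^1-type (Sobolev) loss: mean-squared error of the values
    plus mean-squared error of the first derivatives.

    Illustrative only: the Sobolev-PINNs loss in the paper is built from
    PDE residuals, not from a known target function as done here.
    """
    value_err = np.mean((u_pred - u_true) ** 2)
    # Finite-difference derivatives stand in for automatic differentiation.
    du_pred = np.gradient(u_pred, x)
    du_true = np.gradient(u_true, x)
    deriv_err = np.mean((du_pred - du_true) ** 2)
    return value_err + deriv_err

x = np.linspace(0.0, 1.0, 101)
u_true = np.sin(np.pi * x)
# A prediction with a small but highly oscillatory error: tiny in L^2,
# but large in the H^1 seminorm because its derivative error is amplified.
u_pred = u_true + 0.01 * np.cos(10 * np.pi * x)

print("L^2 loss :", np.mean((u_pred - u_true) ** 2))
print("H^1 loss :", h1_loss(u_pred, u_true, x))
```

The example shows why a Sobolev-type loss can give a stronger training signal: the oscillatory error is nearly invisible to the plain $L^2$ loss but is penalized heavily through its derivative, which matches the abstract's claim that controlling the Sobolev-space error accelerates convergence.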

Keywords

physics-informed neural networks, Sobolev training, partial differential equations, neural networks

2010 Mathematics Subject Classification

35Q84, 65M99, 68Txx


Hwijae Son is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2022R1F1A1073732) and the research fund of Hanbat National University in 2022. Jin Woo Jang is supported by the German Science Foundation (DFG) CRC 1060, by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) NRF-2022R1G1A1009044, and by the Basic Science Research Institute Fund of Korea NRF-2021R1A6A1A10042944. Hyung Ju Hwang is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2022-00165268, NRF-2019R1A5A1028324).

Received 12 May 2022

Received revised 5 December 2022

Accepted 15 December 2022

Published 22 September 2023