Transfer Learning in Physics-Informed Neural Networks
Full Fine-Tuning, Lightweight Fine-Tuning, and Low-Rank Adaptation
- authored by
- Yizheng Wang, Jinshuai Bai, Mohammad Sadegh Eshaghi, Cosmin Anitescu, Xiaoying Zhuang, Timon Rabczuk, Yinghua Liu
- Abstract
AI for PDEs has garnered significant attention, particularly physics-informed neural networks (PINNs). However, PINNs are typically limited to solving a specific problem, and any change in problem conditions necessitates retraining. We therefore explore the generalization capability of transfer learning in the strong and energy forms of PINNs across different boundary conditions, materials, and geometries. The transfer learning methods we employ are full fine-tuning, lightweight fine-tuning, and low-rank adaptation (LoRA). Numerical experiments cover the Taylor-Green vortex in fluid mechanics and, in solid mechanics, functionally graded elastic materials as well as a square plate with a circular hole. The results demonstrate that full fine-tuning and LoRA significantly improve convergence speed while providing a slight enhancement in accuracy. Lightweight fine-tuning, however, performs suboptimally overall: both its accuracy and its convergence speed are inferior to those of full fine-tuning and LoRA.
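To illustrate the low-rank adaptation idea referenced in the abstract, the following is a minimal sketch of a LoRA update for a single dense layer. All variable names, shapes, and hyperparameters here are hypothetical assumptions for illustration, not taken from the paper: LoRA freezes the pretrained weight matrix W and trains only the low-rank factors B and A, so the adapted weight is W + (alpha / r) * B @ A.

```python
import numpy as np

# Hypothetical sketch of LoRA applied to one dense layer (e.g., inside a PINN).
# The pretrained weight W is frozen; only the low-rank factors A (r x n_in)
# and B (n_out x r) are trained, reducing the trainable parameter count
# from n_out * n_in to r * (n_in + n_out).
rng = np.random.default_rng(0)
n_in, n_out, r, alpha = 64, 64, 4, 8.0

W = rng.standard_normal((n_out, n_in))      # frozen pretrained weights
A = rng.standard_normal((r, n_in)) * 0.01   # small random init for A
B = np.zeros((n_out, r))                    # B = 0, so the update starts at zero

def forward(x, A, B):
    """Layer output using the LoRA-adapted weight W + (alpha / r) * B @ A."""
    return (W + (alpha / r) * B @ A) @ x

x = rng.standard_normal(n_in)
# With B initialized to zero, the adapted layer reproduces the frozen one.
assert np.allclose(forward(x, A, B), W @ x)

trainable = A.size + B.size
print(trainable, W.size)  # far fewer trainable parameters than full fine-tuning
```

In this sketch, initializing B to zero keeps the adapted network identical to the pretrained one at the start of transfer, which is one reason LoRA can converge quickly in the paper's experiments.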
- Organisation(s)
-
Institute of Photonics
- External Organisation(s)
-
Tsinghua University
Bauhaus-Universität Weimar
Queensland University of Technology
- Type
- Article
- Journal
- International Journal of Mechanical System Dynamics
- Volume
- 5
- Pages
- 212-235
- No. of pages
- 24
- ISSN
- 2767-1399
- Publication date
- 25.06.2025
- Publication status
- Published
- Peer reviewed
- Yes
- ASJC Scopus subject areas
- Mechanical Engineering, Control and Systems Engineering
- Electronic version(s)
-
https://doi.org/10.1002/msd2.70030 (Access: Open)
-