Description
Hello Dr. Lu, I greatly appreciate your work on the development of the DeepXDE library. I have been using the library in my research for some time now, and I recently found that the L-BFGS optimizer stops at exactly 30 iterations. I had been using it without issue until a few weeks ago, when I first observed this behaviour. This is quite strange, because the "early stopping" appears in code for which L-BFGS was previously functioning properly, i.e., I was able to run L-BFGS for as many iterations as I wanted. The environment I am using is Google Colab.
To make sure that I did not break something in my own code, I tried some of the demo code from the DeepXDE documentation, as well as code from your work on sampling strategies. The result is always the same: Adam runs for as many iterations as I want, but when training switches to L-BFGS, it stops at exactly 30 iterations. I tried tweaking the gtol and ftol parameters, but with no luck. The arithmetic precision is set to float64, as always. In short, without my changing anything in the code I am experimenting with, the L-BFGS optimizer now stops at 30 iterations. Any advice on this would be very much appreciated. Thank you.
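For reference, here is a minimal sketch of the kind of script where I see this behaviour (the toy problem, network size, and tolerance values are only illustrative, not the exact ones from my experiments):

```python
import deepxde as dde

# Precision set to float64, as always
dde.config.set_default_float("float64")

# Tolerances I tried tweaking (values here are illustrative)
dde.optimizers.set_LBFGS_options(ftol=1e-15, gtol=1e-12, maxiter=15000)

# A trivial Laplace problem, just to reproduce the optimizer behaviour
geom = dde.geometry.Interval(-1, 1)

def pde(x, y):
    # y'' = 0 on [-1, 1]; the exact solution is linear in x
    return dde.grad.hessian(y, x)

bc = dde.icbc.DirichletBC(geom, lambda x: x, lambda _, on_boundary: on_boundary)
data = dde.data.PDE(geom, pde, bc, num_domain=32, num_boundary=2)
net = dde.nn.FNN([1, 32, 32, 1], "tanh", "Glorot normal")
model = dde.Model(data, net)

# Adam trains for the full number of iterations, as expected
model.compile("adam", lr=1e-3)
model.train(iterations=5000)

# L-BFGS stops after exactly 30 iterations, regardless of the settings above
model.compile("L-BFGS")
losshistory, train_state = model.train()
```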