Iteration time #634
Thanks for reporting – this sounds like a memory leak due to caching. In any case, there might be a better work-around, which will likely also lead to better performance: you could create a stepper function directly:

```python
import pde
from pde import UnitGrid, ScalarField

grid = UnitGrid([32, 32])  # generate grid
state = ScalarField.random_uniform(grid, 0.2, 0.3)  # generate initial condition
eq = pde.DiffusionPDE()  # the PDE to solve
solver = pde.ExplicitSolver(eq, scheme="euler", adaptive=True)  # initialize a solver
stepper = solver.make_stepper(state, dt=1e-3)  # dt is the initial time step
```

The stepper can then be used inside your loop.
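Something like the following should work inside your loop (a sketch; `num_iterations` and `t_horizon` are placeholders for your control setup):

```python
t = 0  # current simulation time
for _ in range(num_iterations):  # num_iterations: placeholder for your outer loop
    # advance the state in place over one control horizon;
    # the stepper returns the time it actually reached
    t = stepper(state, t, t + t_horizon)
    # ... apply your control input to state.data here ...
```

Since the solver and the compiled stepper are created only once, the compilation and setup cost is not paid again on every pass through the loop.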
My question fits well with this post because I am running into a memory issue when solving the PDEs in a loop. You can already observe this behavior with the simplified code below. As you mentioned before, I also assume some cache isn't being freed. How can I solve the issue? For me, the main benefit of using eq.solve() is that it has the tracker argument. For my calculation this argument is essential because it saves the states at specific times during the calculation to an external file. In addition, from reading the documentation it is not completely clear to me why the stepper function can speed up calculations.
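For reference, the simplified loop is roughly this (a minimal sketch with placeholder values, not my exact script):

```python
import pde

grid = pde.UnitGrid([32, 32])  # placeholder grid size
state = pde.ScalarField.random_uniform(grid, 0.2, 0.3)
eq = pde.DiffusionPDE()

# calling eq.solve() repeatedly; memory usage keeps growing over the iterations
for _ in range(1000):
    state = eq.solve(state, t_range=1.0, dt=1e-3)
```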
I am using py-pde in a non-standard way, trying to explore (through reinforcement learning) controllers for PDE/distributed parameter systems. To allow the controller to act on the system, I loop over the PDE with a short time horizon, apply the control inputs to the resulting state, and use that state as the basis for another short PDE run, as in the following pseudo code:
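(This is only a schematic version of my actual loop; `num_steps`, `t_horizon`, and `control_input` are placeholders for the RL setup.)

```python
import pde

grid = pde.UnitGrid([32, 32])
state = pde.ScalarField.random_uniform(grid, 0.2, 0.3)
eq = pde.DiffusionPDE()

for step in range(num_steps):  # num_steps: number of RL interaction steps
    # run the PDE for a short time horizon
    state = eq.solve(state, t_range=t_horizon, dt=1e-3)
    # let the controller act on the resulting field values
    state.data += control_input(state)  # control_input: placeholder for the RL policy
```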
The problem is that the time py-pde takes to run eq.solve (currently using Diffusion) grows (by roughly 5-10x) as I go through thousands of these iterations, and I cannot figure out why. It starts at about 30 seconds for 50 iterations, and after a couple hundred such loops it takes around 100-120 seconds to complete 50 iterations. I need this many iterations for training the reinforcement learning agent. I have tried garbage collection and deleting variables like grid, state, etc. The only way I have found to get back down to 30 s per 50 iterations is to restart the kernel in my notebook.
I know py-pde wasn't designed to be used (abused) like this, but is there something I can do to keep the iteration time down? Is there some variable I can delete, or is there a memory leak somewhere? I can't figure out why the iteration time grows, or find a more workflow-friendly way to restore it than restarting the kernel.