Hi,
I am using a slightly modified version of visualize_saliency to retrieve, for one image, the gradients of every neuron in the last dense layer of my model. The dense layer has 4096 neurons, so I call visualize_saliency 4096 times for a single image. For the purpose of what I am doing, at each iteration I set the weights of the neuron whose gradients I want to retrieve to 1 and the weights of all the other neurons to 0.
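Roughly, the loop looks like this (a simplified sketch rather than my exact code; `model`, `img`, and the layer index are placeholders, and in practice I call my modified visualize_saliency):

```python
import numpy as np
from vis.visualization import visualize_saliency

layer_idx = -1                      # index of the last dense layer (placeholder)
dense = model.layers[layer_idx]
W, b = dense.get_weights()          # W has shape (input_dim, 4096)

all_grads = []
for n in range(W.shape[1]):         # one pass per neuron -> 4096 calls
    W_mask = np.zeros_like(W)
    W_mask[:, n] = 1.0              # weights of neuron n set to 1, all others to 0
    dense.set_weights([W_mask, b])  # bias left unchanged
    grads = visualize_saliency(model, layer_idx, filter_indices=n, seed_input=img)
    all_grads.append(grads)

dense.set_weights([W, b])           # restore the original weights
```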
However, while it runs, memory usage keeps increasing and the iterations slow down:
After 4 minutes running: 4.71% completed - 80.93 min remaining
After 8 minutes running: 7.52% completed - 98.10 min remaining
After 12 minutes running: 9.58% completed - 113.62 min remaining
After 16 minutes running: 11.25% completed - 126.22 min remaining
...
The remaining time seems to increase quadratically over time.
I saw on /issues/71 that Hommoner found the leak happens because every time the line `opt = Optimizer(input_tensor, losses, wrt_tensor=penultimate_output, norm_grads=False)` is called, a new tensor is added to the TensorFlow graph.
The suggested workaround is to create `opt` only once and keep it in memory.
However, I cannot do that, because I change the weights of my last dense layer at every iteration and therefore need to call Optimizer() at every iteration...
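If I understand that issue correctly, the default graph keeps growing with each call, which would also explain why the iterations get slower. Something like this untested sketch (reusing the same input_tensor, losses and penultimate_output as in the quoted line) should show the operation count increasing:

```python
import tensorflow as tf
from vis.optimizer import Optimizer

# Each Optimizer() call registers new gradient tensors/ops in the default graph,
# so the op count should keep growing across iterations.
for i in range(3):
    opt = Optimizer(input_tensor, losses,
                    wrt_tensor=penultimate_output, norm_grads=False)
    print('iteration %d: %d ops in graph'
          % (i, len(tf.get_default_graph().get_operations())))
```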
Any suggestions on how to resolve this?