Replace @adjoint with rrule #1863
Conversation
This ought to make debugging easier as well. A potential next step would be moving more bits out to NNlib(CUDA).
src/layers/recurrent.jl (outdated)

    # TODO move to ChainRulesCore?
    @adjoint function Broadcast.broadcasted(f::Recur, args...)
      Zygote.∇map(__context__, f, args...)
    end
I think the point of this is that the gradient for map reverses iteration order. That's a little dodgy, since map makes no such promise (and IIRC it only happens for some argument types: vectors but not 1-column matrices?). Should we just make broadcasting an RNN within a gradient an error?
Moved to here, which I think should give the same results, but also warn on the forward pass:
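The code at the new location isn't reproduced in this thread. Purely as an illustration, a warn-then-fall-back-to-map rule could be sketched roughly like this, assuming it lives inside Flux (where Recur is defined) and an AD backend that supports rrule_via_ad; this is not necessarily what the linked change does:

```julia
using ChainRulesCore

# Sketch only: warn whenever a Recur is broadcast, then differentiate the call
# as `map`, which is what the old @adjoint effectively did via Zygote.∇map.
function ChainRulesCore.rrule(cfg::RuleConfig{>:HasReverseMode},
                              ::typeof(Broadcast.broadcasted), f::Recur, args...)
    @warn "Broadcasting a recurrent layer; gradient iteration order is not guaranteed" maxlog=1
    y, map_back = rrule_via_ad(cfg, map, f, args...)
    # Drop map's own (non-)tangent and substitute one for `broadcasted` itself.
    broadcasted_pullback(Δ) = (NoTangent(), Base.tail(map_back(Δ))...)
    return y, broadcasted_pullback
end
```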
src/utils.jl (outdated)

    -@nograd modules
    +ChainRulesCore.@non_differentiable modules(::Any) # is this correct?
If the intention of modules is that something roughly like loss + sum(norm, modules(m)) should work, then doesn't this need to pass gradients through?
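The usage pattern being asked about is roughly the following (made-up model and loss, only to show where the penalty's gradient would be lost):

```julia
using Flux, LinearAlgebra

m = Chain(Dense(2, 3, relu), Dense(3, 1))

# Regularise the weights of every Dense layer found by modules(m).
# If modules is @non_differentiable, the pullback through modules(m) is cut off,
# so an explicit gradient with respect to m loses the penalty term entirely.
penalty(m) = sum(norm(l.weight) for l in Flux.modules(m) if l isa Dense)
loss(m, x, y) = Flux.mse(m(x), y) + penalty(m)
```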
Good catch. I have a sinking feeling this might be one of those things that works with implicit gradients but not with explicit ones.
Likewise. Xref FluxML/Functors.jl#35 I guess -- is fmapreduce(x -> norm(x.weight), +, m; exclude = x -> x isa Dense) where we want to end up?
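fmapreduce doesn't exist in Functors.jl yet (that's what the xref proposes), but one possible meaning, sketched on top of fcollect, would be:

```julia
using Functors, LinearAlgebra

# Hypothetical sketch: collect every node of the model, keep those matching
# `exclude` (naming follows the call above), then map-and-reduce over them.
# fcollect walks the model and, IIRC, deduplicates shared nodes via a cache.
fmapreduce(f, op, x; exclude = Functors.isleaf) =
    mapreduce(f, op, filter(exclude, fcollect(x)))

# e.g. fmapreduce(l -> norm(l.weight), +, m; exclude = l -> l isa Dense)
```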
That would be one way of doing things. The big question with any approach is how to prevent AD from balking on the cache mutation + lookup.
    @@ -23,6 +23,9 @@ end
       res, Δ -> (nothing, Zygote.unbroadcast(x, xlogy.(Δ, y)), Zygote.unbroadcast(y, Δ .* x ./ y))
     end
    +
    +ChainRulesCore.@scalar_rule xlogy(x, y) (log(y), x/y) # is this good enough?
    +ChainRulesCore.@scalar_rule xlogx(x) (log(x) + true)
Can't literally translate the broadcasted(::typeof(xlogy), ...) rule to a Zygote-free world, as unbroadcast (which sums as necessary for mismatched shapes) belongs to Zygote. I hope that Diffractor's broadcasting will work via @scalar_rule. But the rule as written is slightly different, as it doesn't treat Δ==0 as a strong zero when y==0. Does that matter?
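Concretely, the difference being asked about shows up when y == 0 meets a zero cotangent (xlogy paraphrased here, not Flux's exact definition):

```julia
# Paraphrase of xlogy, with the branch that makes x == 0 a strong zero:
xlogy(x, y) = iszero(x) ? zero(x) : x * log(y)

Δ, y = 0.0, 0.0
xlogy(Δ, y)   # 0.0 -- the old broadcast rule's ∂x: a zero cotangent stays zero
Δ * log(y)    # NaN -- the @scalar_rule partial log(y), once multiplied by Δ, is 0 * -Inf
```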
Are these needed if https://github.com/JuliaStats/LogExpFunctions.jl/blob/c8a4c28ffe7b6e4f8d5253e01cef091bb8d2f42c/src/chainrules.jl#L1-L2 are already loaded through a transitive dep?
Flux could switch to those. It has branches not ifelse, and different NaN behaviour; not sure if that matters (sketched below). And 5 dependencies.
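From memory, the difference is roughly the following; both definitions are paraphrased and worth checking against the two packages, and the names exist only for the comparison:

```julia
# Flux-style: ifelse, so a zero x wins even when y is NaN
flux_xlogy(x, y) = (r = x * log(y); ifelse(iszero(x), zero(r), r))

# LogExpFunctions-style: a branch that lets a NaN in y propagate
lef_xlogy(x, y) = (r = x * log(y); iszero(x) && !isnan(y) ? zero(r) : r)

flux_xlogy(0.0, NaN)  # 0.0
lef_xlogy(0.0, NaN)   # NaN
```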
But for now perhaps it's evidence that the scalar rules are ok?
Are you looking to do some testing soon with this and Diffractor/not Zygote? Otherwise I think it would be cleaner to have a separate PR that removes all of the code above in favour of https://github.com/FluxML/Zygote.jl/blob/master/src/lib/logexpfunctions.jl and the @scalar_rules in LogExpFunctions.
I can remove these rules for now if you prefer. The functions ought to be differentiable without special rules, mostly. The PR just wants to translate as many things as possible over for now.
I said:
as unbroadcast (which sums as necessary for mismatched shapes)
This is wrong, because _check_sizes demands equal size, simplifying the broadcast:
https://github.com/FluxML/Flux.jl/blob/master/src/losses/utils.jl#L27
While I guess these broadcasts aren't so performance-sensitive (since there will only be one, for the whole model), it would be nice if all loss functions were second-differentiable. Whether that already works, or needs to be done by fiddling with broadcasting, or rules for the loss functions themselves, I don't know.
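One way to check, as a sketch (whether this currently runs is exactly the open question):

```julia
using Flux, Zygote

ŷ, y = softmax(randn(Float32, 3, 5)), softmax(randn(Float32, 3, 5))

# First derivative of the loss with respect to the prediction:
g(p) = Zygote.gradient(q -> Flux.Losses.crossentropy(q, y), p)[1]

# Second derivative: differentiate a scalar function of the first gradient.
# If a loss (or its rule) isn't second-differentiable, this is where it throws.
Zygote.gradient(p -> sum(abs2, g(p)), ŷ)[1]
```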
    @@ -1,6 +1,6 @@
     istraining() = false
     
    -@adjoint istraining() = true, _ -> nothing
    +ChainRulesCore.rrule(::typeof(istraining)) = true, _ -> (NoTangent(),)
I'm surprised there isn't an equivalent for this in ChainRules already.
Somewhere I was writing a function like CRC.order().back > 0 ... would be good to have.
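The istraining trick generalises to any 'am I inside a reverse pass?' query: a function that is false by default, with an rrule whose primal is true. A minimal sketch, under a hypothetical name:

```julia
using ChainRulesCore

# false when run normally; true when an rrule-aware AD (Zygote via ChainRules,
# Diffractor, ...) is differentiating the surrounding code.
within_gradient(x) = false
ChainRulesCore.rrule(::typeof(within_gradient), x) =
    true, _ -> (NoTangent(), NoTangent())
```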
bors try
Merge conflict.
If you wouldn't mind rebasing, we can get this merged, assuming that fixes the CUDA tests.
Co-authored-by: Brian Chen <[email protected]>
To allow use without Zygote, we should move to defining rules via ChainRules.
Most of these are mechanical, but perhaps deserve a quick look to see if there are tests. Comments on particular ones below.
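For reference, the mechanical part of the translation generally looks like this (toy function, not a line from this diff):

```julia
using ChainRulesCore

foo(x) = x^2   # stand-in for whatever function the old @adjoint covered

# Zygote-only style being removed:
#   @adjoint foo(x) = foo(x), Δ -> (2Δ * x,)

# ChainRules style being adopted; note the extra NoTangent() for `foo` itself:
function ChainRulesCore.rrule(::typeof(foo), x)
    foo_pullback(Δ) = (NoTangent(), 2Δ * x)
    return foo(x), foo_pullback
end
```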