
Replace @adjoint with rrule #1863

Merged: 4 commits from the chainrules branch into FluxML:master, Feb 24, 2022
Conversation

mcabbott (Member) commented Feb 5, 2022

To allow use without Zygote, we should move to defining rules via ChainRules.

Most of these changes are mechanical, but they perhaps deserve a quick look to check whether there are tests. Comments on particular ones are below.
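
For a sense of what "mechanical" means, a typical change looks like the istraining conversion further down:

# Zygote-specific rule, before:
@adjoint istraining() = true, _ -> nothing
# AD-agnostic rule via ChainRulesCore, after:
ChainRulesCore.rrule(::typeof(istraining)) = true, _ -> (NoTangent(),)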

ToucheSir (Member)

This ought to make debugging easier as well. A potential next step would be moving more bits out to NNlib(CUDA).

src/cuda/cudnn.jl (resolved)
src/layers/normalise.jl (resolved)
Comment on lines 438 to 441
# TODO move to ChainRulesCore?
@adjoint function Broadcast.broadcasted(f::Recur, args...)
  Zygote.∇map(__context__, f, args...)
end
mcabbott (Member, Author)

I think the point of this is that the gradient for map reverses iteration order. That's a little dodgy, since map makes no such promise (and IIRC it only happens for some argument types, vectors but not 1-column matrices?). Should we just make broadcasting an RNN within a gradient an error?
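
For context, the broadcast this rule intercepts looks something like the following (illustrative sketch; the constructor syntax is the Flux 0.12-era one):

using Flux

rnn = Flux.RNN(2, 3)                   # a stateful Recur wrapper
seq = [rand(Float32, 2) for _ in 1:4]  # a short input sequence
ys  = rnn.(seq)                        # this broadcast is what the rule above intercepts,
                                       # routing its gradient through Zygote.∇map so the
                                       # pullback visits the timesteps in reverse order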

mcabbott (Member, Author)

Moved to here, which I think should give the same results, but also warn on the forward pass:

https://github.com/FluxML/Flux.jl/pull/1863/files#diff-5b453f8f7fb34afbebfc6f688a8209aa0532c8b1c3e95393f97afcbc37a473e7R41-R46
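
Roughly, the replacement looks like this (a sketch based on the description above, not the exact diff; the warning mechanism and message are assumptions):

function Broadcast.broadcasted(f::Recur, args...)
  # warn on the forward pass, then defer to map, so the gradient path
  # matches what the old ∇map-based adjoint produced
  Base.depwarn("broadcasting a Recur is deprecated; use `map(rnn, sequence)` instead", :broadcasted)
  map(f, args...)
end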

src/utils.jl Outdated
Comment on lines 791 to 792
@nograd modules
ChainRulesCore.@non_differentiable modules(::Any) # is this correct?
mcabbott (Member, Author)

If the intention of modules is that something roughly like loss + sum(norm, modules(m)) should work, then doesn't this need to pass gradients through?
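
The use case in question, roughly (a sketch; the Dense-only filter and mse loss are just for illustration):

using Flux, LinearAlgebra

penalty(m) = sum(norm(l.weight) for l in Flux.modules(m) if l isa Dense)

m = Chain(Dense(2, 3, relu), Dense(3, 1))
loss(x, y) = Flux.Losses.mse(m(x), y) + penalty(m)
# for the penalty to show up in explicit gradients of m, the call to
# modules(m) has to pass gradients through rather than being cut off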

ToucheSir (Member)

Good catch. I have a sinking feeling this might be one of those things that works with implicit gradients but not with explicit ones.

mcabbott (Member, Author)

Likewise. Xref FluxML/Functors.jl#35 I guess -- is fmapreduce(x -> norm(x.weight), +, m; exclude = x -> x isa Dense) where we want to end up?

ToucheSir (Member)

That would be one way of doing things. The big question with any approach is how to prevent AD from balking at the cache mutation + lookup.

@@ -23,6 +23,9 @@ end
res, Δ -> (nothing, Zygote.unbroadcast(x, xlogy.(Δ, y)), Zygote.unbroadcast(y, Δ .* x ./ y))
end

ChainRulesCore.@scalar_rule xlogy(x, y) (log(y), x/y) # is this good enough?
ChainRulesCore.@scalar_rule xlogx(x) (log(x) + true)
mcabbott (Member, Author)

Can't literally translate the broadcasted(::typeof(xlogy), ...) rule to a Zygote-free world, as unbroadcast (which sums as necessary for mismatched shapes) belongs to Zygote.

I hope that Diffractor's broadcasting will work via @scalar_rule. But the rule as written is slightly different, as it doesn't treat Δ == 0 as a strong zero when y == 0. Does that matter?
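
For comparison, the @scalar_rule above corresponds roughly to this hand-written rule (sketch), which makes the strong-zero question concrete:

function ChainRulesCore.rrule(::typeof(xlogy), x, y)
  xlogy_pullback(Δ) = (NoTangent(), Δ * log(y), Δ * x / y)
  return xlogy(x, y), xlogy_pullback
end
# at y == 0 the x-partial is Δ * log(0) == Δ * -Inf, i.e. NaN even when Δ == 0,
# whereas the old rule's xlogy.(Δ, y) returns exactly 0 there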

mcabbott (Member, Author)

Flux could switch to those. LogExpFunctions uses branches rather than ifelse, and has different NaN behaviour; not sure whether that matters:

https://github.com/JuliaStats/LogExpFunctions.jl/blob/584442d9bd4c4abadfb5daed86cefa5fabfff645/src/basicfuns.jl#L17-L30

And 5 dependencies.

mcabbott (Member, Author)

But for now perhaps it's evidence that the scalar rules are ok?

ToucheSir (Member)

Are you looking to do some testing soon with this and Diffractor/not Zygote? Otherwise I think it would be cleaner to have a separate PR that removes all of the code above in favour of https://github.com/FluxML/Zygote.jl/blob/master/src/lib/logexpfunctions.jl and the @scalar_rules in LogExpFunctions.

mcabbott (Member, Author)

I can remove these rules for now if you prefer. The functions ought to be differentiable without special rules, mostly. The PR just wants to translate as many things as possible over for now.

mcabbott (Member, Author)

I said:

> as unbroadcast (which sums as necessary for mismatched shapes)

This is wrong, because _check_sizes demands equal sizes, simplifying the broadcast:

https://github.com/FluxML/Flux.jl/blob/master/src/losses/utils.jl#L27

While I guess these broadcasts aren't so performance-sensitive (since there will only be one, for the whole model), it would be nice if all the loss functions were second-differentiable. Whether that already works, or needs to be done by fiddling with broadcasting, or with rules for the loss functions themselves, I don't know.
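
One quick way to probe that (a sketch, not part of this PR; whether it runs is exactly the open question):

using Flux, Zygote

y = Float32[0, 1, 0]
ŷ = Float32[0.2, 0.7, 0.1]
# second derivatives of the loss w.r.t. the prediction; this errors if the
# broadcast rule behind xlogy is not itself differentiable
H = Zygote.hessian(v -> Flux.Losses.binarycrossentropy(v, y), ŷ)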

mcabbott force-pushed the chainrules branch 2 times, most recently from 923eca0 to 0599968, on February 5, 2022
mcabbott marked this pull request as ready for review on February 14, 2022
@@ -1,6 +1,6 @@
istraining() = false

@adjoint istraining() = true, _ -> nothing
ChainRulesCore.rrule(::typeof(istraining)) = true, _ -> (NoTangent(),)
ToucheSir (Member)

I'm surprised there isn't an equivalent for this in ChainRules already.

mcabbott (Member, Author)

Somewhere I was writing a function like CRC.order().back > 0... would be good to have.
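
For reference, the trick being translated here works by the rrule overriding the value seen under AD: plain calls return false, calls inside a gradient return true. An illustrative use (sketch; helper name hypothetical):

function maybe_dropout(x, p)
  istraining() || return x                # ordinary forward pass: identity
  mask = rand(Float32, size(x)...) .> p   # inside a gradient call, the rrule above
  return x .* mask ./ (1 - p)             # makes istraining() return true
end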

ToucheSir (Member)

bors try

bors bot (Contributor) commented Feb 24, 2022

try

Merge conflict.

ToucheSir closed this Feb 24, 2022
ToucheSir reopened this Feb 24, 2022
ToucheSir (Member)

If you wouldn't mind rebasing, we can get this merged assuming that fixes the cuda tests.

src/cuda/cuda.jl (outdated, resolved)
Co-authored-by: Brian Chen <[email protected]>
mcabbott merged commit 525b645 into FluxML:master on Feb 24, 2022
mcabbott deleted the chainrules branch on February 24, 2022