htlcswitch: use fn.GoroutineManager #9140
base: master
Conversation
Force-pushed from 8810118 to 88fbc4b
Force-pushed from 8395cca to e001027
@starius - I think these unit test failures are related to this PR - maybe take a look at fixing those up first & then re-ping reviewers when ready?
I couldn't reproduce the race condition with the attached test, do you have an error trace of it?
htlcswitch/switch.go (Outdated)
	})
	// The switch shutting down is signaled by closing the channel.
	if err != nil {
I think the switch shutting down and an error from the goroutine manager are different?
GoroutineManager can only return an error if it is stopping. I added a check just in case:
// The switch shutting down is signaled by closing the channel.
if errors.Is(err, fn.ErrStopping) {
close(resultChan)
} else if err != nil {
return nil, fmt.Errorf("got an unexpected error from "+
"GoroutineManager: %w", err)
}
Yeah, I think this is an API design flaw. My latest review adds a suggestion. Basically: I don't think the caller should need to know that the only error the goroutine manager can return is ErrStopping. I also don't think that that is actually an error - more just a state we want to handle. See latest review for more details.
htlcswitch/switch.go (Outdated)
	}()
	})
	if err != nil {
		return
don't think this should return?
Fixed, added a comment. Now this section looks like this:
// When this time ticks, then it indicates that we should
// collect all the forwarding events since the last interval,
// and write them out to our log.
case <-s.cfg.FwdEventTicker.Ticks():
// The error of Go is ignored: if it is shutting down,
// the loop will terminate on the next iteration, in
// s.gm.Done case.
_ = s.gm.Go(func(ctx context.Context) {
err := s.FlushForwardingEvents()
if err != nil {
log.Errorf("unable to flush "+
"forwarding events: %v", err)
}
})
I pushed branch reproduce-race to my fork. In that branch:
Force-pushed from 7cb95ef to 662c47b
Test failure was caused by an extra call to s.Stop in
Thanks for the updates @starius!
Logic looks good, but I have some opinions about the API of the fn.Go call that I think are worth discussing before we merge. Would love to hear what @yyforyongyu & @ProofOfKeags think too.
htlcswitch/switch.go (Outdated)
	})
	// The switch shutting down is signaled by closing the channel.
	if errors.Is(err, fn.ErrStopping) {
		close(resultChan)
	} else if err != nil {
		return nil, fmt.Errorf("got an unexpected error from "+
			"GoroutineManager: %w", err)
	}
related to my comment above: from what I can tell, the only error that gm.Go(..) will ever return is fn.ErrStopping or nil. So to me, it returning this is not actually an error but more just a "state transition" we should be aware of. Which I think is a point towards handling this explicitly in the actual callback passed to Go via a quit channel as mentioned above.
If we do want some idea of whether the goroutine manager did its thing from outside the call-back (because it could also be that the call-back never gets called), then I think a simple bool could do the trick, since it basically just communicates "handled / not handled due to shutdown".
What's the prio on this? I want to review but I need to balance with other stuff.
Not critical. You can focus on P0 stuff before addressing this.
Sorry, a bit late in the game, but is there an issue page describing what the issue is?
I also don't understand the struct GoroutineManager - it looks like it's putting a mutex to guard the wait group operations?
My instinct is this is solving the wrong problem - we should always know when/where we call wg.Add and wg.Wait; if not, we should refactor our code so we always know when we call wg.Add and wg.Wait. I guess other people have run into this issue before too.
htlcswitch/switch.go (Outdated)
@@ -245,8 +246,14 @@ type Switch struct {
	// This will be retrieved by the registered links atomically.
	bestHeight uint32

	wg sync.WaitGroup
	quit chan struct{}
	// TODO(yy): remove handleLocalResponseWG, once handleLocalResponse runs
hmm, why is it my TODO 😂
handleLocalResponseWG was added because handleLocalResponse is called in a goroutine, which can't be tracked using GoroutineManager. There is an existing TODO(yy) to remove the goroutine running handleLocalResponse. I copy-pasted that TODO here, since if that TODO is fixed, then handleLocalResponseWG is not needed, so this TODO is also fixed :-)
This was requested in lightningnetwork#9140 (comment)
htlcswitch/switch.go (Outdated)
	// unclear if it is safe to skip handleLocalResponse.
	handleLocalResponseWG sync.WaitGroup
I don't believe this is necessary: the composition of waitgroups is equivalent to the waitgroup of the composition of threads when the wait conditions are always called in conjunction with one another.
We discussed this offline.
The main reason for the special handling of handleLocalResponse was that, in the old version, it was launched unconditionally (even when the switch was stopping), and switching to GoroutineManager introduces a change in behavior. However, we need to ensure that the effects are idempotent to prevent inconsistent states in the event of power failures. If this requirement is met, the behavior change should not pose an issue.
Therefore, I removed the special treatment of handleLocalResponse, along with the associated WaitGroup and TODOs. It is now managed entirely by GoroutineManager.
@yyforyongyu Thank you for the suggestion!
In this setup, I agree that, ideally, the code should be refactored into an event-loop style, centralizing all goroutine launches and state changes within a single goroutine and using channels to transmit data to and from it. This approach aligns with the patterns we follow in other packages. However, implementing such a change would require significant time and extensive modifications to the package. What are your thoughts?
I squashed the last commit (deeacc6), rebased and used GoroutineManager from fn v2. Fortunately fn v1 and fn v2 can be used simultaneously!
Thanks for the updates, I think things look good but I think we should change the API of the goroutine manager a bit more. See my suggestion here.
htlcswitch/switch.go (Outdated)
@@ -836,7 +847,8 @@ func (s *Switch) logFwdErrs(num *int, wg *sync.WaitGroup, fwdChan chan error) {
			log.Errorf("Unhandled error while reforwarding htlc "+
				"settle/fail over htlcswitch: %v", err)
		}
-		case <-s.quit:
+		case <-ctx.Done():
something doesn't feel right here. It feels like we are mixing the use of caller ctx and quit channels. Here, they mean the same thing: so ie, why can't we just listen on s.gm.Done() here (ie, s.quit)? Because this ctx that is now being passed in here is not coming from the caller of ForwardPackets and is instead coming from the creator of the gm. I think the issue is stemming from the fact that we are passing a context to the constructor of the goroutine manager, which is an anti-pattern. I'm going to see if I can rework the goroutine manager a bit to work around this anti-pattern.
Thanks! I replaced ctx.Done() with s.gm.Done() here and also inside a goroutine launched by GetAttemptResult.
htlcswitch/switch.go (Outdated)
@@ -368,8 +370,11 @@ func New(cfg Config, currentHeight uint32) (*Switch, error) {
		return nil, err
	}

	gm := fn2.NewGoroutineManager(context.Background())
It's an anti-pattern to pass a context into a constructor. I think we should try to avoid this as much as possible. I'll put up a suggested diff for the goroutine manager 👍
Thanks! I updated the fn dependency and used the new API!
Updated protofsm package for changed API of fn.GoroutineManager.
Replaced the use of s.quit and s.wg with s.gm (GoroutineManager). This fixes a race condition between s.wg.Add(1) and s.wg.Wait(). Also added a test which used to fail under `-race` before this commit.
@@ -85,6 +86,9 @@ var (
	// fail payments if they increase our fee exposure. This is currently
	// set to 500m msats.
	DefaultMaxFeeExposure = lnwire.MilliSatoshi(500_000_000)

	// background is a shortcut for context.Background.
	background = context.Background()
I don't think we should do this. Rather use a context.TODO() where needed.
if you rebase on top of #9344, then we can also add a context guard here and then we only need a single context.TODO() in Start()
	// background is a shortcut for context.Background.
	background = context.Background()
we should not do this.
Consider rebasing on top of #9342, which handles the bump to the correct fn version and handles updating the statemachine to thread contexts through correctly.
	var n *networkResult
	select {
	case n = <-nChan:
-	case <-s.quit:
+	case <-s.gm.Done():
I think it is not great to refer to s.gm from inside a call-back that is called from s.gm (it screams "deadlock"). Rather just use the ctx provided to the callback, which will be cancelled when the gm is shut down (ie, when gm.Done() would have returned anyways).
	// The error of Go is ignored: if it is shutting down,
	// the loop will terminate on the next iteration, in
	// s.gm.Done case.
	_ = s.gm.Go(background, func(ctx context.Context) {
let htlcForwarder take a context and pass in a context in there from the goroutine which is starting it
@@ -3020,8 +3042,12 @@ func (s *Switch) handlePacketSettle(packet *htlcPacket) error {
	// NOTE: `closeCircuit` modifies the state of `packet`.
	if localHTLC {
		// TODO(yy): remove the goroutine and send back the error here.
-		s.wg.Add(1)
-		go s.handleLocalResponse(packet)
+		ok := s.gm.Go(background, func(ctx context.Context) {
Rather pass in a context to the calling func. Same for all the others.
@Crypt-iQ: review reminder
Change Description
Replaced the use of s.quit and s.wg with s.gm (GoroutineManager). WaitGroup is still needed to wait for handleLocalResponse: if it was switched to s.gm, then it may skip running, which has unclear consequences. After handleLocalResponse is changed to run without a goroutine, we can remove WaitGroup completely.
This fixes a race condition between s.wg.Add(1) and s.wg.Wait().
.Steps to Test
I added a test which used to fail under -race before this commit. This test crashes with a data race if I undo the changes to the implementation of the switch.