Watch: restarts tasks only when at least one watched file changes. #2493

trulede wants to merge 1 commit into go-task:main
Conversation

This needs a rebase.

Force-pushed: 7e05e57 to 7fe79c7, then 7fe79c7 to 73fb881.

@andreynering rebased
Pull request overview
This PR adjusts the watch-execution flow so watched tasks are only cancelled/restarted when at least one watched file (per the configured sources) actually changes, preventing “cancel without restart” scenarios like the one described in #2477.
Changes:
- Pre-compute whether the fsnotify event matches watched sources before cancelling the current context.
- Cancel and restart all watched tasks only if at least one match is found.
- Skip any action when the event does not match watched sources.
```go
go func() {
	err = e.RunTask(ctx, c)
	if err == nil {
		e.Logger.Errf(logger.Green, "task: task \"%s\" finished running\n", c.Task)
	} else if !isContextError(err) {
		e.Logger.Errf(logger.Red, "%v\n", err)
	}
```
The goroutine assigns to the outer err variable (err = e.RunTask(...)). This introduces a data race between goroutines and also clobbers the function-scope err (used for watcher creation / error handling). Make err a new local inside the goroutine (e.g., err := ...) so each task run has its own error value.
```go
// Cancel _all_ watched tasks, get new context, then restart the tasks.
if watchCount > 0 {
	cancel()
	ctx, cancel = context.WithCancel(context.Background())
```
e.Compiler.ResetCache() was previously called on watch events before restarting tasks, which clears the dynamic variable cache. With the current change, watched tasks restart without resetting this cache, so dynamic vars may remain stale across restarts. Consider resetting the compiler cache inside the watchCount > 0 block before starting the new runs.
Suggested change:

```go
ctx, cancel = context.WithCancel(context.Background())
e.Compiler.ResetCache()
```
```go
t, err := e.GetTask(c)
if err != nil {
	e.Logger.Errf(logger.Red, "%v\n", err)
	continue
}
baseDir := filepathext.SmartJoin(e.Dir, t.Dir)
files, err := e.collectSources(calls)
if err != nil {
	e.Logger.Errf(logger.Red, "%v\n", err)
	continue
}
```
e.collectSources(calls) is recomputed for every task in the loop, but it returns the same combined source list each time for this event. This adds unnecessary traversal/globbing work on every fsnotify event. Compute files once before iterating over calls (or short-circuit after the first match) to reduce per-event overhead.
```diff
 for _, c := range calls {
-	go func() {
-		if ShouldIgnore(event.Name) {
-			e.Logger.VerboseErrf(logger.Magenta, "task: event skipped for being an ignored dir: %s\n", event.Name)
-			return
-		}
-		t, err := e.GetTask(c)
-		if err != nil {
-			e.Logger.Errf(logger.Red, "%v\n", err)
-			return
-		}
-		baseDir := filepathext.SmartJoin(e.Dir, t.Dir)
-		files, err := e.collectSources(calls)
-		if err != nil {
-			e.Logger.Errf(logger.Red, "%v\n", err)
-			return
-		}
-		if !event.Has(fsnotify.Remove) && !slices.Contains(files, event.Name) {
-			relPath, _ := filepath.Rel(baseDir, event.Name)
-			e.Logger.VerboseErrf(logger.Magenta, "task: skipped for file not in sources: %s\n", relPath)
-			return
-		}
-		err = e.RunTask(ctx, c)
-		if err == nil {
-			e.Logger.Errf(logger.Green, "task: task \"%s\" finished running\n", c.Task)
-		} else if !isContextError(err) {
-			e.Logger.Errf(logger.Red, "%v\n", err)
-		}
-	}()
+	if ShouldIgnore(event.Name) {
+		e.Logger.VerboseErrf(logger.Magenta, "task: event skipped for being an ignored dir: %s\n", event.Name)
+		continue
+	}
```
ShouldIgnore(event.Name) does not depend on the task call, but it's checked (and logged) once per calls entry, which can spam duplicate "ignored dir" messages on a single event. Consider moving the ignore check outside the for _, c := range calls loop and returning early when ignored.
```diff
+// Count the tasks that would be restarted by the watch (_before_ cancelling them).
+watchCount := 0
 for _, c := range calls {
-	go func() {
-		if ShouldIgnore(event.Name) {
-			e.Logger.VerboseErrf(logger.Magenta, "task: event skipped for being an ignored dir: %s\n", event.Name)
-			return
-		}
-		t, err := e.GetTask(c)
-		if err != nil {
-			e.Logger.Errf(logger.Red, "%v\n", err)
-			return
-		}
-		baseDir := filepathext.SmartJoin(e.Dir, t.Dir)
-		files, err := e.collectSources(calls)
-		if err != nil {
-			e.Logger.Errf(logger.Red, "%v\n", err)
-			return
-		}
-		if !event.Has(fsnotify.Remove) && !slices.Contains(files, event.Name) {
-			relPath, _ := filepath.Rel(baseDir, event.Name)
-			e.Logger.VerboseErrf(logger.Magenta, "task: skipped for file not in sources: %s\n", relPath)
-			return
-		}
-		err = e.RunTask(ctx, c)
-		if err == nil {
-			e.Logger.Errf(logger.Green, "task: task \"%s\" finished running\n", c.Task)
-		} else if !isContextError(err) {
-			e.Logger.Errf(logger.Red, "%v\n", err)
-		}
-	}()
+	if ShouldIgnore(event.Name) {
+		e.Logger.VerboseErrf(logger.Magenta, "task: event skipped for being an ignored dir: %s\n", event.Name)
+		continue
+	}
+	t, err := e.GetTask(c)
+	if err != nil {
+		e.Logger.Errf(logger.Red, "%v\n", err)
+		continue
+	}
+	baseDir := filepathext.SmartJoin(e.Dir, t.Dir)
+	files, err := e.collectSources(calls)
+	if err != nil {
+		e.Logger.Errf(logger.Red, "%v\n", err)
+		continue
+	}
+	if !event.Has(fsnotify.Remove) && !slices.Contains(files, event.Name) {
+		relPath, _ := filepath.Rel(baseDir, event.Name)
+		e.Logger.VerboseErrf(logger.Magenta, "task: skipped for file not in sources: %s\n", relPath)
+		continue
+	}
+	watchCount++
 }
+// Cancel _all_ watched tasks, get new context, then restart the tasks.
+if watchCount > 0 {
+	cancel()
+	ctx, cancel = context.WithCancel(context.Background())
+	for _, c := range calls {
+		c := c
+		go func() {
+			err = e.RunTask(ctx, c)
+			if err == nil {
+				e.Logger.Errf(logger.Green, "task: task \"%s\" finished running\n", c.Task)
+			} else if !isContextError(err) {
+				e.Logger.Errf(logger.Red, "%v\n", err)
+			}
+		}()
+	}
+}
```
The new behavior (only cancel/restart when the event matches at least one watched source) is not covered by the existing watch tests. Consider adding a watch_test.go case that triggers a fsnotify event on a file outside sources and asserts that no additional task run happens (and the current run isn't canceled).
When a task (or tasks) is watched and a watch event occurs, the current flow is: the current context is cancelled (which signals the task and its sub-tasks), then the watch conditions are evaluated, and finally, if the watch conditions are satisfied, the task(s) are restarted.
This means that if a watch condition is not satisfied (i.e. "skipped for file not in sources"), the task is cancelled but not restarted.
This PR alters the logic as follows: first the watch conditions are evaluated; if at least one task satisfies a condition, then all tasks are cancelled and then restarted. When no task satisfies a watch condition, no action is taken. Given the way this code behaves, I believe it's an improvement within the scope of the known limitations of the method used (i.e. the Context object).
closes #2477