
Project cleanup when buffers are killed #345

Merged
merged 4 commits into ananthakumaran:master from project-cleanup on Dec 17, 2019

Conversation

elibarzilay
Collaborator

There is a general tendency to accumulate many servers since they're
never removed (actually, more than just the servers -- the whole
project's resources are kept, but the server is the main problem wrt
resources). This is also mentioned in #256.

So I implemented a function that scans all buffers and cleans up all
projects that have no live buffers. It looks like a good idea to do
this, since you can then just kill old buffers to reduce resource
usage. (And killing old buffers is more obvious than explicitly
opening the server list to kill old ones, especially since there's no
way to tell whether a server is used by some buffer or not.)

(I added this function to `kill-buffer-hook`, but if that's too
extreme, then a milder option is to not do that and just let people
add it themselves.)
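
Roughly the shape I have in mind is below -- a minimal sketch only, not the exact code in this PR; it assumes tide keeps its servers in a `tide-servers` hash table keyed by project name and that `tide-project-name` returns that key inside a tide buffer:

```elisp
;; Minimal sketch of the idea, not the exact code in this PR.  It
;; assumes tide keeps its servers in a hash table `tide-servers'
;; keyed by project name, and that `tide-project-name' returns that
;; key inside a tide buffer.
(defun my/tide-cleanup-dead-projects ()
  "Kill the server of every project that has no live buffer left."
  (interactive)
  (let ((live-projects '())
        (dead-projects '()))
    ;; Collect the project names still referenced by some buffer.
    (dolist (buf (buffer-list))
      (with-current-buffer buf
        (when (bound-and-true-p tide-mode)
          (push (tide-project-name) live-projects))))
    ;; Find servers whose project is no longer referenced.
    (maphash (lambda (project _process)
               (unless (member project live-projects)
                 (push project dead-projects)))
             tide-servers)
    ;; Kill them and drop their entries.
    (dolist (project dead-projects)
      (let ((process (gethash project tide-servers)))
        (when (process-live-p process)
          (delete-process process))
        (remhash project tide-servers)))))

;; The "extreme" variant: run the scan whenever any buffer is killed.
(add-hook 'kill-buffer-hook #'my/tide-cleanup-dead-projects)
```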

@lddubeau
Collaborator

> (I added this function to `kill-buffer-hook`, but if that's too
> extreme, then a milder option is to not do that and just let people
> add it themselves.)

I'm one of those who might find it too extreme, but that depends on how efficient we can make the code. I have around 2500 buffers, so I'd want anything tacked onto `kill-buffer-hook` to be speedy.

It seems to me that the current algorithm does more work than strictly necessary: it looks at every single server to find those that are unused and kills them. That is useful for cleaning up after messes, or situations in which servers have become unpredictably unused, so the current tide-cleanup-dead-projects* function could remain as a general utility.

However, when a buffer is killed then there is only one server which may have become unused: the server that was used by the buffer which is being killed. So there is at most one server to reap, not multiple ones, and the function called by the hook could take this into account.
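
Concretely, something like the following is what I mean -- just a sketch, with the same assumptions about `tide-servers` and `tide-project-name` as above:

```elisp
;; Sketch of a hook that only considers the project of the buffer
;; being killed.  `tide-project-name' and a `tide-servers' hash table
;; keyed by project name are assumptions here, not tide's exact API.
(defun my/tide-maybe-cleanup-killed-buffer-project ()
  "Reap the killed buffer's project server if no other buffer uses it."
  (when (bound-and-true-p tide-mode)
    (let ((project (tide-project-name))
          (dying (current-buffer)))
      ;; Does any *other* buffer still use this project?
      (unless (seq-some (lambda (buf)
                          (and (not (eq buf dying))
                               (with-current-buffer buf
                                 (and (bound-and-true-p tide-mode)
                                      (equal (tide-project-name) project)))))
                        (buffer-list))
        ;; No: kill this one server, and only this one.
        (let ((process (gethash project tide-servers)))
          (when (process-live-p process)
            (delete-process process))
          (remhash project tide-servers))))))

(add-hook 'kill-buffer-hook #'my/tide-maybe-cleanup-killed-buffer-project)
```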

@elibarzilay
Collaborator Author

Well, the main question re the "too extreme" thing is whether this kind of functionality is something that you do want for some reason, regardless of performance.

The inefficiency was mainly because I was trying to get something that doesn't require any other changes and could also be used interactively when needed. If this kind of cleanup is wanted, then it's easy to make it efficient: for each server (or project) maintain a list of buffers, add buffers to this list when they're created, remove them when they're killed -- and on removal, kill the project if that was its last buffer.
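
Roughly, that bookkeeping would look like this (only a sketch; the `my/tide-project-buffers` table is hypothetical, and `tide-servers` is again assumed to map project names to server processes):

```elisp
;; Sketch of the bookkeeping variant: each project keeps the list of
;; buffers using it, so a kill only touches one entry.  The
;; `my/tide-project-buffers' table is hypothetical.
(defvar my/tide-project-buffers (make-hash-table :test 'equal)
  "Map of project name to the buffers currently using that project.")

(defun my/tide-register-buffer ()
  "Record the current buffer under its project; call when tide starts in it."
  (let ((project (tide-project-name)))
    (puthash project
             (cons (current-buffer) (gethash project my/tide-project-buffers))
             my/tide-project-buffers)))

(defun my/tide-unregister-buffer ()
  "Forget the current buffer; reap the server if it was the project's last."
  (when (bound-and-true-p tide-mode)
    (let* ((project (tide-project-name))
           (rest (delq (current-buffer)
                       (gethash project my/tide-project-buffers))))
      (if rest
          (puthash project rest my/tide-project-buffers)
        (remhash project my/tide-project-buffers)
        (let ((process (gethash project tide-servers)))
          (when (process-live-p process)
            (delete-process process))
          (remhash project tide-servers))))))
```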

But there is another option in a different direction. How much time does it take you to run that function with your 2.5k buffers? -- If it's short enough to not be noticeable when running once, then I could move the work to an idle timer. This could also improve things if you do something like open just one file and work on it, kill it and immediately open another file from the same project. If it's done on an idle timer, then in such cases the server wouldn't need to be killed and restarted.
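
And the idle-timer variant is roughly the following sketch, reusing the scan function from my first comment; the 30-second delay here is arbitrary:

```elisp
;; Sketch of the idle-timer variant: a buffer kill only schedules one
;; cleanup for the next idle period, reusing the scan sketched above.
(defvar my/tide-cleanup-timer nil
  "Pending idle timer for the next dead-project cleanup, if any.")

(defun my/tide-schedule-cleanup ()
  "Arrange a single dead-project cleanup when Emacs next goes idle."
  (unless my/tide-cleanup-timer
    (setq my/tide-cleanup-timer
          (run-with-idle-timer
           30 nil                        ; the delay is arbitrary here
           (lambda ()
             (setq my/tide-cleanup-timer nil)
             (my/tide-cleanup-dead-projects))))))

(add-hook 'kill-buffer-hook #'my/tide-schedule-cleanup)
```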

@josteink
Collaborator

I agree with @lddubeau here:

  • that's a nice general function!
  • but the function we attach to the kill-buffer-hook only needs to check the state of one particular server, so it really shouldn't do anything more.

If you could implement that, I don't think there would be any need to debate the "extremeness" of the solution, and everyone would be happy 😄

@elibarzilay
Collaborator Author

I'm happy that it's wanted, so I'll make it better.

However, I've kind of convinced myself that the idle-timer way is better. Two ways of working benefit from it. The first is to open a directory, visit a file and edit it, kill it and visit a different one, repeat; this is something I often do, and I'd be annoyed by constantly paying the price of restarting the server, so I'd probably keep a buffer open just to keep it alive. The other is a long-running Emacs, where you might not want servers to go away immediately, just during the night when you're not working -- an idle timer makes that possible by using a long delay.

But combining this with the minimal-work approach will be a bit more complicated (you need to find and cancel timers when you enter a buffer), and it may also conflict with the nature of idle timers (they're always global, so mixing them with a project-specific action can be confusing).

So @lddubeau: can you see how much time it takes to scan your huge list of buffers?

@elibarzilay
Collaborator Author

Revised this to use the idle timer as in the last comment, with a custom option that can disable it completely.

elibarzilay force-pushed the project-cleanup branch 2 times, most recently from 22d3a60 to 32c2d59, on November 29, 2019 16:16
@elibarzilay
Collaborator Author

@josteink, @lddubeau: ping about this one. I changed the implementation to an idle timer whose delay can be set to whatever you like, so having many buffers shouldn't be a problem, and it can also be disabled completely (which I could make the default, so it would be a no-op for people who don't set it explicitly).

@josteink
Collaborator

Sorry about the delay in getting to this. Honestly, this isn't an area where I have much in-depth knowledge of how things currently work in tide, and even less of why they work the way they do.

To avoid introducing errors/regressions, if it's OK, I'd rather have someone else review and merge this one.

Instead of running it immediately when a buffer is killed, do that in an
idle timer -- but still, only once, after a buffer is killed.  Adjusting
the timer can be useful for people who only want to clean things up
every night.  Even with a short delay it's still useful in case you open
one file at a time in a directory, close it and open another, etc -- in
this case it avoids repeating the work of killing a server and starting
a new one multiple times.

Adds a `tide-project-cleanup-delay` option to control the delay,
which can be set to `nil` to disable all of this, and makes
`tide-cleanup-dead-projects` interactive so it's easy to use.
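
A rough shape of that option and the scheduling it controls could be the following sketch -- the default value, type spec, and timer details are assumptions, not the merged code:

```elisp
;; Sketch of the new option and how it gates the idle cleanup.  The
;; default of 60 seconds, the type spec, and the guard variable are
;; assumptions for illustration, not tide.el's actual definitions.
(defcustom tide-project-cleanup-delay 60
  "Idle seconds to wait before cleaning up projects with no live buffers.
Set to nil to disable automatic cleanup entirely."
  :type '(choice (const :tag "Never" nil) number)
  :group 'tide)

(defvar my/tide-pending-cleanup-timer nil)

(defun my/tide-after-kill-buffer ()
  "Schedule `tide-cleanup-dead-projects' once after a buffer is killed."
  (when (and tide-project-cleanup-delay
             (not my/tide-pending-cleanup-timer))
    (setq my/tide-pending-cleanup-timer
          (run-with-idle-timer
           tide-project-cleanup-delay nil
           (lambda ()
             (setq my/tide-pending-cleanup-timer nil)
             (tide-cleanup-dead-projects))))))
```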
@elibarzilay
Collaborator Author

@josteink, re force-pushing, I did the last fix (due to @ananthakumaran's review) and did not squash it; but the thing that makes me feel uneasy about it is that if you were to click merge now, then the commit would stick there -- and I personally like to keep commits clean when they're merged...

(And unfortunately I don't see a good way out of this: I can ask you to let me rebase & squash things before merging but then it's something extra for you to remember; or if you let GH do the squashing then the result will be a bad commit message...)

@josteink
Collaborator

josteink commented Dec 2, 2019

> then the commit would stick there -- and I personally like to keep commits clean when they're merged...

No worries. GitHub supports squash-and-merge as a standard way of merging PRs that have had a lot of back and forth:

[screenshot of GitHub's merge-button options, Dec 2, 2019, 19:31 +0100]

No need for you to do anything.

@elibarzilay
Collaborator Author

The problem is how the commit message gets created. I'm not sure if it was GH in the past, or maybe VSTS, but it used to just concatenate all of the commit messages. I see that GH is a bit better about this now, since it lets you edit the squashed commit message -- but if you just click the button and accept the default text, you end up with the same bogus text.

(And to clarify what's bogus about such concatenated text: in this PR I now have two commits that I intended to keep, plus one "oops, fixed blah" commit -- if they get squashed with the default concatenated commit message, that "oops" text ends up in the history even though it has no value for anyone looking at it later. Note that I'm not arguing against squashing (or rebasing) -- without it, the "oops" commit would be there as-is and therefore relevant -- but it's relevant only to my own process in making the PR, and the fact that I made 200 commits while implementing something is most likely uninteresting to anyone other than me, and more importantly, irrelevant for things like understanding the current code, bisecting, or whatever...)

Move the per-project cleanup into a new `tide-cleanup-project-data`,
which is called by `tide-cleanup-dead-projects`.
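
Roughly, the split has this shape (a sketch only; the function bodies and the `my/tide-projects-without-buffers` helper are illustrative, not the merged code):

```elisp
;; Sketch of the split: per-project teardown in one function, the
;; decision of which projects are dead in the other.  Bodies and the
;; `my/tide-projects-without-buffers' helper are illustrative only.
(defun tide-cleanup-project-data (project)
  "Release the server and any other data kept for PROJECT."
  (let ((process (gethash project tide-servers)))
    (when (process-live-p process)
      (delete-process process))
    (remhash project tide-servers)))

(defun tide-cleanup-dead-projects ()
  "Clean up every project that no longer has a live buffer."
  (interactive)
  (dolist (project (my/tide-projects-without-buffers)) ; hypothetical helper
    (tide-cleanup-project-data project)))
```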
ananthakumaran merged commit 41d475b into ananthakumaran:master on Dec 17, 2019
@elibarzilay
Collaborator Author

@ananthakumaran, Thanks!

(@josteink, this is what I talked about re merges...)

elibarzilay deleted the project-cleanup branch on December 17, 2019 11:13