Worker specific hashrate tracking extension (SOLUTION N. 3) #107

Conversation

Collaborator

@GitGab19 commented Nov 7, 2024

As described in this doc, this extension is the solution to a specific request we got from several early adopters.
It allows worker-level granularity on shares submitted through extended channels, without losing their aggregation advantages.

I described two possible approaches for changes that touch the SubmitSharesExtended message. I would really appreciate any feedback on this, so that we can pick the best option.


xraid commented Nov 7, 2024

If a local mining-device monitor (hashrate, temperature, etc.) is used, a pass-through to the pool over a separate channel could be provided on request.

The pool connection for work should be a clear, clean channel. Statistics on another one?

Q: is the request to monitor mining devices locally to the pool, or does the pool want statistics on remote hashrate connections? Why?

| Field Name | Data Type     | Description                |
|------------|---------------|----------------------------|
| channel_id | U32           | Channel identifier         |
| request_id | U32           | Unique request ID          |
| user_id    | SEQ0_64K[U32] | Unique IDs for each worker |
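As a concrete, purely illustrative sketch of the table above, here is how the message could be laid out on the wire, assuming SV2-style primitives: U32 as a little-endian 32-bit integer and SEQ0_64K as a sequence prefixed by a 16-bit element count. The struct and function names are ours, not part of the spec.

```rust
// Hypothetical encoding sketch for the message fields in the table above.
// Assumption: U32 = little-endian u32, SEQ0_64K = u16 count prefix + elements.

fn encode_u32(buf: &mut Vec<u8>, v: u32) {
    buf.extend_from_slice(&v.to_le_bytes());
}

struct SetUserIdentity {
    channel_id: u32,
    request_id: u32,
    user_ids: Vec<u32>, // SEQ0_64K[U32]: at most 65535 entries
}

impl SetUserIdentity {
    fn encode(&self) -> Result<Vec<u8>, &'static str> {
        if self.user_ids.len() > u16::MAX as usize {
            return Err("SEQ0_64K overflow: more than 65535 worker IDs");
        }
        let mut buf = Vec::new();
        encode_u32(&mut buf, self.channel_id);
        encode_u32(&mut buf, self.request_id);
        // u16 count prefix for the SEQ0_64K sequence
        buf.extend_from_slice(&(self.user_ids.len() as u16).to_le_bytes());
        for id in &self.user_ids {
            encode_u32(&mut buf, *id);
        }
        Ok(buf)
    }
}
```

Note that the u16 count prefix of SEQ0_64K is exactly where the 65536-worker ceiling debated in this thread comes from.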


This should be a u16, not u32.

Collaborator Author


This would mean that this extension would support a maximum of 65536 machines. Do you think that's suitable for very large operations too?


Is this across an entire pool, or just a farm-level consideration? Either way, 65536 is likely insufficient for large farms.


If 64k isn't enough then we need to switch to the version that just includes the full name. Making the pool store more than 64k entries in a hashmap for each connection isn't really a great option IMO.
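For context, a rough back-of-envelope (with assumed, illustrative numbers, not figures from the spec) of what a per-connection u32-to-worker-name map costs the pool at the 64k ceiling:

```rust
// Back-of-envelope cost of a HashMap<u32, String> the pool would keep per
// connection. All per-entry overheads here are assumptions for estimation:
// 4-byte key, 24-byte String header on 64-bit targets, the name bytes
// themselves, and ~16 bytes of hash-table slot overhead.

fn map_cost_bytes(entries: usize, avg_name_len: usize) -> usize {
    entries * (4 + 24 + avg_name_len + 16)
}
```

With 20-character worker names, a full 65536-entry map comes out around 4 MiB per connection; whether multiples of that are acceptable is exactly the trade-off being debated here.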

Copy link
Collaborator Author


Making the pool store more than 64k entries in a hashmap for each connection isn't really a great option IMO.

Why do you think that? Isn't it more or less what pools are already doing, since they receive shares from every worker?

Collaborator Author


@Fi3 would you prefer to set a fixed, predetermined byte limit, or a more flexible approach such as the one I was proposing? PS: in the meantime I'm going to update option n.1 in the doc to make it an extension

Collaborator Author


I modified option n.1 in the doc, and opened a new PR to implement it as an extension: #108


I don't see why adding new fields at the end of existing messages should break existing clients/servers. Those fields should simply be ignored by existing code. If not, we really need to define a way to add fields to existing messages without wrecking compatibility because that's kinda important :)

Collaborator Author


I initially defined solution n.1 without extensions in the doc, but the feedback I got from @Fi3 made me change it to an extension.

Contributor


I don't see why adding new fields at the end of existing messages should break existing clients/servers. Those fields should simply be ignored by existing code. If not, we really need to define a way to add fields to existing messages without wrecking compatibility because that's kinda important :)

Adding a field to a message will break the parser; I don't see how we can have optional fields without changing the binary format. Maybe we could add to each message a field called optional_data that is a B0_16M, but that seems pretty crazy to me. I think that having extensions + SV2 versions will be enough.
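To make the parser point concrete, here is a minimal sketch (not the actual SRI codec; names and layout are invented for illustration) of a strict fixed-layout decoder. Because it knows the old layout exactly, bytes appended by a newer peer make the whole message fail to parse rather than being silently ignored:

```rust
// Illustrative strict decoder for a hypothetical old message layout:
// channel_id (U32) + sequence_number (U32) = exactly 8 bytes, nothing else.
// A payload with fields appended at the end no longer matches and is rejected.

fn parse_old_layout(payload: &[u8]) -> Result<(u32, u32), &'static str> {
    if payload.len() != 8 {
        return Err("payload length does not match the known layout");
    }
    let channel_id = u32::from_le_bytes(payload[0..4].try_into().unwrap());
    let sequence_number = u32::from_le_bytes(payload[4..8].try_into().unwrap());
    Ok((channel_id, sequence_number))
}
```

A decoder could instead ignore trailing bytes, but that has to be specified as a protocol-wide rule up front; existing strict parsers behave like the sketch above.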

Collaborator Author

GitGab19 commented Nov 7, 2024

If a local mining-device monitor (hashrate, temperature, etc.) is used, a pass-through to the pool over a separate channel could be provided on request.

The pool connection for work should be a clear, clean channel. Statistics on another one?

Q: is the request to monitor mining devices locally to the pool, or does the pool want statistics on remote hashrate connections? Why?

The point you're making is very similar to point n.4 of the linked document; we are definitely going to get there in the future.
To answer your question: the request is to be able to monitor individual hashrate at the pool level.


xraid commented Nov 7, 2024

Yes, but is the pool level local or remote?

Why would a pool want to spend bandwidth on individual mining devices at a multi-PH remote farm?

Maybe you are shooting yourself in the foot by trying to bend over backwards for pointless metrics?

I asked: do your requesters need stats for remote workers at the pool? Why?

Collaborator Author

GitGab19 commented Nov 7, 2024

The pool is remote, of course. Nowadays pools are spending the same bandwidth (actually more) as what is proposed in this extension, because of the worker_name field inside every SV1 mining.submit message. If you want to have a general discussion about the reasons behind this request, I would suggest doing it on the linked doc or Discord. Here we should keep the focus on technical review of the PR 🙏


xraid commented Nov 7, 2024

So, for example, a farm proxy aggregating to a pool, say Braiins Farm Proxy: I do not think they try to measure a farm's individual workers (mining devices) by default. Why, in that case, have a farm proxy at all? Its point is to aggregate.

Hashers locally have a need for statistics. The pool does not!

@average-gary

So, for example, a farm proxy aggregating to a pool, say Braiins Farm Proxy: I do not think they try to measure a farm's individual workers (mining devices) by default. Why, in that case, have a farm proxy at all? Its point is to aggregate.

Hashers locally have a need for statistics. The pool does not!

Hashers often rely on the pool for statistics, via watcher links and whatnot.


xraid commented Nov 8, 2024

In a pool dashboard, a proxy-aggregated connection would look like one worker's hashrate.

In a hasher's local proxy dashboard, one should be able to see the aggregated hashrate going upstream.

In a hasher's local network, the hasher has stats from the workers' dashboards before they are aggregated to an upstream proxy.

A pool should only be concerned with a connected worker's share submissions at difficulty (aggregated or not), so it can fairly compensate work.

A pool with remote workers should not have to have any concern about how single workers behind a proxy are performing at a given hashrate.

Contributor

pavlenex commented Nov 8, 2024

We've talked to several pools; some expressed the need for this and others didn't, which is why it's an extension. Let's try to keep the discussion focused on the topic, not on whether pools need this or not, because we've done our fair share of research and it's a need of at least 2 early adopters implementing SV2 in their pools. This isn't just a dashboard issue; certain pools need it for regulatory reasons.

Contributor

Fi3 commented Nov 8, 2024

It would be better to put extensions either in their own dedicated repo or in a separate directory (for extensions) within this repo.


### 1.1 Activate Extension

A client wishing to use this extension sends the `Activate` message. If supported, the pool responds with `Activate.Success`. Otherwise, the client stops further attempts.
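The handshake above can be sketched as a tiny client-side state machine. The message names `Activate` / `Activate.Success` come from the section; the state and function names are ours, purely illustrative:

```rust
// Hedged sketch of the client side of extension activation: send `Activate`
// once, enable the extension on `Activate.Success`, and stop further attempts
// on any other outcome for the rest of the connection.

#[derive(Debug, Clone, Copy, PartialEq)]
enum ExtensionState {
    NotRequested,
    Requested,   // `Activate` sent, waiting for the reply
    Active,      // pool answered `Activate.Success`
    Unsupported, // pool rejected or does not know the extension
}

fn send_activate(state: ExtensionState) -> ExtensionState {
    match state {
        ExtensionState::NotRequested => ExtensionState::Requested,
        other => other, // never re-send once a decision was made
    }
}

fn on_reply(state: ExtensionState, success: bool) -> ExtensionState {
    match (state, success) {
        (ExtensionState::Requested, true) => ExtensionState::Active,
        (ExtensionState::Requested, false) => ExtensionState::Unsupported,
        (other, _) => other, // ignore replies we did not ask for
    }
}
```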
Contributor

@Fi3 Nov 8, 2024


This should be global, not extension-specific. Every extension has to be activated with the same procedure, whether it's this one or another one. @jakubtrnka

| Field Name | Data Type     | Description           |
|------------|---------------|-----------------------|
| request_id | U32           | Original request ID   |
| user_id    | SEQ0_64K[U32] | Associated worker IDs |

### `SetUserIdentity.Error` (Server -> Client)
Contributor

@Fi3 Nov 8, 2024


What does the client do now? I guess the only possible thing is to close the connection, since there is no procedure to recover from this error. I believe that having a specific error message for an irrecoverable situation is only a cause of confusion. IMO it's much better to have a general error with a string that we use only for logging. In that case the pool sends the error and closes the connection, and the downstream can then log what caused the error.

If there is a way to recover, we should have a set of messages to actually do it.

Collaborator Author


Here the pool should close the connection, because this error means something is not working properly or the client is trying to spam the server. Does that make sense?

@GitGab19
Collaborator Author

It would be better to put extensions either in their own dedicated repo or in a separate directory (for extensions) within this repo.

I already did it. The file is inside a new extensions/ dir

@GitGab19 changed the title from "Worker specific hashrate tracking extension" to "Worker specific hashrate tracking extension (SOLUTION N. 3)" Nov 13, 2024