Conversation

@arnetheduck
Member

JSON-RPC is modelled as request-response where each response corresponds to one request, sent in the same order. In addition, batching is possible where several requests can be sent as one batch, receiving one batch of responses in return, but still maintaining the general 1:1 messaging structure.
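As a minimal illustration (not code from this PR, and using made-up method names), here is what the 1:1 batch correspondence looks like in practice, sketched in Python under the order-preserving assumption described above:

```python
import json

# Hypothetical example: a batch of two requests and, assuming the
# order-preserving 1:1 correspondence described above, their responses.
batch_request = json.dumps([
    {"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1},
    {"jsonrpc": "2.0", "method": "eth_chainId", "params": [], "id": 2},
])

# The transport returns one batch of responses; with ordered 1:1
# messaging, response k answers request k.
batch_response = json.loads(
    '[{"jsonrpc": "2.0", "result": "0x10", "id": 1},'
    ' {"jsonrpc": "2.0", "result": "0x1", "id": 2}]'
)

# Matching by position, the ids line up without any lookup table.
pairs = list(zip(json.loads(batch_request), batch_response))
for req, resp in pairs:
    assert req["id"] == resp["id"]
```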

This PR exploits this 1:1 relationship between request and response to simplify the client implementation and improve efficiency (reducing memory usage from 1.2 GB to 800 MB when sending a 128 MB message - still a lot, but quite a bit better) while at the same time cleaning up error handling and moving the JSON-RPC specifics out of the transports so that they're handled identically and shared between transports.

The transports themselves now just provide the simple ability to transfer request-response pairs.
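The resulting shape of the transport layer might look roughly like the following Python sketch (the actual project is written in Nim; the names `Transport`, `exchange`, and `EchoTransport` are invented here for illustration):

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """Hypothetical shape of the simplified transport layer described
    above: it only moves opaque request/response byte blobs, while all
    JSON-RPC encoding/decoding lives in the shared client code."""

    @abstractmethod
    def exchange(self, request: bytes, max_message_size: int) -> bytes:
        """Send one encoded request and return the matching raw response."""

class EchoTransport(Transport):
    """Toy in-memory transport used only to exercise the interface."""

    def exchange(self, request: bytes, max_message_size: int) -> bytes:
        # A size cap like the PR's maxMessageSize parameter would be
        # enforced uniformly at this level, for every transport.
        if len(request) > max_message_size:
            raise ValueError("message too large")
        return request
```

Because the interface deals only in bytes, each concrete transport (socket, websocket, HTTP) stays small and behaves the same from the client's point of view.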

In doing so, protocol adherence in edge cases is increased to more closely follow the semantics suggested in the spec.

  • match messages by order, getting rid of the unnecessary matching table
  • move all JSON-RPC protocol implementation details such as encoding/decoding etc. to `clients`, avoiding them being spread out and repeated in each transport - this also opens the door to using a different encoding than JSON in the future (e.g. CBOR)
  • stream-encode requests and use `seq[byte]` throughout to avoid costly `string` conversions / copies
  • clean up error raising - in particular, differentiate between transport and protocol errors more clearly and strive to raise similar exceptions in similar situations for each transport
  • add a `maxMessageSize` parameter to each transport and make it work the same everywhere
  • remove the socket client reconnect loop to match websockets - possibly it should be re-added to both instead in a future PR
  • add `raises` to `async`, where relevant
  • order request/response JSON fields the way they're ordered in the spec
    • this makes it more efficient to parse messages that follow the same order
  • make the parser more spec-compliant, moving some of the validation to an earlier stage in the parsing pipeline
  • limit the length of string-based `id` fields to avoid having the log spammed
  • stream-write requests, avoiding an extra copy of the parameters
  • use raw async procs where applicable to avoid copies and async overhead
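The first bullet - matching by order instead of by a lookup table - can be sketched as a FIFO queue of pending requests. This is a hypothetical Python illustration of the idea, not the PR's actual Nim code:

```python
from collections import deque

class OrderedRpcClient:
    """Hypothetical sketch: because responses arrive in request order,
    the oldest pending request is always the one a response answers,
    so no id -> request matching table is needed."""

    def __init__(self) -> None:
        self.pending: deque = deque()  # requests awaiting a response, FIFO
        self.next_id = 0

    def send(self, method: str) -> dict:
        self.next_id += 1
        req = {"jsonrpc": "2.0", "method": method, "id": self.next_id}
        self.pending.append(req)
        return req  # a real client would write this to the transport here

    def on_response(self, resp: dict) -> dict:
        # Order-based matching: pop the oldest pending request. The id
        # check then only serves to detect protocol violations.
        req = self.pending.popleft()
        if resp.get("id") != req["id"]:
            raise RuntimeError("protocol error: response id mismatch")
        return req
```

A side effect of this design is that an out-of-order or stray response is immediately detectable as a protocol error rather than silently ignored.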

@arnetheduck arnetheduck requested a review from jangko November 15, 2025 11:59