fix(CallBatchLayer): don't batch if single request #2397
base: main
Conversation
Hello @DaniPopes, I've pushed those changes; please take a look when you're free.
Made those changes and pushed them; please review, @DaniPopes.
let result: TransportResult<IMulticall3::Result> = match msg.kind {
    CallBatchMsgKind::Call(tx) => self
        .inner
        .call(tx)
        .await
        .map(|res| IMulticall3::Result { success: true, returnData: res }),
    CallBatchMsgKind::BlockNumber => {
        self.inner.get_block_number().into_future().await.map(|res| {
            IMulticall3::Result { success: true, returnData: res.abi_encode().into() }
        })
    }
    CallBatchMsgKind::ChainId => {
        self.inner.get_chain_id().into_future().await.map(|res| IMulticall3::Result {
            success: true,
            returnData: res.abi_encode().into(),
        })
    }
    CallBatchMsgKind::Balance(addr) => {
        self.inner.get_balance(addr).into_future().await.map(|res| {
            IMulticall3::Result { success: true, returnData: res.abi_encode().into() }
        })
    }
The error handling in the single-call path differs from the batched path. For consistency, errors from direct calls should be wrapped in an IMulticall3::Result with success=false rather than being passed through directly. This would ensure that callers experience the same error behavior regardless of whether their request was batched or executed individually.
Consider modifying each match arm to handle errors like:
CallBatchMsgKind::Call(tx) => self
    .inner
    .call(tx)
    .await
    .map(|res| IMulticall3::Result { success: true, returnData: res })
    .or_else(|err| Ok(IMulticall3::Result {
        success: false,
        returnData: err.to_string().into_bytes().into(),
    })),
This approach would maintain consistent behavior between batched and non-batched execution paths.
Suggested change:

let result: TransportResult<IMulticall3::Result> = match msg.kind {
    CallBatchMsgKind::Call(tx) => self
        .inner
        .call(tx)
        .await
        .map(|res| IMulticall3::Result { success: true, returnData: res })
        .or_else(|err| Ok(IMulticall3::Result {
            success: false,
            returnData: err.to_string().into_bytes().into(),
        })),
    CallBatchMsgKind::BlockNumber => {
        self.inner.get_block_number().into_future().await.map(|res| {
            IMulticall3::Result { success: true, returnData: res.abi_encode().into() }
        }).or_else(|err| {
            Ok(IMulticall3::Result {
                success: false,
                returnData: err.to_string().into_bytes().into(),
            })
        })
    }
    CallBatchMsgKind::ChainId => {
        self.inner.get_chain_id().into_future().await.map(|res| IMulticall3::Result {
            success: true,
            returnData: res.abi_encode().into(),
        }).or_else(|err| {
            Ok(IMulticall3::Result {
                success: false,
                returnData: err.to_string().into_bytes().into(),
            })
        })
    }
    CallBatchMsgKind::Balance(addr) => {
        self.inner.get_balance(addr).into_future().await.map(|res| {
            IMulticall3::Result { success: true, returnData: res.abi_encode().into() }
        }).or_else(|err| {
            Ok(IMulticall3::Result {
                success: false,
                returnData: err.to_string().into_bytes().into(),
            })
        })
    }
Spotted by Diamond
Hello @mattsse @DaniPopes,
Raising a draft PR for #2386 — please let me know if the approach looks good.
I’m not converting to Multicall3 immediately upon receiving a request. Instead, I defer conversion until I’ve intercepted and counted the total number of calls. Based on that count, I either perform a normal RPC call (for a single request) or batch them using Multicall3.
Test cases are on hold as of now; if the current approach is correct, I'll take them up next.
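For reference, here is a minimal, self-contained sketch of that dispatch idea: count the pending requests first, then either send one plain call or aggregate everything into a Multicall3-style batch. The PendingCall, CallResult, Backend, and multicall_aggregate names are hypothetical stand-ins for illustration, not the actual alloy CallBatchLayer types or API.

// Hypothetical stand-ins; the real CallBatchLayer works with TransportResult,
// IMulticall3::Result, and the provider's RPC calls.
struct PendingCall {
    to: String,
    data: Vec<u8>,
}

struct CallResult {
    success: bool,
    return_data: Vec<u8>,
}

trait Backend {
    // A single plain eth_call-style request.
    fn single_call(&self, call: &PendingCall) -> Result<Vec<u8>, String>;
    // One aggregated Multicall3-style request covering all pending calls.
    fn multicall_aggregate(&self, calls: &[PendingCall]) -> Result<Vec<CallResult>, String>;
}

// Dispatch the intercepted calls: skip the Multicall3 conversion entirely
// when only one request is pending, otherwise batch them.
fn dispatch<B: Backend>(backend: &B, calls: &[PendingCall]) -> Result<Vec<CallResult>, String> {
    match calls {
        // Nothing queued: nothing to send.
        [] => Ok(Vec::new()),
        // Exactly one request: plain RPC call, wrapped so the caller sees
        // the same result shape as a batched call.
        [only] => {
            let result = match backend.single_call(only) {
                Ok(data) => CallResult { success: true, return_data: data },
                Err(err) => CallResult { success: false, return_data: err.into_bytes() },
            };
            Ok(vec![result])
        }
        // Two or more requests: aggregate them into one batched call.
        many => backend.multicall_aggregate(many),
    }
}

In this sketch the single-request arm also wraps errors as success=false, so callers see the same result shape whether or not the call was batched, in line with the review suggestion above.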