[don't merge] #31802 with additional patches #90


Open: wants to merge 17 commits into base 2025/02/ipc-yea from 2025/06/ipc-yea-plus

Conversation

@Sjors (Owner) commented Jun 24, 2025

This is a PR to run CI against bitcoin#31802 with additional patches.

Currently:

Update note to self:

git fetch ryanofsky
git reset --hard 2025/02/ipc-yea
git merge ryanofsky/pr/ipc-stop
# If there's any libmultiprocess commits to include:
# https://github.com/bitcoin-core/libmultiprocess/pull/186
# HASH=... 
# git fetch libmultiprocess $HASH
# git subtree merge --prefix=src/ipc/libmultiprocess --squash $HASH
# N=<number of commits to cherry-pick, minus 1>
# git cherry-pick sjors/2025/06/ipc-yea-plus~N^..sjors/2025/06/ipc-yea-plus

@Sjors (Owner, Author) commented Jun 24, 2025

Other than the linter, some failures may be spurious because my CI machine needs a reboot. I'll re-run them.

@Sjors (Owner, Author) commented Jun 24, 2025

The remaining test failures do seem real, but perhaps they're a result of not fully including bitcoin#32345?

@Sjors force-pushed the 2025/06/ipc-yea-plus branch from 35f9fa4 to 8acbe58 on June 25, 2025 08:23
@Sjors (Owner, Author) commented Jun 25, 2025

I'm now using the full bitcoin#32345 and updated bitcoin-core/libmultiprocess#184 with its linter fix.

I used git merge for the first PR branch, which is why there are commits from master here.

@Sjors (Owner, Author) commented Jun 25, 2025

@ryanofsky the previous releases build fails with:

[10:28:02.512] [ 35%] Building CXX object src/ipc/CMakeFiles/bitcoin_ipc.dir/interfaces.cpp.o
[10:28:02.513] cd /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc && /usr/bin/ccache /usr/bin/g++-11 -DABORT_ON_FAILED_ASSUME -DDEBUG -DDEBUG_LOCKCONTENTION -DDEBUG_LOCKORDER -DKJ_USE_FIBERS -DRPC_DOC_CHECK -I/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src -I/ci_container_base/src -I/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu -I/ci_container_base/src/ipc/libmultiprocess/include -I/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/include -I/ci_container_base/src/univalue/include -isystem /ci_container_base/depends/x86_64-pc-linux-gnu/include -funsigned-char -g2 -O2 -fPIC -fvisibility=hidden -pthread -fno-extended-identifiers -fdebug-prefix-map=/ci_container_base/src=. -fmacro-prefix-map=/ci_container_base/src=. -fstack-reuse=none -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 -Wstack-protector -fstack-protector-all -fcf-protection=full -fstack-clash-protection -Werror -Wall -Wextra -Wformat -Wformat-security -Wvla -Wredundant-decls -Wdate-time -Wduplicated-branches -Wduplicated-cond -Wlogical-op -Woverloaded-virtual -Wsuggest-override -Wimplicit-fallthrough -Wunreachable-code -Wundef -Wno-unused-parameter -std=c++20 -MD -MT src/ipc/CMakeFiles/bitcoin_ipc.dir/interfaces.cpp.o -MF CMakeFiles/bitcoin_ipc.dir/interfaces.cpp.o.d -o CMakeFiles/bitcoin_ipc.dir/interfaces.cpp.o -c /ci_container_base/src/ipc/interfaces.cpp -DBOOST_MULTI_INDEX_ENABLE_SAFE_MODE 
[10:28:06.739] /ci_container_base/src/ipc/interfaces.cpp: In function ‘void ipc::{anonymous}::HandleCtrlC(int)’:
[10:28:06.739] /ci_container_base/src/ipc/interfaces.cpp:35:16: error: ignoring return value of ‘ssize_t write(int, const void*, size_t)’ declared with attribute ‘warn_unused_result’ [-Werror=unused-result]
[10:28:06.739]    35 |     (void)write(STDOUT_FILENO, g_ignore_ctrl_c.data(), g_ignore_ctrl_c.size());
[10:28:06.739]       |           ~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[10:28:06.739] cc1plus: all warnings being treated as errors

https://cirrus-ci.com/task/5168529512595456

The linter found new things to complain about:

[08:29:25.285] Missing "export LC_ALL=C" (to avoid locale dependence) as first non-comment non-empty line in src/ipc/libmultiprocess/ci/configs/default.sh
[08:29:25.285] Missing "export LC_ALL=C" (to avoid locale dependence) as first non-comment non-empty line in src/ipc/libmultiprocess/ci/configs/llvm.sh
[08:29:25.285] Missing "export LC_ALL=C" (to avoid locale dependence) as first non-comment non-empty line in src/ipc/libmultiprocess/ci/scripts/ci.sh
[08:29:25.285] Missing "export LC_ALL=C" (to avoid locale dependence) as first non-comment non-empty line in src/ipc/libmultiprocess/ci/scripts/run.sh
[08:29:25.290] ^---- ⚠️ Failure generated from lint-shell-locale.py

https://cirrus-ci.com/task/5520373233483776
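The lint-shell-locale.py check wants `export LC_ALL=C` as the first non-comment, non-empty line of every shell script, so that text utilities collate byte-wise regardless of the host locale. A minimal sketch of a conforming script:

```shell
#!/usr/bin/env bash
# First non-comment, non-empty line must export LC_ALL=C per the linter.
export LC_ALL=C

# Why it matters: with LC_ALL=C, sort collates byte-wise, so all
# uppercase letters precede all lowercase ones, on every machine.
printf 'b\na\nB\nA\n' | sort
```

Under a locale such as en_US.UTF-8 the same pipeline could interleave cases, which is exactly the nondeterminism the linter guards against.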

Here's a segfault:

[10:33:38.485] 
./test/ipc_tests.cpp(12): Entering test case "ipc_tests"
[10:33:38.485] 
[10:33:40.585] 139/145 Test #140: wallet_crypto_tests ..................   Passed    4.80 sec
[10:33:41.420] 140/145 Test #115: txrequest_tests ......................   Passed   17.80 sec
[10:33:48.876] 141/145 Test #141: wallet_tests .........................   Passed   11.52 sec
[10:33:52.230] 142/145 Test  #81: random_tests .........................   Passed   40.58 sec
[10:34:31.152] 143/145 Test #131: coinselector_tests ...................   Passed   57.29 sec
[10:35:06.435] 144/145 Test  #10: bench_sanity_check ...................   Passed  134.67 sec
[10:35:24.840] 145/145 Test  #33: coins_tests ..........................   Passed  149.20 sec
[10:35:24.843] 
[10:35:24.843] 99% tests passed, 1 tests failed out of 145
[10:35:24.844] 
[10:35:24.844] Total Test time (real) = 153.12 sec
[10:35:24.844] 
[10:35:24.844] The following tests FAILED:
[10:35:24.845] 	145 - ipc_tests (SEGFAULT)
[10:35:24.845] Errors while running CTest
[10:35:24.885] 
[10:35:24.885] Exit status: 8

https://cirrus-ci.com/task/5731479466016768

@ryanofsky

Thanks for testing and finding all those problems. I guess this is an early lesson that bitcoin-core/libmultiprocess#184 will not be sufficient to catch a lot of problems and is not a substitute for running the full set of bitcoin CI jobs.

@Sjors (Owner, Author) commented Jun 27, 2025

Trying with bitcoin-core/libmultiprocess#186 now (and bitcoin#32345 minus the subtree update).


Same complaint from the linter as above:

[06:35:37.929] Missing "export LC_ALL=C" (to avoid locale dependence) as first non-comment non-empty line in src/ipc/libmultiprocess/ci/configs/default.sh
...

I tested locally that bitcoin-core/libmultiprocess@8ac8b4c is enough to quiet the linter.


CentOS still sees a segfault for ipc_tests: https://cirrus-ci.com/task/6110371448094720

The previous releases build has a warning about an unused result:

[08:59:33.978] cd /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc && /usr/bin/ccache /usr/bin/g++-11 -DABORT_ON_FAILED_ASSUME -DDEBUG -DDEBUG_LOCKCONTENTION -DDEBUG_LOCKORDER -DKJ_USE_FIBERS -DRPC_DOC_CHECK -I/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src -I/ci_container_base/src -I/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu -I/ci_container_base/src/ipc/libmultiprocess/include -I/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/include -I/ci_container_base/src/univalue/include -isystem /ci_container_base/depends/x86_64-pc-linux-gnu/include -funsigned-char -g2 -O2 -fPIC -fvisibility=hidden -pthread -fno-extended-identifiers -fdebug-prefix-map=/ci_container_base/src=. -fmacro-prefix-map=/ci_container_base/src=. -fstack-reuse=none -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 -Wstack-protector -fstack-protector-all -fcf-protection=full -fstack-clash-protection -Werror -Wall -Wextra -Wformat -Wformat-security -Wvla -Wredundant-decls -Wdate-time -Wduplicated-branches -Wduplicated-cond -Wlogical-op -Woverloaded-virtual -Wsuggest-override -Wimplicit-fallthrough -Wunreachable-code -Wundef -Wno-unused-parameter -std=c++20 -MD -MT src/ipc/CMakeFiles/bitcoin_ipc.dir/interfaces.cpp.o -MF CMakeFiles/bitcoin_ipc.dir/interfaces.cpp.o.d -o CMakeFiles/bitcoin_ipc.dir/interfaces.cpp.o -c /ci_container_base/src/ipc/interfaces.cpp -DBOOST_MULTI_INDEX_ENABLE_SAFE_MODE 
[08:59:40.173] /ci_container_base/src/ipc/interfaces.cpp: In function ‘void ipc::{anonymous}::HandleCtrlC(int)’:
[08:59:40.173] /ci_container_base/src/ipc/interfaces.cpp:35:16: error: ignoring return value of ‘ssize_t write(int, const void*, size_t)’ declared with attribute ‘warn_unused_result’ [-Werror=unused-result]
[08:59:40.173]    35 |     (void)write(STDOUT_FILENO, g_ignore_ctrl_c.data(), g_ignore_ctrl_c.size());
[08:59:40.173]       |           ~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[08:59:40.173] cc1plus: all warnings being treated as errors

https://cirrus-ci.com/task/5547421494673408

Tidy complains of a deprecated header:

[08:57:34.287] [427/707][9.3s] clang-tidy-20 -p=/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu -quiet -load=/tidy-build/libbitcoin-tidy.so /ci_container_base/src/ipc/interfaces.cpp
[08:57:34.287] /ci_container_base/src/ipc/interfaces.cpp:21:10: error: inclusion of deprecated C++ header 'signal.h'; consider using 'csignal' instead [modernize-deprecated-headers,-warnings-as-errors]
[08:57:34.287]    21 | #include <signal.h>
[08:57:34.287]       |          ^~~~~~~~~~
[08:57:34.287]       |          <csignal>

https://cirrus-ci.com/task/6391846424805376

TSAN found another data race: https://cirrus-ci.com/task/6673321401516032?logs=ci#L3648

@Sjors (Owner, Author) commented Jun 27, 2025

Pushed fixes for some of the previous failures.

Still getting a data race (which I didn't try to fix): https://cirrus-ci.com/task/4902764083412992?logs=ci#L3539. It seems to be between `m_sync_cleanup_fns` in `Connection::~Connection` and `server_connection->onDisconnect([&] { server_connection.reset(); });` in the test.

The previous releases job now builds fine, but segfaults during the IPC test: https://cirrus-ci.com/task/6310138966966272

UndefinedBehaviorSanitizer: null-pointer-use:

 6/145 Test   #5: mptest ...............................***Failed    1.01 sec
[ TEST ] test.cpp:108: Call FooInterface methods
LOG0: {mptest-8187/mptest-8187} IPC client first request from current thread, constructing waiter
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.add$Params (a = 1, b = 2)
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #1 FooInterface.add$Params (a = 1, b = 2)
LOG0: {mptest-8187/mptest-8196} IPC server send response #1 FooInterface.add$Results (result = 3)
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.add$Results (result = 3)
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.pass$Params (arg = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #2 FooInterface.pass$Params (arg = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8196} IPC server send response #2 FooInterface.pass$Results (result = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.pass$Results (result = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.raise$Params (arg = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #3 FooInterface.raise$Params (arg = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8196} IPC server send response #3 FooInterface.raise$Results (error = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.raise$Results (error = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.initThreadMap$Params (threadMap = <external capability>)
LOG0: {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)} IPC client send FooCallback.destroy$Params (context = (thread = <external capability>, callbackThread = <external capability>))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #23 FooCallback.destroy$Params (context = (thread = <external capability>, callbackThread = <external capability>))
LOG0: {mptest-8187/mptest-8196} IPC server post request  #23 {mptest-8187/mptest-8187}
LOG0: {mptest-8187/mptest-8196} IPC server send response #23 FooCallback.destroy$Results ()
LOG0: {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)} IPC client recv FooCallback.destroy$Results ()
LOG0: {mptest-8187/mptest-8196} IPC server send response #21 FooInterface.callbackExtended$Results (result = 12)
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.callbackExtended$Results (result = 12)
LOG0: {mptest-8187/mptest-8196} IPC server destroy N2mp11ProxyServerINS_4test8messages16ExtendedCallbackEEE
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.passCustom$Params (arg = (v1 = "v1", v2 = 5))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #24 FooInterface.passCustom$Params (arg = (v1 = "v1", v2 = 5))
LOG0: {mptest-8187/mptest-8196} IPC server send response #24 FooInterface.passCustom$Results (result = (v1 = "v1", v2 = 5))
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.passCustom$Results (result = (v1 = "v1", v2 = 5))
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.passEmpty$Params (arg = ())
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #25 FooInterface.passEmpty$Params (arg = ())
LOG0: {mptest-8187/mptest-8196} IPC server send response #25 FooInterface.passEmpty$Results (result = ())
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.passEmpty$Results (result = ())
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.passMessage$Params (arg = (message = "init build"))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #26 FooInterface.passMessage$Params (arg = (message = "init build"))
LOG0: {mptest-8187/mptest-8196} IPC server send response #26 FooInterface.passMessage$Results (result = (message = "init build read call build"))
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.passMessage$Results (result = (message = "init build read call build"))
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.passMutable$Params (arg = (message = "init build"))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #27 FooInterface.passMutable$Params (arg = (message = "init build"))
LOG0: {mptest-8187/mptest-8196} IPC server send response #27 FooInterface.passMutable$Results (arg = (message = "init build pass call return"))
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.passMutable$Results (arg = (message = "init build pass call return"))
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.passFn$Params (context = (thread = <external capability>, callbackThread = <external capability>), fn = <external capability>)
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #28 FooInterface.passFn$Params (context = (thread = <external capability>, callbackThread = <external capability>), fn = <external capability>)
LOG0: {mptest-8187/mptest-8196} IPC server post request  #28 {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)}
LOG0: {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)} IPC client send FooFn.call$Params (context = (thread = <external capability>, callbackThread = <external capability>))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #29 FooFn.call$Params (context = (thread = <external capability>, callbackThread = <external capability>))
LOG0: {mptest-8187/mptest-8196} IPC server post request  #29 {mptest-8187/mptest-8187}
LOG0: {mptest-8187/mptest-8196} IPC server send response #29 FooFn.call$Results (result = 10)
LOG0: {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)} IPC client recv FooFn.call$Results (result = 10)
LOG0: {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)} IPC client destroy N2mp11ProxyClientINS_4test8messages5FooFnEEE
LOG0: {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)} IPC client send FooFn.destroy$Params (context = (thread = <external capability>, callbackThread = <external capability>))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #30 FooFn.destroy$Params (context = (thread = <external capability>, callbackThread = <external capability>))
LOG0: {mptest-8187/mptest-8196} IPC server post request  #30 {mptest-8187/mptest-8187}
LOG0: {mptest-8187/mptest-8196} IPC server send response #30 FooFn.destroy$Results ()
LOG0: {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)} IPC client recv FooFn.destroy$Results ()
LOG0: {mptest-8187/mptest-8196} IPC server send response #28 FooInterface.passFn$Results (result = 10)
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.passFn$[ PASS ] test.cpp:108: Call FooInterface methods (94576 μs)
[ TEST ] test.cpp:195: Call IPC method after client connection is closed
/home/runner/work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:612:40: runtime error: reference binding to null pointer of type 'Connection'
    #0 0x55f6c823b850 in void mp::clientInvoke<mp::ProxyClient<mp::test::messages::FooInterface>, capnp::Request<mp::test::messages::FooInterface::AddParams, mp::test::messages::FooInterface::AddResults> (mp::test::messages::FooInterface::Client::*)(kj::Maybe<capnp::MessageSize>), mp::ClientParam<mp::Accessor<mp::foo_fields::A, 1>, int>, mp::ClientParam<mp::Accessor<mp::foo_fields::B, 1>, int>, mp::ClientParam<mp::Accessor<mp::foo_fields::Result, 2>, int&>>(mp::ProxyClient<mp::test::messages::FooInterface>&, capnp::Request<mp::test::messages::FooInterface::AddParams, mp::test::messages::FooInterface::AddResults> (mp::test::messages::FooInterface::Client::* const&)(kj::Maybe<capnp::MessageSize>), mp::ClientParam<mp::Accessor<mp::foo_fields::A, 1>, int>&&, mp::ClientParam<mp::Accessor<mp::foo_fields::B, 1>, int>&&, mp::ClientParam<mp::Accessor<mp::foo_fields::Result, 2>, int&>&&) /home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/proxy-types.h:612:25
    #1 0x55f6c823722a in mp::ProxyClient<mp::test::messages::FooInterface>::add(int, int) /home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/mp/test/foo.capnp.proxy-client.c++:20:5
    #2 0x55f6c81f69df in mp::test::TestCase195::run() /home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:204:14
    #3 0x7f5efeb7d0e1  (/lib/x86_64-linux-gnu/libkj-test-1.0.1.so+0x50e1) (BuildId: 2ff7f524274168e50347a2d6dd423f273ae8d268)
    #4 0x7f5efeb7d657  (/lib/x86_64-linux-gnu/libkj-test-1.0.1.so+0x5657) (BuildId: 2ff7f524274168e50347a2d6dd423f273ae8d268)
    #5 0x7f5efe90c37f in kj::MainBuilder::MainImpl::operator()(kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>) (/lib/x86_64-linux-gnu/libkj-1.0.1.so+0x5537f) (BuildId: 4b52c0e2756bcb53e58e705fcb10bab8f63f24fd)
    #6 0x7f5efe907499 in kj::runMainAndExit(kj::ProcessContext&, kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>&&, int, char**) (/lib/x86_64-linux-gnu/libkj-1.0.1.so+0x50499) (BuildId: 4b52c0e2756bcb53e58e705fcb10bab8f63f24fd)
    #7 0x7f5efeb7bfcb in main (/lib/x86_64-linux-gnu/libkj-test-1.0.1.so+0x3fcb) (BuildId: 2ff7f524274168e50347a2d6dd423f273ae8d268)
    #8 0x7f5efe3251c9  (/lib/x86_64-linux-gnu/libc.so.6+0x2a1c9) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
    #9 0x7f5efe32528a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2a28a) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
    #10 0x55f6c8108604 in _start (/home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/mptest+0x11f604) (BuildId: 1d9786de21dd346184acb11afd6cab2561d8182f)

SUMMARY: UndefinedBehaviorSanitizer: null-pointer-use /home/runner/work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:612:40

https://github.com/Sjors/bitcoin/actions/runs/15921104847/job/44907875067?pr=90

ARM was just a timeout, which happens a lot on my CI machine.

ryanofsky added a commit to bitcoin-core/libmultiprocess that referenced this pull request Jun 27, 2025
…ence.

3a6db38 ci: rename configs to .bash (Sjors Provoost)
401e0ce ci: add copyright to bash scripts (Sjors Provoost)
e956467 ci: export LC_ALL (Sjors Provoost)

Pull request description:

  Prevents linter issues like the one reported here: Sjors/bitcoin#90 (comment)

  But probably also just a good idea.

  Also makes the use of `#!/usr/bin/env bash` consistent and adds a copyright header (mostly for aesthetic reasons).

ACKs for top commit:
  ryanofsky:
    Code review ACK 3a6db38. Thanks!

Tree-SHA512: de4a3fe62126f0f37e9b33c6e443ee07b2968ac8df18b7d91ce482ce63aeab896b216a7efcc638945ef4f83b4429471433c3515b5bbe744fb238bec08792f7af
@Sjors force-pushed the 2025/06/ipc-yea-plus branch from 35caa56 to f3a5285 on June 28, 2025 09:23
@Sjors (Owner, Author) commented Jun 28, 2025

Updated to use the latest bitcoin-core/libmultiprocess@258b83c

@Sjors force-pushed the 2025/06/ipc-yea-plus branch from f3a5285 to 95de562 on June 28, 2025 09:26
@Sjors (Owner, Author) commented Jun 28, 2025

CentOS still happily segfaults: https://cirrus-ci.com/task/5925562629226496

As does previous releases: https://cirrus-ci.com/task/5362612675805184

MSan complains of MemorySanitizer: use-of-uninitialized-value: https://cirrus-ci.com/task/4658925234028544

ASan also complains: SUMMARY: UndefinedBehaviorSanitizer: null-pointer-use /home/runner/work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:612:40
https://github.com/Sjors/bitcoin/actions/runs/15942739184/job/44973004692?pr=90

@ryanofsky

MSan complains of MemorySanitizer: use-of-uninitialized-value: https://cirrus-ci.com/task/4658925234028544

For this I think the following fix should work (not sure about the other failures yet, but they might be related):


--- a/test/mp/test/test.cpp
+++ b/test/mp/test/test.cpp
@@ -268,12 +268,12 @@ KJ_TEST("Calling IPC method, disconnecting and blocking during the call")
     // ProxyServer objects associated with the connection. Having an in-progress
     // RPC call requires keeping the ProxyServer longer.
 
+    std::promise<void> signal;
     TestSetup setup{/*client_owns_connection=*/false};
     ProxyClient<messages::FooInterface>* foo = setup.client.get();
     KJ_EXPECT(foo->add(1, 2) == 3);
 
     foo->initThreadMap();
-    std::promise<void> signal;
     setup.server->m_impl->m_fn = [&] {
         EventLoopRef loop{*setup.server->m_context.loop};
         setup.client_disconnect();
@@ -289,6 +289,11 @@ KJ_TEST("Calling IPC method, disconnecting and blocking during the call")
     }
     KJ_EXPECT(disconnected);
 
+    // Now that the disconnect has been detected, set signal allowing the
+    // callFnAsync() IPC call to return. Since signalling may not wake up the
+    // thread right away, it is important for the signal variable to be declared
+    // *before* the TestSetup variable so is available during the entire
+    // TestSetup shutdown process.
     signal.set_value();
 }
 

@ryanofsky

Actually the other failures don't look related. The previous releases & centos jobs are failing with a segfault in ipc_tests, not mptest, so they are not failing the same unit tests. Will probably need to try to reproduce these failures locally in podman and get a stack trace to debug.

The UndefinedBehaviorSanitizer failure is also calling out technically undefined behavior that should be fixed, but not behavior that should cause any problems in practice. It just catches code initializing an unused reference variable from a null pointer.

Probably also would be good to add more CI jobs in libmultiprocess to test the other sanitizers.

So lots more to do here.

@ryanofsky

The following should fix the UndefinedBehaviorSanitizer failure:


--- a/include/mp/proxy-types.h
+++ b/include/mp/proxy-types.h
@@ -609,42 +609,44 @@ void clientInvoke(ProxyClient& proxy_client, const GetRequest& get_request, Fiel
             << "{" << g_thread_context.thread_name
             << "} IPC client first request from current thread, constructing waiter";
     }
-    ClientInvokeContext invoke_context{*proxy_client.m_context.connection, g_thread_context};
+    ThreadContext& thread_context{g_thread_context};
+    std::optional<ClientInvokeContext> invoke_context;
     std::exception_ptr exception;
     std::string kj_exception;
     bool done = false;
     const char* disconnected = nullptr;
     proxy_client.m_context.loop->sync([&]() {
         if (!proxy_client.m_context.connection) {
-            const std::unique_lock<std::mutex> lock(invoke_context.thread_context.waiter->m_mutex);
+            const std::unique_lock<std::mutex> lock(thread_context.waiter->m_mutex);
             done = true;
             disconnected = "IPC client method called after disconnect.";
-            invoke_context.thread_context.waiter->m_cv.notify_all();
+            thread_context.waiter->m_cv.notify_all();
             return;
         }
 
         auto request = (proxy_client.m_client.*get_request)(nullptr);
         using Request = CapRequestTraits<decltype(request)>;
         using FieldList = typename ProxyClientMethodTraits<typename Request::Params>::Fields;
-        IterateFields().handleChain(invoke_context, request, FieldList(), typename FieldObjs::BuildParams{&fields}...);
+        invoke_context.emplace(*proxy_client.m_context.connection, thread_context);
+        IterateFields().handleChain(*invoke_context, request, FieldList(), typename FieldObjs::BuildParams{&fields}...);
         proxy_client.m_context.loop->logPlain()
-            << "{" << invoke_context.thread_context.thread_name << "} IPC client send "
+            << "{" << thread_context.thread_name << "} IPC client send "
             << TypeName<typename Request::Params>() << " " << LogEscape(request.toString());
 
         proxy_client.m_context.loop->m_task_set->add(request.send().then(
             [&](::capnp::Response<typename Request::Results>&& response) {
                 proxy_client.m_context.loop->logPlain()
-                    << "{" << invoke_context.thread_context.thread_name << "} IPC client recv "
+                    << "{" << thread_context.thread_name << "} IPC client recv "
                     << TypeName<typename Request::Results>() << " " << LogEscape(response.toString());
                 try {
                     IterateFields().handleChain(
-                        invoke_context, response, FieldList(), typename FieldObjs::ReadResults{&fields}...);
+                        *invoke_context, response, FieldList(), typename FieldObjs::ReadResults{&fields}...);
                 } catch (...) {
                     exception = std::current_exception();
                 }
-                const std::unique_lock<std::mutex> lock(invoke_context.thread_context.waiter->m_mutex);
+                const std::unique_lock<std::mutex> lock(thread_context.waiter->m_mutex);
                 done = true;
-                invoke_context.thread_context.waiter->m_cv.notify_all();
+                thread_context.waiter->m_cv.notify_all();
             },
             [&](const ::kj::Exception& e) {
                 if (e.getType() == ::kj::Exception::Type::DISCONNECTED) {
@@ -652,16 +654,16 @@ void clientInvoke(ProxyClient& proxy_client, const GetRequest& get_request, Fiel
                 } else {
                     kj_exception = kj::str("kj::Exception: ", e).cStr();
                     proxy_client.m_context.loop->logPlain()
-                        << "{" << invoke_context.thread_context.thread_name << "} IPC client exception " << kj_exception;
+                        << "{" << thread_context.thread_name << "} IPC client exception " << kj_exception;
                 }
-                const std::unique_lock<std::mutex> lock(invoke_context.thread_context.waiter->m_mutex);
+                const std::unique_lock<std::mutex> lock(thread_context.waiter->m_mutex);
                 done = true;
-                invoke_context.thread_context.waiter->m_cv.notify_all();
+                thread_context.waiter->m_cv.notify_all();
             }));
     });
 
-    std::unique_lock<std::mutex> lock(invoke_context.thread_context.waiter->m_mutex);
-    invoke_context.thread_context.waiter->wait(lock, [&done]() { return done; });
+    std::unique_lock<std::mutex> lock(thread_context.waiter->m_mutex);
+    thread_context.waiter->wait(lock, [&done]() { return done; });
     if (exception) std::rethrow_exception(exception);
     if (!kj_exception.empty()) proxy_client.m_context.loop->raise() << kj_exception;
     if (disconnected) proxy_client.m_context.loop->raise() << disconnected;

@Sjors force-pushed the 2025/06/ipc-yea-plus branch from 95de562 to 9f9c8b1 on June 30, 2025 06:16
@Sjors (Owner, Author) commented Jun 30, 2025

Updated to the latest commit from bitcoin-core/libmultiprocess@71d3107 and included the above two patches.

@Sjors force-pushed the 2025/06/ipc-yea-plus branch from 9f9c8b1 to 2f22406 on June 30, 2025 07:08
@Sjors (Owner, Author) commented Jun 30, 2025

@ryanofsky

I reproduced centos ipc_tests crash locally in podman, and could get a stack trace and fix the bug pretty easily. The fix is:

--- a/src/test/ipc_test.cpp
+++ b/src/test/ipc_test.cpp
@@ -62,7 +62,8 @@ void IpcPipeTest()
         auto connection_client = std::make_unique<mp::Connection>(loop, kj::mv(pipe.ends[0]));
         auto foo_client = std::make_unique<mp::ProxyClient<gen::FooInterface>>(
             connection_client->m_rpc_system->bootstrap(mp::ServerVatId().vat_id).castAs<gen::FooInterface>(),
-            connection_client.release(), /* destroy_connection= */ true);
+            connection_client.get(), /* destroy_connection= */ true);
+        connection_client.release();
         foo_promise.set_value(std::move(foo_client));
 
         auto connection_server = std::make_unique<mp::Connection>(loop, kj::mv(pipe.ends[1]), [&](mp::Connection& connection) {

(The cause is just `connection_client.release()` causing `connection_client->...` to segfault, depending on the order in which the function arguments are evaluated, which varies by compiler.)

I'll try to reproduce the mptest timeout next.

@Sjors (Owner, Author) commented Jun 30, 2025

@ryanofsky commented Jun 30, 2025

Thanks! The following patch should fix mptest timeouts:


--- a/src/ipc/libmultiprocess/include/mp/type-context.h
+++ b/src/ipc/libmultiprocess/include/mp/type-context.h
@@ -69,7 +69,6 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn&
                 const auto& params = call_context.getParams();
                 Context::Reader context_arg = Accessor::get(params);
                 ServerContext server_context{server, call_context, req};
-                bool disconnected{false};
                 {
                     // Before invoking the function, store a reference to the
                     // callbackThread provided by the client in the
@@ -101,7 +100,7 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn&
                     // recursive call (IPC call calling back to the caller which
                     // makes another IPC call), so avoid modifying the map.
                     const bool erase_thread{inserted};
-                    KJ_DEFER({
+                    KJ_DEFER(if (erase_thread) {
                         std::unique_lock<std::mutex> lock(thread_context.waiter->m_mutex);
                         // Call erase here with a Connection* argument instead
                         // of an iterator argument, because the `request_thread`
@@ -112,24 +111,10 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn&
                         // erases the thread from the map, and also because the
                         // ProxyServer<Thread> destructor calls
                         // request_threads.clear().
-                        if (erase_thread) {
-                            disconnected = !request_threads.erase(server.m_context.connection);
-                        } else {
-                            disconnected = !request_threads.count(server.m_context.connection);
-                        }
+                        request_threads.erase(server.m_context.connection);
                     });
                     fn.invoke(server_context, args...);
                 }
-                if (disconnected) {
-                    // If disconnected is true, the Connection object was
-                    // destroyed during the method call. Deal with this by
-                    // returning without ever fulfilling the promise, which will
-                    // cause the ProxyServer object to leak. This is not ideal,
-                    // but fixing the leak will require nontrivial code changes
-                    // because there is a lot of code assuming ProxyServer
-                    // objects are destroyed before Connection objects.
-                    return;
-                }
                 KJ_IF_MAYBE(exception, kj::runCatchingExceptions([&]() {
                     server.m_context.loop->sync([&] {
                         auto fulfiller_dispose = kj::mv(fulfiller);

I was able to reproduce the problem somewhat reliably by running the "native_asan" CI locally in podman and running mptest in a loop, then debugging it by getting a stack trace at the point of the hang. I was later able to reproduce the bug much more reliably by adding a sleep call to the "disconnecting and blocking" test:

--- a/test/mp/test/test.cpp
+++ b/test/mp/test/test.cpp
@@ -278,6 +278,7 @@ KJ_TEST("Calling IPC method, disconnecting and blocking during the call")
         EventLoopRef loop{*setup.server->m_context.loop};
         setup.client_disconnect();
         signal.get_future().get();
+        sleep(1); // Wait an extra second to test what happens if shutdown proceeds while callFnAsync is still held up.
     };
 
     bool disconnected{false};

So I want to add a new test case to cover this and prevent any regressions.

The fix above just reverts an old workaround introduced in bitcoin-core/libmultiprocess#118, which is no longer necessary after bitcoin-core/libmultiprocess@315ff53 from bitcoin-core/libmultiprocess#160.

ryanofsky added a commit to ryanofsky/libmultiprocess that referenced this pull request Jul 1, 2025
Fix MemorySanitizer: use-of-uninitialized-value error in "disconnecting and
blocking" promise::get_future call reported:

- Sjors/bitcoin#90 (comment)
- https://cirrus-ci.com/task/4658925234028544

and fixed:

- Sjors/bitcoin#90 (comment)

An issue exists to add an MSAN CI job to catch errors like this more quickly in
the future bitcoin-core#188

Error looks like:

[ TEST ] test.cpp:251: Calling IPC method, disconnecting and blocking during the call
...
MemorySanitizer: use-of-uninitialized-value
    #0 0x7f83ecb19853 in std::__1::promise<void>::get_future() /msan/llvm-project/libcxx/src/future.cpp:154:16
    bitcoin-core#1 0x55bf563af03c in mp::test::TestCase251::run()::$_0::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:280:16
    bitcoin-core#2 0x55bf563af03c in decltype(std::declval<mp::test::TestCase251::run()::$_0&>()()) std::__1::__invoke[abi:de200100]<mp::test::TestCase251::run()::$_0&>(mp::test::TestCase251::run()::$_0&) /msan/cxx_build/include/c++/v1/__type_traits/invoke.h:179:25
    bitcoin-core#3 0x55bf563af03c in void std::__1::__invoke_void_return_wrapper<void, true>::__call[abi:de200100]<mp::test::TestCase251::run()::$_0&>(mp::test::TestCase251::run()::$_0&) /msan/cxx_build/include/c++/v1/__type_traits/invoke.h:251:5
    bitcoin-core#4 0x55bf563af03c in void std::__1::__invoke_r[abi:de200100]<void, mp::test::TestCase251::run()::$_0&>(mp::test::TestCase251::run()::$_0&) /msan/cxx_build/include/c++/v1/__type_traits/invoke.h:273:10
    bitcoin-core#5 0x55bf563af03c in std::__1::__function::__alloc_func<mp::test::TestCase251::run()::$_0, std::__1::allocator<mp::test::TestCase251::run()::$_0>, void ()>::operator()[abi:de200100]() /msan/cxx_build/include/c++/v1/__functional/function.h:167:12
    bitcoin-core#6 0x55bf563af03c in std::__1::__function::__func<mp::test::TestCase251::run()::$_0, std::__1::allocator<mp::test::TestCase251::run()::$_0>, void ()>::operator()() /msan/cxx_build/include/c++/v1/__functional/function.h:319:10
    bitcoin-core#7 0x55bf565f1b25 in std::__1::__function::__value_func<void ()>::operator()[abi:de200100]() const /msan/cxx_build/include/c++/v1/__functional/function.h:436:12
    bitcoin-core#8 0x55bf565f1b25 in std::__1::function<void ()>::operator()() const /msan/cxx_build/include/c++/v1/__functional/function.h:995:10
    bitcoin-core#9 0x55bf565f1b25 in mp::test::FooImplementation::callFnAsync() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/foo.h:83:40
    bitcoin-core#10 0x55bf565f1b25 in decltype(auto) mp::ProxyMethodTraits<mp::test::messages::FooInterface::CallFnAsyncParams, void>::invoke<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>>(mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/proxy.h:288:16
    bitcoin-core#11 0x55bf565f1b25 in decltype(auto) mp::ServerCall::invoke<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>>(mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::TypeList<>) const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/proxy-types.h:448:16
    bitcoin-core#12 0x55bf565f1b25 in std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()::operator()() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/type-context.h:121:24
    bitcoin-core#13 0x55bf565f12d1 in kj::Function<void ()>::Impl<std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()>::operator()() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/function.h:142:14
    bitcoin-core#14 0x55bf5641125e in kj::Function<void ()>::operator()() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/function.h:119:12
    bitcoin-core#15 0x55bf5641125e in void mp::Unlock<std::__1::unique_lock<std::__1::mutex>, kj::Function<void ()>&>(std::__1::unique_lock<std::__1::mutex>&, kj::Function<void ()>&) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/util.h:198:5
    bitcoin-core#16 0x55bf5667e45b in void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'())::'lambda'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/include/mp/proxy-io.h:294:17
    bitcoin-core#17 0x55bf5667e45b in void std::__1::condition_variable::wait<void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'())::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()) /msan/cxx_build/include/c++/v1/__condition_variable/condition_variable.h:146:11
    bitcoin-core#18 0x55bf5667e45b in void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/include/mp/proxy-io.h:285:14
    bitcoin-core#19 0x55bf5667e45b in mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:404:34
    bitcoin-core#20 0x55bf5667e45b in decltype(std::declval<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>()()) std::__1::__invoke[abi:de200100]<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>(mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0&&) /msan/cxx_build/include/c++/v1/__type_traits/invoke.h:179:25
    bitcoin-core#21 0x55bf5667e45b in void std::__1::__thread_execute[abi:de200100]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>&, std::__1::__tuple_indices<...>) /msan/cxx_build/include/c++/v1/__thread/thread.h:199:3
    bitcoin-core#22 0x55bf5667e45b in void* std::__1::__thread_proxy[abi:de200100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>>(void*) /msan/cxx_build/include/c++/v1/__thread/thread.h:208:3
    bitcoin-core#23 0x7f83ec69caa3  (/lib/x86_64-linux-gnu/libc.so.6+0x9caa3) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
    bitcoin-core#24 0x7f83ec729c3b  (/lib/x86_64-linux-gnu/libc.so.6+0x129c3b) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
[11:46:33.109]
  Member fields were destroyed
    #0 0x55bf56348abd in __sanitizer_dtor_callback_fields /msan/llvm-project/compiler-rt/lib/msan/msan_interceptors.cpp:1044:5
    #1 0x7f83ecb196de in ~promise /msan/cxx_build/include/c++/v1/future:1341:22
    #2 0x7f83ecb196de in std::__1::promise<void>::~promise() /msan/llvm-project/libcxx/src/future.cpp:151:1
    #3 0x55bf563adb36 in mp::test::TestCase251::run() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:293:1
    #4 0x55bf5669252e in kj::TestRunner::run()::'lambda'()::operator()() const /usr/src/kj/test.c++:318:11
    #5 0x55bf5669252e in kj::Maybe<kj::Exception> kj::runCatchingExceptions<kj::TestRunner::run()::'lambda'()>(kj::TestRunner::run()::'lambda'()&&) /usr/src/kj/exception.h:371:5
    #6 0x55bf5669071e in kj::TestRunner::run() /usr/src/kj/test.c++:318:11
    #7 0x55bf5668f977 in auto kj::TestRunner::getMain()::'lambda5'(auto&, auto&&...)::operator()<kj::TestRunner>(auto&, auto&&...) /usr/src/kj/test.c++:217:27
    #8 0x55bf5668f977 in auto kj::_::BoundMethod<kj::TestRunner&, kj::TestRunner::getMain()::'lambda5'(auto&, auto&&...), kj::TestRunner::getMain()::'lambda6'(auto&, auto&&...)>::operator()<>() /usr/src/kj/function.h:263:12
    #9 0x55bf5668f977 in kj::Function<kj::MainBuilder::Validity ()>::Impl<kj::_::BoundMethod<kj::TestRunner&, kj::TestRunner::getMain()::'lambda5'(auto&, auto&&...), kj::TestRunner::getMain()::'lambda6'(auto&, auto&&...)>>::operator()() /usr/src/kj/function.h:142:14
    #10 0x55bf56b7362c in kj::Function<kj::MainBuilder::Validity ()>::operator()() /usr/src/kj/function.h:119:12
    #11 0x55bf56b7362c in kj::MainBuilder::MainImpl::operator()(kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>) /usr/src/kj/main.c++:623:5
    #12 0x55bf56b8865c in kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>::Impl<kj::MainBuilder::MainImpl>::operator()(kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>) /usr/src/kj/function.h:142:14
    #13 0x55bf56b6a592 in kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>::operator()(kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>) /usr/src/kj/function.h:119:12
    #14 0x55bf56b6a592 in kj::runMainAndExit(kj::ProcessContext&, kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>&&, int, char**)::$_0::operator()() const /usr/src/kj/main.c++:228:5
    #15 0x55bf56b6a592 in kj::Maybe<kj::Exception> kj::runCatchingExceptions<kj::runMainAndExit(kj::ProcessContext&, kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>&&, int, char**)::$_0>(kj::runMainAndExit(kj::ProcessContext&, kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>&&, int, char**)::$_0&&) /usr/src/kj/exception.h:371:5
    #16 0x55bf56b69b5d in kj::runMainAndExit(kj::ProcessContext&, kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>&&, int, char**) /usr/src/kj/main.c++:228:5
    #17 0x55bf5668be8f in main /usr/src/kj/test.c++:381:1
    #18 0x7f83ec62a1c9  (/lib/x86_64-linux-gnu/libc.so.6+0x2a1c9) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
    #19 0x7f83ec62a28a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2a28a) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
    #20 0x55bf5630b374 in _start (/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/mptest+0x77374)
[11:46:33.109]
SUMMARY: MemorySanitizer: use-of-uninitialized-value /msan/llvm-project/libcxx/src/future.cpp:154:16 in std::__1::promise<void>::get_future()
Exiting
ryanofsky added a commit to ryanofsky/libmultiprocess that referenced this pull request Jul 1, 2025
Fix UndefinedBehaviorSanitizer: null-pointer-use error binding a null pointer
to an unused reference variable. This error should not cause problems in
practice because the reference is not used, but is technically undefined
behavior. Issue was reported:

- Sjors/bitcoin#90 (comment)
- https://github.com/Sjors/bitcoin/actions/runs/15921104847/job/44907875067?pr=90
- Sjors/bitcoin#90 (comment)
- https://github.com/Sjors/bitcoin/actions/runs/15942739184/job/44973004692?pr=90

and fixed:

- Sjors/bitcoin#90 (comment)

Error looks like:

[ TEST ] test.cpp:197: Call IPC method after client connection is closed
/home/runner/work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:612:40: runtime error: reference binding to null pointer of type 'Connection'
    #0 0x5647ad32fc50 in void mp::clientInvoke<mp::ProxyClient<mp::test::messages::FooInterface>, capnp::Request<mp::test::messages::FooInterface::AddParams, mp::test::messages::FooInterface::AddResults> (mp::test::messages::FooInterface::Client::*)(kj::Maybe<capnp::MessageSize>), mp::ClientParam<mp::Accessor<mp::foo_fields::A, 1>, int>, mp::ClientParam<mp::Accessor<mp::foo_fields::B, 1>, int>, mp::ClientParam<mp::Accessor<mp::foo_fields::Result, 2>, int&>>(mp::ProxyClient<mp::test::messages::FooInterface>&, capnp::Request<mp::test::messages::FooInterface::AddParams, mp::test::messages::FooInterface::AddResults> (mp::test::messages::FooInterface::Client::* const&)(kj::Maybe<capnp::MessageSize>), mp::ClientParam<mp::Accessor<mp::foo_fields::A, 1>, int>&&, mp::ClientParam<mp::Accessor<mp::foo_fields::B, 1>, int>&&, mp::ClientParam<mp::Accessor<mp::foo_fields::Result, 2>, int&>&&) /home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/proxy-types.h:612:25
    #1 0x5647ad32b62a in mp::ProxyClient<mp::test::messages::FooInterface>::add(int, int) /home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/mp/test/foo.capnp.proxy-client.c++:20:5
    #2 0x5647ad2eb9ef in mp::test::TestCase197::run() /home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:206:14
    #3 0x7f6aaf7100e1  (/lib/x86_64-linux-gnu/libkj-test-1.0.1.so+0x50e1) (BuildId: 2ff7f524274168e50347a2d6dd423f273ae8d268)
    #4 0x7f6aaf710657  (/lib/x86_64-linux-gnu/libkj-test-1.0.1.so+0x5657) (BuildId: 2ff7f524274168e50347a2d6dd423f273ae8d268)
    #5 0x7f6aaf49f37f in kj::MainBuilder::MainImpl::operator()(kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>) (/lib/x86_64-linux-gnu/libkj-1.0.1.so+0x5537f) (BuildId: 4b52c0e2756bcb53e58e705fcb10bab8f63f24fd)
    #6 0x7f6aaf49a499 in kj::runMainAndExit(kj::ProcessContext&, kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>&&, int, char**) (/lib/x86_64-linux-gnu/libkj-1.0.1.so+0x50499) (BuildId: 4b52c0e2756bcb53e58e705fcb10bab8f63f24fd)
    #7 0x7f6aaf70efcb in main (/lib/x86_64-linux-gnu/libkj-test-1.0.1.so+0x3fcb) (BuildId: 2ff7f524274168e50347a2d6dd423f273ae8d268)
    #8 0x7f6aaeeb81c9  (/lib/x86_64-linux-gnu/libc.so.6+0x2a1c9) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
    #9 0x7f6aaeeb828a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2a28a) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
    #10 0x5647ad1fd614 in _start (/home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/mptest+0x11d614) (BuildId: a172f78701ced3f93ad52c3562f181811a1c98e8)

SUMMARY: UndefinedBehaviorSanitizer: null-pointer-use /home/runner/work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:612:40
ryanofsky added 7 commits July 1, 2025 15:07
Use EventLoopRef to avoid reference counting bugs and be more exception safe
and deal with removal of addClient/removeClient methods in
bitcoin-core/libmultiprocess#160

A test update is also required due to
bitcoin-core/libmultiprocess#160 to deal with changed
reference count semantics. In IpcPipeTest(), it is now necessary to destroy
the client Proxy object instead of just the client Connection object to
decrease the event loop reference count and allow the loop to exit so the test
does not hang on shutdown.
Currently this code is not called in unit tests. Calling it should make it
possible to write tests for things like IPC exceptions being thrown during
shutdown.
This fixes behavior reported by Antoine Poinsot <[email protected]>
bitcoin#29409 (comment) where if
an IPC client is connected, the node will wait forever for it to disconnect
before exiting.
Fix some comments that were referring to previous versions of these methods and
did not make sense.
This fixes an error reported by Antoine Poinsot <[email protected]> in
bitcoin-core/libmultiprocess#123 that does not happen
in master, but does happen with bitcoin#10102
applied, where if Ctrl-C is pressed when `bitcoin-node` is started, it is
handled by both `bitcoin-node` and `bitcoin-wallet` processes, causing the
wallet to shutdown abruptly instead of waiting for the node and shutting down
cleanly.

This change fixes the problem by having the wallet process print to stdout when
it receives a Ctrl-C signal but not otherwise react, letting the node shut
everything down cleanly.
This fixes an error reported by Antoine Poinsot <[email protected]> in
bitcoin-core/libmultiprocess#123 that does not happen
in master, but does happen with bitcoin#10102
applied, where if the child bitcoin-wallet process is killed (either by an
external signal or by Ctrl-C as reported in the issue) the bitcoin-node process
will not shutdown cleanly after that because chain client stop()
calls will fail.

This change fixes the problem by handling ipc::Exception errors thrown during
the stop() calls, and it relies on the fixes to disconnect detection
implemented in bitcoin-core/libmultiprocess#160 to work
effectively.
ryanofsky added a commit to bitcoin-core/libmultiprocess that referenced this pull request Jul 1, 2025
6f340a5 doc: fix DrahtBot LLM Linter error (Ryan Ofsky)
c6f7fdf type-context: revert client disconnect workaround (Ryan Ofsky)
e09143d proxy-types: fix UndefinedBehaviorSanitizer: null-pointer-use (Ryan Ofsky)
84b292f mptest: fix MemorySanitizer: use-of-uninitialized-value (Ryan Ofsky)
fe4a188 proxy-io: fix race conditions in disconnect callback code (Ryan Ofsky)
d8011c8 proxy-io: fix race conditions in ProxyClientBase cleanup handler (Ryan Ofsky)
97e82ce doc: Add note about Waiter::m_mutex and interaction with the EventLoop::m_mutex (Ryan Ofsky)
81d58f5 refactor: Rename ProxyClient cleanup_it variable (Ryan Ofsky)
07230f2 refactor: rename ProxyClient<Thread>::m_cleanup_it (Ryan Ofsky)
0d986ff mptest: fix race condition in TestSetup constructor (Ryan Ofsky)
d2f6aa2 ci: add thread sanitizer job (Ryan Ofsky)

Pull request description:

  Recently merged PR #160 expanded unit tests to cover various unclean disconnection scenarios, but the new unit tests cause failures in bitcoin CI, despite passing in local CI (which doesn't test as many sanitizers and platforms). Some of the errors are just test bugs, but others are real library bugs and race conditions.

  The bugs were reported in two threads starting Sjors/bitcoin#90 (comment) and bitcoin/bitcoin#32345 (comment), and they are described in detail in individual commit messages in this PR. The changes here fix all the known bugs and add new CI jobs and tests to detect them and catch regressions.

ACKs for top commit:
  Sjors:
    re-ACK 6f340a5

Tree-SHA512: 20aa1992080a0329739d663edb636f218e88d521b17cd66c328051629c8efea802c0ac52a44d51cd58cfe60cc6beb6cdd4a2afa00a0ce36801724540f9e43d42
@Sjors Sjors force-pushed the 2025/06/ipc-yea-plus branch from 5066d22 to 698fa06 Compare July 2, 2025 07:34
@Sjors
Copy link
Owner Author

Sjors commented Jul 2, 2025

@ryanofsky CI is happy, but I don't think you changed anything since the last run? So the previous TSAN failure may still be worth investigating: #90 (comment)

@ryanofsky
Copy link

@ryanofsky CI is happy, but I don't think you changed anything since the last run? So the previous TSAN failure may still be worth investigating: #90 (comment)

tl;dr This seems like a real bug and I have a fix below, but it is messy and I want to clean it up.


Looking into it, this does seem like a real error. If I build the mptest binary with tsan and run it in a loop for a few minutes I can reproduce a similar error.

From the stack trace it looks like there is still a race condition in the ~ProxyClient<Thread> destructor on line 339:

   333  ProxyClient<Thread>::~ProxyClient()
   334  {
   335      // If thread is being destroyed before connection is destroyed, remove the
   336      // cleanup callback that was registered to handle the connection being
   337      // destroyed before the thread being destroyed.
   338      if (m_disconnect_cb) {
   339          m_context.connection->removeSyncCleanup(*m_disconnect_cb);
   340      }
   341  }

All the variables there should be protected by mutexes:

  • m_disconnect_cb is protected by Waiter::m_mutex and this function is called with that mutex locked.
  • The list of cleanups that removeSyncCleanup modifies is protected by the EventLoop::m_mutex and it locks that mutex internally before modifying it.
  • The m_context.connection pointer is also safe to use because the Connection object won't be destroyed until it calls the disconnect callback m_disconnect_cb in its destructor, and that callback internally locks Waiter::m_mutex and calls m_disconnect_cb.reset().

But there is still a race, because there is an instant when the ~Connection destructor is about to call m_disconnect_cb but has not started the call yet, so EventLoop::m_mutex and Waiter::m_mutex are both unlocked. This happens in the ~Connection destructor on line 139:

   135      Lock lock{m_loop->m_mutex};
   136      while (!m_sync_cleanup_fns.empty()) {
   137          CleanupList fn;
   138          fn.splice(fn.begin(), m_sync_cleanup_fns, m_sync_cleanup_fns.begin());
   139          Unlock(lock, fn.front());
   140      }

If timing is unlucky, the ~ProxyClient<Thread> destructor above could be called from another thread at this point, so m_disconnect_cb would be an invalid iterator (because of the splice call on line 138), so the removeSyncCleanup call on line 339 would probably segfault.

That particular race is not happening in the tsan trace. The tsan trace shows the ProxyClient<Thread> destructor running before the Connection destructor instead of the other way around, but tsan is correctly detecting that a race like this could happen based on the locks that are present. The following change seems to fix this problem when I run mptest in a loop but is a pretty messy solution that I'd want to simplify:

diff

--- a/include/mp/proxy-io.h
+++ b/include/mp/proxy-io.h
@@ -78,6 +78,10 @@ struct ProxyClient<Thread> : public ProxyClientBase<Thread, ::capnp::Void>
     //! Since this variable is accessed from multiple threads, accesses should
     //! be guarded with the associated Waiter::m_mutex.
     std::optional<CleanupIt> m_disconnect_cb;
+    //! State shared with disconnect callback telling it if this ProxyClient is
+    //! already destroyed and no longer should be accessed. This is also
+    //! accessed from multiple threads and guarded with Waiter::m_mutex.
+    std::shared_ptr<bool> m_destroyed{std::make_shared<bool>(false)};
 };
 
 template <>
@@ -343,6 +347,7 @@ public:
     //! any new i/o.
     CleanupIt addSyncCleanup(std::function<void()> fn);
     void removeSyncCleanup(CleanupIt it);
+    void removeSyncCleanup(CleanupIt it, const Lock& lock);
 
     //! Add disconnect handler.
     template <typename F>
--- a/include/mp/util.h
+++ b/include/mp/util.h
@@ -173,7 +173,7 @@ public:
     ~Lock() MP_RELEASE() = default;
     void unlock() MP_RELEASE() { m_lock.unlock(); }
     void lock() MP_ACQUIRE() { m_lock.lock(); }
-    void assert_locked(Mutex& mutex) MP_ASSERT_CAPABILITY() MP_ASSERT_CAPABILITY(mutex)
+    void assert_locked(Mutex& mutex) const MP_ASSERT_CAPABILITY() MP_ASSERT_CAPABILITY(mutex)
     {
         assert(m_lock.mutex() == &mutex.m_mutex);
         assert(m_lock);
--- a/src/mp/proxy.cpp
+++ b/src/mp/proxy.cpp
@@ -134,9 +134,14 @@ Connection::~Connection()
     // callbacks. In the clean shutdown case both lists will be empty.
     Lock lock{m_loop->m_mutex};
     while (!m_sync_cleanup_fns.empty()) {
-        CleanupList fn;
-        fn.splice(fn.begin(), m_sync_cleanup_fns, m_sync_cleanup_fns.begin());
-        Unlock(lock, fn.front());
+        // Call the first function in the connection cleanup list. Before
+        // calling it, move it into a temporary variable so outside classes
+        // which registered for disconnect callbacks can check if the
+        // disconnection function is null and know if it's about to be called.
+        auto it{m_sync_cleanup_fns.begin()};
+        std::function<void()> fn = std::move(*it);
+        Unlock(lock, fn);
+        m_sync_cleanup_fns.erase(it);
     }
 }
 
@@ -157,6 +162,12 @@ CleanupIt Connection::addSyncCleanup(std::function<void()> fn)
 void Connection::removeSyncCleanup(CleanupIt it)
 {
     const Lock lock(m_loop->m_mutex);
+    removeSyncCleanup(it, lock);
+}
+
+void Connection::removeSyncCleanup(CleanupIt it, const Lock& lock)
+{
+    lock.assert_locked(m_loop->m_mutex);
     m_sync_cleanup_fns.erase(it);
 }
 
@@ -313,7 +324,7 @@ std::tuple<ConnThread, bool> SetThread(ConnThreads& threads, std::mutex& mutex,
     thread = threads.emplace(
         std::piecewise_construct, std::forward_as_tuple(connection),
         std::forward_as_tuple(make_thread(), connection, /* destroy_connection= */ false)).first;
-    thread->second.setDisconnectCallback([&threads, &mutex, thread] {
+    thread->second.setDisconnectCallback([&threads, &mutex, thread, destroyed = thread->second.m_destroyed] {
         // Note: it is safe to use the `thread` iterator in this cleanup
         // function, because the iterator would only be invalid if the map entry
         // was removed, and if the map entry is removed the ProxyClient<Thread>
@@ -324,6 +335,7 @@ std::tuple<ConnThread, bool> SetThread(ConnThreads& threads, std::mutex& mutex,
         // try to unregister this callback after connection is destroyed.
         // Remove connection pointer about to be destroyed from the map
         const std::unique_lock<std::mutex> lock(mutex);
+        if (*destroyed) return;
         thread->second.m_disconnect_cb.reset();
         threads.erase(thread);
     });
@@ -332,12 +344,18 @@ std::tuple<ConnThread, bool> SetThread(ConnThreads& threads, std::mutex& mutex,
 
 ProxyClient<Thread>::~ProxyClient()
 {
+    // Waiter::m_mutex is already held here so it is safe to access
+    // m_disconnect_cb. EventLoop::m_mutex needs to be locked to access
+    // **m_disconnect_cb, which the Connection destructor will set to null
+    // before it invokes the callback.
+    const Lock lock(m_context.loop->m_mutex);
     // If thread is being destroyed before connection is destroyed, remove the
     // cleanup callback that was registered to handle the connection being
     // destroyed before the thread being destroyed.
-    if (m_disconnect_cb) {
-        m_context.connection->removeSyncCleanup(*m_disconnect_cb);
+    if (m_disconnect_cb && **m_disconnect_cb) {
+        m_context.connection->removeSyncCleanup(*m_disconnect_cb, lock);
     }
+    *m_destroyed = true;
 }
 
 void ProxyClient<Thread>::setDisconnectCallback(const std::function<void()>& fn)

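The unlock-and-invoke ordering in the first hunk can be sketched in isolation. This is a hypothetical, single-threaded minimal model (`MiniConn` and `RunCleanups` are invented names, not the PR's code): the cleanup entry is moved out of the list before the mutex is released, so code holding the lock sees an empty entry while the callback runs, and the list node is only erased after the call returns.

```cpp
#include <cassert>
#include <functional>
#include <list>
#include <mutex>

// Hypothetical minimal model of the Connection cleanup pattern above.
struct MiniConn {
    std::mutex m_mutex;
    std::list<std::function<void()>> m_cleanup_fns;

    void RunCleanups()
    {
        std::unique_lock<std::mutex> lock{m_mutex};
        while (!m_cleanup_fns.empty()) {
            auto it = m_cleanup_fns.begin();
            // Move the function out first; in practice this leaves the list
            // entry empty, which observers holding the lock can test for.
            std::function<void()> fn = std::move(*it);
            lock.unlock();
            fn(); // invoke without holding the mutex
            lock.lock();
            m_cleanup_fns.erase(it); // node stays valid until erased here
        }
    }
};

int run_demo()
{
    MiniConn c;
    int calls = 0;
    c.m_cleanup_fns.emplace_back([&] { ++calls; });
    c.m_cleanup_fns.emplace_back([&] { ++calls; });
    c.RunCleanups();
    return calls;
}
```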
For reference, here is the stack trace from https://cirrus-ci.com/task/5573287230570496, annotated with the relevant code:

trace

WARNING: ThreadSanitizer: data race (pid=8333)
  Write of size 8 at 0x721000003e90 by thread T11 (mutexes: write M0, write M1):
    #0 operator delete(void*, unsigned long) <null> (mptest+0x12b1dc) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #1 void std::__1::__libcpp_operator_delete[abi:ne200100]<std::__1::__list_node<std::__1::function<void ()>, void*>*, unsigned long>(std::__1::__list_node<std::__1::function<void ()>, void*>*, unsigned long) /usr/lib/llvm-20/bin/../include/c++/v1/__new/allocate.h:46:3 (mptest+0x235783) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #2 void std::__1::__libcpp_deallocate[abi:ne200100]<std::__1::__list_node<std::__1::function<void ()>, void*>>(std::__1::__type_identity<std::__1::__list_node<std::__1::function<void ()>, void*>>::type*, std::__1::__element_count, unsigned long) /usr/lib/llvm-20/bin/../include/c++/v1/__new/allocate.h:86:12 (mptest+0x235783)
    #3 std::__1::allocator<std::__1::__list_node<std::__1::function<void ()>, void*>>::deallocate[abi:ne200100](std::__1::__list_node<std::__1::function<void ()>, void*>*, unsigned long) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/allocator.h:120:7 (mptest+0x235783)
    #4 std::__1::allocator_traits<std::__1::allocator<std::__1::__list_node<std::__1::function<void ()>, void*>>>::deallocate[abi:ne200100](std::__1::allocator<std::__1::__list_node<std::__1::function<void ()>, void*>>&, std::__1::__list_node<std::__1::function<void ()>, void*>*, unsigned long) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/allocator_traits.h:302:9 (mptest+0x235783)
    #5 std::__1::__list_imp<std::__1::function<void ()>, std::__1::allocator<std::__1::function<void ()>>>::__delete_node[abi:ne200100](std::__1::__list_node<std::__1::function<void ()>, void*>*) /usr/lib/llvm-20/bin/../include/c++/v1/list:571:5 (mptest+0x235783)
    #6 std::__1::list<std::__1::function<void ()>, std::__1::allocator<std::__1::function<void ()>>>::erase(std::__1::__list_const_iterator<std::__1::function<void ()>, void*>) /usr/lib/llvm-20/bin/../include/c++/v1/list:1342:9 (mptest+0x235783)
    #7 mp::Connection::removeSyncCleanup(std::__1::__list_iterator<std::__1::function<void ()>, void*>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:160:24 (mptest+0x235783)

   157  void Connection::removeSyncCleanup(CleanupIt it)
   158  {
   159      const Lock lock(m_loop->m_mutex);
   160      m_sync_cleanup_fns.erase(it);
   161  }

    #8 mp::ProxyClient<mp::Thread>::~ProxyClient() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:339:31 (mptest+0x235783)

   333  ProxyClient<Thread>::~ProxyClient()
   334  {
   335      // If thread is being destroyed before connection is destroyed, remove the
   336      // cleanup callback that was registered to handle the connection being
   337      // destroyed before the thread being destroyed.
   338      if (m_disconnect_cb) {
   339          m_context.connection->removeSyncCleanup(*m_disconnect_cb);
   340      }
   341  }

    #9 std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>::~pair() /usr/lib/llvm-20/bin/../include/c++/v1/__utility/pair.h:63:29 (mptest+0x1d2e8d) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #10 void std::__1::__destroy_at[abi:ne200100]<std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>, 0>(std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>*) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/construct_at.h:66:11 (mptest+0x1d2e8d)
    #11 void std::__1::allocator_traits<std::__1::allocator<std::__1::__tree_node<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, void*>>>::destroy[abi:ne200100]<std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>, void, 0>(std::__1::allocator<std::__1::__tree_node<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, void*>>&, std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>*) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/allocator_traits.h:329:5 (mptest+0x1d2e8d)
    #12 std::__1::__tree<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::__map_value_compare<mp::Connection*, std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::less<mp::Connection*>, true>, std::__1::allocator<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>>>::erase(std::__1::__tree_const_iterator<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::__tree_node<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, void*>*, long>) /usr/lib/llvm-20/bin/../include/c++/v1/__tree:2047:3 (mptest+0x1d2e8d)
    #13 unsigned long std::__1::__tree<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::__map_value_compare<mp::Connection*, std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::less<mp::Connection*>, true>, std::__1::allocator<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>>>::__erase_unique<mp::Connection*>(mp::Connection* const&) /usr/lib/llvm-20/bin/../include/c++/v1/__tree:2067:3 (mptest+0x1d2e8d)
    #14 std::__1::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::__1::less<mp::Connection*>, std::__1::allocator<std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>>>::erase[abi:ne200100](mp::Connection* const&) /usr/lib/llvm-20/bin/../include/c++/v1/map:1320:79 (mptest+0x205841) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #15 std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()::operator()()::'lambda0'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/type-context.h:103:21 (mptest+0x205841)

   103                      KJ_DEFER(if (erase_thread) {
   104                          std::unique_lock<std::mutex> lock(thread_context.waiter->m_mutex);
   105                          // Call erase here with a Connection* argument instead
   106                          // of an iterator argument, because the `request_thread`
   107                          // iterator may be invalid if the connection is closed
   108                          // during this function call. More specifically, the
   109                          // iterator may be invalid because SetThread adds a
   110                          // cleanup callback to the Connection destructor that
   111                          // erases the thread from the map, and also because the
   112                          // ProxyServer<Thread> destructor calls
   113                          // request_threads.clear().
   114                          request_threads.erase(server.m_context.connection);
   115                      });

    #16 kj::_::Deferred<std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()::operator()()::'lambda0'()>::run() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/common.h:2010:7 (mptest+0x205841)
    #17 kj::_::Deferred<std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()::operator()()::'lambda0'()>::~Deferred() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/common.h:1999:5 (mptest+0x205841)
    #18 std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()::operator()() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/type-context.h:117:17 (mptest+0x205841)
    #19 kj::Function<void ()>::Impl<std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()>::operator()() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/function.h:142:14 (mptest+0x20552f) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #20 kj::Function<void ()>::operator()() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/function.h:119:12 (mptest+0x154867) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #21 void mp::Unlock<std::__1::unique_lock<std::__1::mutex>, kj::Function<void ()>&>(std::__1::unique_lock<std::__1::mutex>&, kj::Function<void ()>&) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/util.h:198:5 (mptest+0x154867)

   194  template <typename Lock, typename Callback>
   195  void Unlock(Lock& lock, Callback&& callback)
   196  {
   197      const UnlockGuard<Lock> unlock(lock);
   198      callback();
   199  }

    #22 void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'())::'lambda'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/include/mp/proxy-io.h:296:17 (mptest+0x237d69) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)

   284      template <class Predicate>
   285      void wait(std::unique_lock<std::mutex>& lock, Predicate pred)
   286      {
   287          m_cv.wait(lock, [&] {
   288              // Important for this to be "while (m_fn)", not "if (m_fn)" to avoid
   289              // a lost-wakeup bug. A new m_fn and m_cv notification might be sent
   290              // after the fn() call and before the lock.lock() call in this loop
   291              // in the case where a capnp response is sent and a brand new
   292              // request is immediately received.
   293              while (m_fn) {
   294                  auto fn = std::move(*m_fn);
   295                  m_fn.reset();
   296                  Unlock(lock, fn);
   297              }
   298              const bool done = pred();
   299              return done;
   300          });
   301      }

    #23 void std::__1::condition_variable::wait<void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'())::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()) /usr/lib/llvm-20/bin/../include/c++/v1/__condition_variable/condition_variable.h:146:11 (mptest+0x237d69)
    #24 void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/include/mp/proxy-io.h:287:14 (mptest+0x237d69)
    #25 mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:404:34 (mptest+0x237d69)

   404          g_thread_context.waiter->wait(lock, [] { return !g_thread_context.waiter; });

    #26 decltype(std::declval<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>()()) std::__1::__invoke[abi:ne200100]<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>(mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0&&) /usr/lib/llvm-20/bin/../include/c++/v1/__type_traits/invoke.h:179:25 (mptest+0x237d69)
    #27 void std::__1::__thread_execute[abi:ne200100]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>&, std::__1::__tuple_indices<...>) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:199:3 (mptest+0x237d69)
    #28 void* std::__1::__thread_proxy[abi:ne200100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>>(void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:208:3 (mptest+0x237d69)

  Previous read of size 8 at 0x721000003e90 by thread T10:
    #0 std::__1::__function::__value_func<void ()>::operator()[abi:ne200100]() const /usr/lib/llvm-20/bin/../include/c++/v1/__functional/function.h:436:12 (mptest+0x238638) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #1 std::__1::function<void ()>::operator()() const /usr/lib/llvm-20/bin/../include/c++/v1/__functional/function.h:995:10 (mptest+0x238638)
    #2 void mp::Unlock<mp::Lock, std::__1::function<void ()>&>(mp::Lock&, std::__1::function<void ()>&) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/include/mp/util.h:198:5 (mptest+0x238638)

   194  template <typename Lock, typename Callback>
   195  void Unlock(Lock& lock, Callback&& callback)
   196  {
   197      const UnlockGuard<Lock> unlock(lock);
   198      callback();
   199  }

    #3 mp::Connection::~Connection() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:139:9 (mptest+0x232bcb) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)

   135      Lock lock{m_loop->m_mutex};
   136      while (!m_sync_cleanup_fns.empty()) {
   137          CleanupList fn;
   138          fn.splice(fn.begin(), m_sync_cleanup_fns, m_sync_cleanup_fns.begin());
   139          Unlock(lock, fn.front());
   140      }

    #4 std::__1::default_delete<mp::Connection>::operator()[abi:ne200100](mp::Connection*) const /usr/lib/llvm-20/bin/../include/c++/v1/__memory/unique_ptr.h:78:5 (mptest+0x13742b) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #5 std::__1::unique_ptr<mp::Connection, std::__1::default_delete<mp::Connection>>::reset[abi:ne200100](mp::Connection*) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/unique_ptr.h:300:7 (mptest+0x13742b)
    #6 mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda0'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:79:71 (mptest+0x13742b)

    79                server_connection->onDisconnect([&] { server_connection.reset(); });

    #7 kj::_::Void kj::_::MaybeVoidCaller<kj::_::Void, kj::_::Void>::apply<mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda0'()>(mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda0'()&, kj::_::Void&&) /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/async-prelude.h:195:5 (mptest+0x13742b)
    #8 kj::_::TransformPromiseNode<kj::_::Void, kj::_::Void, mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda0'(), kj::_::PropagateException>::getImpl(kj::_::ExceptionOrValue&) /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/async-inl.h:739:31 (mptest+0x13742b)
    #9 kj::_::TransformPromiseNodeBase::get(kj::_::ExceptionOrValue&) <null> (mptest+0x2df64c) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #10 mp::EventLoop::loop() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:231:68 (mptest+0x234699) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)

   231          const size_t read_bytes = wait_stream->read(&buffer, 0, 1).wait(m_io_context.waitScope);

    #11 mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:92:20 (mptest+0x133819) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #12 decltype(std::declval<mp::test::TestSetup::TestSetup(bool)::'lambda'()>()()) std::__1::__invoke[abi:ne200100]<mp::test::TestSetup::TestSetup(bool)::'lambda'()>(mp::test::TestSetup::TestSetup(bool)::'lambda'()&&) /usr/lib/llvm-20/bin/../include/c++/v1/__type_traits/invoke.h:179:25 (mptest+0x133119) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #13 void std::__1::__thread_execute[abi:ne200100]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>&, std::__1::__tuple_indices<...>) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:199:3 (mptest+0x133119)
    #14 void* std::__1::__thread_proxy[abi:ne200100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>>(void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:208:3 (mptest+0x133119)

  Mutex M0 (0x721c00003790) created at:
    #0 pthread_mutex_lock <null> (mptest+0xa844b) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #1 std::__1::mutex::lock() <null> (libc++.so.1.0.20+0x713cc) (BuildId: 30ef7da36db6fb0c014ee96603f7649f755cb793)

  Mutex M1 (0x7fbbe45fd4e0) created at:
    #0 pthread_mutex_lock <null> (mptest+0xa844b) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #1 std::__1::mutex::lock() <null> (libc++.so.1.0.20+0x713cc) (BuildId: 30ef7da36db6fb0c014ee96603f7649f755cb793)
    #2 mp::Connection::Connection(mp::EventLoop&, kj::Own<kj::AsyncIoStream, std::nullptr_t>&&, std::__1::function<capnp::Capability::Client (mp::Connection&)> const&) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/proxy-io.h:330:11 (mptest+0x134cfb) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #3 std::__1::unique_ptr<mp::Connection, std::__1::default_delete<mp::Connection>> std::__1::make_unique[abi:ne200100]<mp::Connection, mp::EventLoop&, kj::Own<kj::AsyncIoStream, std::nullptr_t>, mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda'(mp::Connection&), 0>(mp::EventLoop&, kj::Own<kj::AsyncIoStream, std::nullptr_t>&&, mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda'(mp::Connection&)&&) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/unique_ptr.h:767:30 (mptest+0x1333f9) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #4 mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:70:19 (mptest+0x1333f9)
    #5 decltype(std::declval<mp::test::TestSetup::TestSetup(bool)::'lambda'()>()()) std::__1::__invoke[abi:ne200100]<mp::test::TestSetup::TestSetup(bool)::'lambda'()>(mp::test::TestSetup::TestSetup(bool)::'lambda'()&&) /usr/lib/llvm-20/bin/../include/c++/v1/__type_traits/invoke.h:179:25 (mptest+0x133119) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #6 void std::__1::__thread_execute[abi:ne200100]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>&, std::__1::__tuple_indices<...>) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:199:3 (mptest+0x133119)
    #7 void* std::__1::__thread_proxy[abi:ne200100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>>(void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:208:3 (mptest+0x133119)

  Thread T11 (tid=8378, running) created by thread T10 at:
    #0 pthread_create <null> (mptest+0xa673e) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #1 std::__1::__libcpp_thread_create[abi:ne200100](unsigned long*, void* (*)(void*), void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/support/pthread.h:182:10 (mptest+0x236338) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #2 std::__1::thread::thread<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0, 0>(mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0&&) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:218:14 (mptest+0x236338)
    #3 mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:397:17 (mptest+0x236338)
    #4 mp::ThreadMap::Server::dispatchCallInternal(unsigned short, capnp::CallContext<capnp::AnyPointer, capnp::AnyPointer>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/include/mp/proxy.capnp.c++:602:9 (mptest+0x231af8) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #5 mp::ThreadMap::Server::dispatchCall(unsigned long, unsigned short, capnp::CallContext<capnp::AnyPointer, capnp::AnyPointer>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/include/mp/proxy.capnp.c++:591:14 (mptest+0x231af8)
    #6 virtual thunk to mp::ThreadMap::Server::dispatchCall(unsigned long, unsigned short, capnp::CallContext<capnp::AnyPointer, capnp::AnyPointer>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/include/mp/proxy.capnp.c++ (mptest+0x231af8)
    #7 capnp::LocalClient::callInternal(unsigned long, unsigned short, capnp::CallContextHook&) <null> (mptest+0x25031c) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #8 mp::EventLoop::loop() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:231:68 (mptest+0x234699) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #9 mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:92:20 (mptest+0x133819) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #10 decltype(std::declval<mp::test::TestSetup::TestSetup(bool)::'lambda'()>()()) std::__1::__invoke[abi:ne200100]<mp::test::TestSetup::TestSetup(bool)::'lambda'()>(mp::test::TestSetup::TestSetup(bool)::'lambda'()&&) /usr/lib/llvm-20/bin/../include/c++/v1/__type_traits/invoke.h:179:25 (mptest+0x133119) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #11 void std::__1::__thread_execute[abi:ne200100]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>&, std::__1::__tuple_indices<...>) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:199:3 (mptest+0x133119)
    #12 void* std::__1::__thread_proxy[abi:ne200100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>>(void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:208:3 (mptest+0x133119)

  Thread T10 (tid=8377, running) created by main thread at:
    #0 pthread_create <null> (mptest+0xa673e) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #1 std::__1::__libcpp_thread_create[abi:ne200100](unsigned long*, void* (*)(void*), void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/support/pthread.h:182:10 (mptest+0x132c30) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #2 std::__1::thread::thread<mp::test::TestSetup::TestSetup(bool)::'lambda'(), 0>(mp::test::TestSetup::TestSetup(bool)::'lambda'()&&) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:218:14 (mptest+0x132c30)
    #3 mp::test::TestSetup::TestSetup(bool) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:62:11 (mptest+0x12f8dd) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #4 mp::test::TestCase251::run() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:272:15 (mptest+0x12e95e) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #5 kj::Maybe<kj::Exception> kj::runCatchingExceptions<kj::TestRunner::run()::'lambda'()>(kj::TestRunner::run()::'lambda'()&&) <null> (mptest+0x23e7c0) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)

SUMMARY: ThreadSanitizer: data race (/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/mptest+0x12b1dc) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417) in operator delete(void*, unsigned long)
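The diff above guards against this destruction race with a shared "destroyed" flag captured by the disconnect callback. A single-threaded sketch of that flag pattern (`MiniThread` and the demo function are invented names for illustration; the real code additionally serializes access with EventLoop::m_mutex):

```cpp
#include <cassert>
#include <functional>
#include <memory>

// Hypothetical sketch: the callback captures a shared_ptr<bool> owned by the
// object, so it can detect that the object was destroyed first and bail out
// instead of touching a dangling iterator.
struct MiniThread {
    std::shared_ptr<bool> m_destroyed{std::make_shared<bool>(false)};
    ~MiniThread() { *m_destroyed = true; }
};

int run_flag_demo()
{
    int touched = 0;
    std::function<void()> cb;
    {
        MiniThread t;
        cb = [destroyed = t.m_destroyed, &touched] {
            if (*destroyed) return; // object gone; skip cleanup
            ++touched;
        };
        cb(); // object alive: callback runs
    }
    cb();     // object destroyed: callback is a no-op
    return touched;
}
```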

@Sjors Sjors force-pushed the 2025/06/ipc-yea-plus branch from 698fa06 to 444c86f Compare July 3, 2025 06:39
@Sjors

Sjors commented Jul 3, 2025

I'll include the temporary version of your fix here for now so we can keep an eye on it, see 0e758cc.

Sjors and others added 6 commits July 3, 2025 12:08
This causes IPC binaries (bitcoin-node, bitcoin-gui) to be included
in releases.

The effect on CI is that this causes more depends builds to build IPC
binaries, but still the only build running functional tests with them
is the i686_multiprocess one.

Except for Windows.
The bitcoin-node binary is built on all platforms that have
multiprocess enabled, but for functional tests it's only used in
the CentOS native (depends) job. The next commit will also add a
non-depends job.
Install capnp on non-depends CI jobs.

Use the bitcoin-node binary in the macOS native non-depends job.

Co-authored-by: Ryan Ofsky <[email protected]>
@Sjors Sjors force-pushed the 2025/02/ipc-yea branch from 314b428 to 1d9ba61 Compare July 3, 2025 10:10
@Sjors Sjors force-pushed the 2025/06/ipc-yea-plus branch from 444c86f to 9ca3e6b Compare July 3, 2025 10:15
@Sjors

Sjors commented Jul 3, 2025

CI passed for 444c86f. I pushed again to include the rebase of bitcoin#31802.

ryanofsky added a commit to ryanofsky/libmultiprocess that referenced this pull request Jul 10, 2025
fix posted Sjors/bitcoin#90 (comment)

CI error this is attempting to fix (does not work, because nothing prevents the ~ProxyClient<Thread> object from being destroyed during the m_disconnect_cb call if m_disconnect_cb is interrupted before it successfully acquires Waiter::m_mutex, and the mutex could be deleted while it is waiting or before it starts waiting):

WARNING: ThreadSanitizer: data race (pid=8333)
  Write of size 8 at 0x721000003e90 by thread T11 (mutexes: write M0, write M1):
    #0 operator delete(void*, unsigned long) <null> (mptest+0x12b1dc) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #1 void std::__1::__libcpp_operator_delete[abi:ne200100]<std::__1::__list_node<std::__1::function<void ()>, void*>*, unsigned long>(std::__1::__list_node<std::__1::function<void ()>, void*>*, unsigned long) /usr/lib/llvm-20/bin/../include/c++/v1/__new/allocate.h:46:3 (mptest+0x235783) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #2 void std::__1::__libcpp_deallocate[abi:ne200100]<std::__1::__list_node<std::__1::function<void ()>, void*>>(std::__1::__type_identity<std::__1::__list_node<std::__1::function<void ()>, void*>>::type*, std::__1::__element_count, unsigned long) /usr/lib/llvm-20/bin/../include/c++/v1/__new/allocate.h:86:12 (mptest+0x235783)
    #3 std::__1::allocator<std::__1::__list_node<std::__1::function<void ()>, void*>>::deallocate[abi:ne200100](std::__1::__list_node<std::__1::function<void ()>, void*>*, unsigned long) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/allocator.h:120:7 (mptest+0x235783)
    #4 std::__1::allocator_traits<std::__1::allocator<std::__1::__list_node<std::__1::function<void ()>, void*>>>::deallocate[abi:ne200100](std::__1::allocator<std::__1::__list_node<std::__1::function<void ()>, void*>>&, std::__1::__list_node<std::__1::function<void ()>, void*>*, unsigned long) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/allocator_traits.h:302:9 (mptest+0x235783)
    #5 std::__1::__list_imp<std::__1::function<void ()>, std::__1::allocator<std::__1::function<void ()>>>::__delete_node[abi:ne200100](std::__1::__list_node<std::__1::function<void ()>, void*>*) /usr/lib/llvm-20/bin/../include/c++/v1/list:571:5 (mptest+0x235783)
    #6 std::__1::list<std::__1::function<void ()>, std::__1::allocator<std::__1::function<void ()>>>::erase(std::__1::__list_const_iterator<std::__1::function<void ()>, void*>) /usr/lib/llvm-20/bin/../include/c++/v1/list:1342:9 (mptest+0x235783)
    #7 mp::Connection::removeSyncCleanup(std::__1::__list_iterator<std::__1::function<void ()>, void*>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:160:24 (mptest+0x235783)

   157  void Connection::removeSyncCleanup(CleanupIt it)
   158  {
   159      const Lock lock(m_loop->m_mutex);
   160      m_sync_cleanup_fns.erase(it);
   161  }

    #8 mp::ProxyClient<mp::Thread>::~ProxyClient() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:339:31 (mptest+0x235783)

   333  ProxyClient<Thread>::~ProxyClient()
   334  {
   335      // If thread is being destroyed before connection is destroyed, remove the
   336      // cleanup callback that was registered to handle the connection being
   337      // destroyed before the thread being destroyed.
   338      if (m_disconnect_cb) {
   339          m_context.connection->removeSyncCleanup(*m_disconnect_cb);
   340      }
   341  }

    #9 std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>::~pair() /usr/lib/llvm-20/bin/../include/c++/v1/__utility/pair.h:63:29 (mptest+0x1d2e8d) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #10 void std::__1::__destroy_at[abi:ne200100]<std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>, 0>(std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>*) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/construct_at.h:66:11 (mptest+0x1d2e8d)
    #11 void std::__1::allocator_traits<std::__1::allocator<std::__1::__tree_node<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, void*>>>::destroy[abi:ne200100]<std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>, void, 0>(std::__1::allocator<std::__1::__tree_node<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, void*>>&, std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>*) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/allocator_traits.h:329:5 (mptest+0x1d2e8d)
    #12 std::__1::__tree<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::__map_value_compare<mp::Connection*, std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::less<mp::Connection*>, true>, std::__1::allocator<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>>>::erase(std::__1::__tree_const_iterator<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::__tree_node<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, void*>*, long>) /usr/lib/llvm-20/bin/../include/c++/v1/__tree:2047:3 (mptest+0x1d2e8d)
    #13 unsigned long std::__1::__tree<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::__map_value_compare<mp::Connection*, std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::less<mp::Connection*>, true>, std::__1::allocator<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>>>::__erase_unique<mp::Connection*>(mp::Connection* const&) /usr/lib/llvm-20/bin/../include/c++/v1/__tree:2067:3 (mptest+0x1d2e8d)
    #14 std::__1::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::__1::less<mp::Connection*>, std::__1::allocator<std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>>>::erase[abi:ne200100](mp::Connection* const&) /usr/lib/llvm-20/bin/../include/c++/v1/map:1320:79 (mptest+0x205841) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #15 std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()::operator()()::'lambda0'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/type-context.h:103:21 (mptest+0x205841)

   103                      KJ_DEFER(if (erase_thread) {
   104                          std::unique_lock<std::mutex> lock(thread_context.waiter->m_mutex);
   105                          // Call erase here with a Connection* argument instead
   106                          // of an iterator argument, because the `request_thread`
   107                          // iterator may be invalid if the connection is closed
   108                          // during this function call. More specifically, the
   109                          // iterator may be invalid because SetThread adds a
   110                          // cleanup callback to the Connection destructor that
   111                          // erases the thread from the map, and also because the
   112                          // ProxyServer<Thread> destructor calls
   113                          // request_threads.clear().
   114                          request_threads.erase(server.m_context.connection);
   115                      });

    #16 kj::_::Deferred<std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()::operator()()::'lambda0'()>::run() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/common.h:2010:7 (mptest+0x205841)
    #17 kj::_::Deferred<std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()::operator()()::'lambda0'()>::~Deferred() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/common.h:1999:5 (mptest+0x205841)
    #18 std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()::operator()() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/type-context.h:117:17 (mptest+0x205841)
    #19 kj::Function<void ()>::Impl<std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()>::operator()() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/function.h:142:14 (mptest+0x20552f) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #20 kj::Function<void ()>::operator()() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/function.h:119:12 (mptest+0x154867) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #21 void mp::Unlock<std::__1::unique_lock<std::__1::mutex>, kj::Function<void ()>&>(std::__1::unique_lock<std::__1::mutex>&, kj::Function<void ()>&) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/util.h:198:5 (mptest+0x154867)

   194  template <typename Lock, typename Callback>
   195  void Unlock(Lock& lock, Callback&& callback)
   196  {
   197      const UnlockGuard<Lock> unlock(lock);
   198      callback();
   199  }

    #22 void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'())::'lambda'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/include/mp/proxy-io.h:296:17 (mptest+0x237d69) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)

   284      template <class Predicate>
   285      void wait(std::unique_lock<std::mutex>& lock, Predicate pred)
   286      {
   287          m_cv.wait(lock, [&] {
   288              // Important for this to be "while (m_fn)", not "if (m_fn)" to avoid
   289              // a lost-wakeup bug. A new m_fn and m_cv notification might be sent
   290              // after the fn() call and before the lock.lock() call in this loop
   291              // in the case where a capnp response is sent and a brand new
   292              // request is immediately received.
   293              while (m_fn) {
   294                  auto fn = std::move(*m_fn);
   295                  m_fn.reset();
   296                  Unlock(lock, fn);
   297              }
   298              const bool done = pred();
   299              return done;
   300          });
   301      }

    #23 void std::__1::condition_variable::wait<void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'())::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()) /usr/lib/llvm-20/bin/../include/c++/v1/__condition_variable/condition_variable.h:146:11 (mptest+0x237d69)
    #24 void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/include/mp/proxy-io.h:287:14 (mptest+0x237d69)
    #25 mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:404:34 (mptest+0x237d69)

   404          g_thread_context.waiter->wait(lock, [] { return !g_thread_context.waiter; });

    #26 decltype(std::declval<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>()()) std::__1::__invoke[abi:ne200100]<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>(mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0&&) /usr/lib/llvm-20/bin/../include/c++/v1/__type_traits/invoke.h:179:25 (mptest+0x237d69)
    #27 void std::__1::__thread_execute[abi:ne200100]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>&, std::__1::__tuple_indices<...>) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:199:3 (mptest+0x237d69)
    #28 void* std::__1::__thread_proxy[abi:ne200100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>>(void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:208:3 (mptest+0x237d69)

  Previous read of size 8 at 0x721000003e90 by thread T10:
    #0 std::__1::__function::__value_func<void ()>::operator()[abi:ne200100]() const /usr/lib/llvm-20/bin/../include/c++/v1/__functional/function.h:436:12 (mptest+0x238638) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #1 std::__1::function<void ()>::operator()() const /usr/lib/llvm-20/bin/../include/c++/v1/__functional/function.h:995:10 (mptest+0x238638)
    #2 void mp::Unlock<mp::Lock, std::__1::function<void ()>&>(mp::Lock&, std::__1::function<void ()>&) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/include/mp/util.h:198:5 (mptest+0x238638)

   194  template <typename Lock, typename Callback>
   195  void Unlock(Lock& lock, Callback&& callback)
   196  {
   197      const UnlockGuard<Lock> unlock(lock);
   198      callback();
   199  }

    #3 mp::Connection::~Connection() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:139:9 (mptest+0x232bcb) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)

   135      Lock lock{m_loop->m_mutex};
   136      while (!m_sync_cleanup_fns.empty()) {
   137          CleanupList fn;
   138          fn.splice(fn.begin(), m_sync_cleanup_fns, m_sync_cleanup_fns.begin());
   139          Unlock(lock, fn.front());
   140      }

    #4 std::__1::default_delete<mp::Connection>::operator()[abi:ne200100](mp::Connection*) const /usr/lib/llvm-20/bin/../include/c++/v1/__memory/unique_ptr.h:78:5 (mptest+0x13742b) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #5 std::__1::unique_ptr<mp::Connection, std::__1::default_delete<mp::Connection>>::reset[abi:ne200100](mp::Connection*) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/unique_ptr.h:300:7 (mptest+0x13742b)
    #6 mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda0'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:79:71 (mptest+0x13742b)

    79                server_connection->onDisconnect([&] { server_connection.reset(); });

    #7 kj::_::Void kj::_::MaybeVoidCaller<kj::_::Void, kj::_::Void>::apply<mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda0'()>(mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda0'()&, kj::_::Void&&) /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/async-prelude.h:195:5 (mptest+0x13742b)
    #8 kj::_::TransformPromiseNode<kj::_::Void, kj::_::Void, mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda0'(), kj::_::PropagateException>::getImpl(kj::_::ExceptionOrValue&) /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/async-inl.h:739:31 (mptest+0x13742b)
    #9 kj::_::TransformPromiseNodeBase::get(kj::_::ExceptionOrValue&) <null> (mptest+0x2df64c) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #10 mp::EventLoop::loop() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:231:68 (mptest+0x234699) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)

   231          const size_t read_bytes = wait_stream->read(&buffer, 0, 1).wait(m_io_context.waitScope);

    #11 mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:92:20 (mptest+0x133819) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #12 decltype(std::declval<mp::test::TestSetup::TestSetup(bool)::'lambda'()>()()) std::__1::__invoke[abi:ne200100]<mp::test::TestSetup::TestSetup(bool)::'lambda'()>(mp::test::TestSetup::TestSetup(bool)::'lambda'()&&) /usr/lib/llvm-20/bin/../include/c++/v1/__type_traits/invoke.h:179:25 (mptest+0x133119) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #13 void std::__1::__thread_execute[abi:ne200100]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>&, std::__1::__tuple_indices<...>) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:199:3 (mptest+0x133119)
    #14 void* std::__1::__thread_proxy[abi:ne200100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>>(void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:208:3 (mptest+0x133119)

  Mutex M0 (0x721c00003790) created at:
    #0 pthread_mutex_lock <null> (mptest+0xa844b) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #1 std::__1::mutex::lock() <null> (libc++.so.1.0.20+0x713cc) (BuildId: 30ef7da36db6fb0c014ee96603f7649f755cb793)

  Mutex M1 (0x7fbbe45fd4e0) created at:
    #0 pthread_mutex_lock <null> (mptest+0xa844b) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #1 std::__1::mutex::lock() <null> (libc++.so.1.0.20+0x713cc) (BuildId: 30ef7da36db6fb0c014ee96603f7649f755cb793)
    #2 mp::Connection::Connection(mp::EventLoop&, kj::Own<kj::AsyncIoStream, std::nullptr_t>&&, std::__1::function<capnp::Capability::Client (mp::Connection&)> const&) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/proxy-io.h:330:11 (mptest+0x134cfb) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #3 std::__1::unique_ptr<mp::Connection, std::__1::default_delete<mp::Connection>> std::__1::make_unique[abi:ne200100]<mp::Connection, mp::EventLoop&, kj::Own<kj::AsyncIoStream, std::nullptr_t>, mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda'(mp::Connection&), 0>(mp::EventLoop&, kj::Own<kj::AsyncIoStream, std::nullptr_t>&&, mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda'(mp::Connection&)&&) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/unique_ptr.h:767:30 (mptest+0x1333f9) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #4 mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:70:19 (mptest+0x1333f9)
    #5 decltype(std::declval<mp::test::TestSetup::TestSetup(bool)::'lambda'()>()()) std::__1::__invoke[abi:ne200100]<mp::test::TestSetup::TestSetup(bool)::'lambda'()>(mp::test::TestSetup::TestSetup(bool)::'lambda'()&&) /usr/lib/llvm-20/bin/../include/c++/v1/__type_traits/invoke.h:179:25 (mptest+0x133119) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #6 void std::__1::__thread_execute[abi:ne200100]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>&, std::__1::__tuple_indices<...>) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:199:3 (mptest+0x133119)
    #7 void* std::__1::__thread_proxy[abi:ne200100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>>(void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:208:3 (mptest+0x133119)

  Thread T11 (tid=8378, running) created by thread T10 at:
    #0 pthread_create <null> (mptest+0xa673e) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #1 std::__1::__libcpp_thread_create[abi:ne200100](unsigned long*, void* (*)(void*), void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/support/pthread.h:182:10 (mptest+0x236338) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #2 std::__1::thread::thread<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0, 0>(mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0&&) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:218:14 (mptest+0x236338)
    #3 mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:397:17 (mptest+0x236338)
    #4 mp::ThreadMap::Server::dispatchCallInternal(unsigned short, capnp::CallContext<capnp::AnyPointer, capnp::AnyPointer>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/include/mp/proxy.capnp.c++:602:9 (mptest+0x231af8) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #5 mp::ThreadMap::Server::dispatchCall(unsigned long, unsigned short, capnp::CallContext<capnp::AnyPointer, capnp::AnyPointer>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/include/mp/proxy.capnp.c++:591:14 (mptest+0x231af8)
    #6 virtual thunk to mp::ThreadMap::Server::dispatchCall(unsigned long, unsigned short, capnp::CallContext<capnp::AnyPointer, capnp::AnyPointer>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/include/mp/proxy.capnp.c++ (mptest+0x231af8)
    #7 capnp::LocalClient::callInternal(unsigned long, unsigned short, capnp::CallContextHook&) <null> (mptest+0x25031c) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #8 mp::EventLoop::loop() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:231:68 (mptest+0x234699) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #9 mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:92:20 (mptest+0x133819) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #10 decltype(std::declval<mp::test::TestSetup::TestSetup(bool)::'lambda'()>()()) std::__1::__invoke[abi:ne200100]<mp::test::TestSetup::TestSetup(bool)::'lambda'()>(mp::test::TestSetup::TestSetup(bool)::'lambda'()&&) /usr/lib/llvm-20/bin/../include/c++/v1/__type_traits/invoke.h:179:25 (mptest+0x133119) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #11 void std::__1::__thread_execute[abi:ne200100]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>&, std::__1::__tuple_indices<...>) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:199:3 (mptest+0x133119)
    #12 void* std::__1::__thread_proxy[abi:ne200100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>>(void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:208:3 (mptest+0x133119)

  Thread T10 (tid=8377, running) created by main thread at:
    #0 pthread_create <null> (mptest+0xa673e) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #1 std::__1::__libcpp_thread_create[abi:ne200100](unsigned long*, void* (*)(void*), void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/support/pthread.h:182:10 (mptest+0x132c30) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #2 std::__1::thread::thread<mp::test::TestSetup::TestSetup(bool)::'lambda'(), 0>(mp::test::TestSetup::TestSetup(bool)::'lambda'()&&) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:218:14 (mptest+0x132c30)
    #3 mp::test::TestSetup::TestSetup(bool) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:62:11 (mptest+0x12f8dd) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #4 mp::test::TestCase251::run() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:272:15 (mptest+0x12e95e) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    #5 kj::Maybe<kj::Exception> kj::runCatchingExceptions<kj::TestRunner::run()::'lambda'()>(kj::TestRunner::run()::'lambda'()&&) <null> (mptest+0x23e7c0) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)

SUMMARY: ThreadSanitizer: data race (/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/mptest+0x12b1dc) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417) in operator delete(void*, unsigned long)
@Sjors Sjors force-pushed the 2025/02/ipc-yea branch 3 times, most recently from 939d6f8 to 082b416 Compare July 24, 2025 14:26