Description
Hello!
I'm currently trying to use honggfuzz in persistent mode to fuzz a network interface.
Here is my harness as of now (the link refers to a specific commit): https://github.com/Devolutions/devolutions-gateway/blob/bf66f15933d9571574f2db8e65fd1c1019025551/fuzz/server/fuzz_targets/listeners_raw.rs
use honggfuzz::fuzz;
use server_fuzz::init;
use server_fuzz::oracles::raw::fuzz_listener;

fn main() {
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap();

    let listeners = rt.block_on(init());

    // At this point, sockets are bound and we can send data safely
    loop {
        fuzz!(|data: &[u8]| {
            for l in &listeners {
                fuzz_listener(data, l.addr().port());
                let _ = rt.block_on(l.handle_one());
            }
        })
    }
}
Note: the issue is the same regardless of the kind of tokio runtime used (new_current_thread() and new_multi_thread() both trigger the same behavior).
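For clarity, here is the shape of the harness with honggfuzz and tokio stripped away, as a std-only sketch. The `handle_one` helper below is a hypothetical stand-in of my own: it uses a plain UDP send/receive to mimic `fuzz_listener(data, port)` followed by `l.handle_one()`, and glosses over the actual transport and async handling in the real code.

```rust
use std::net::UdpSocket;

// Hypothetical std-only stand-in: push one input through a bound socket,
// mimicking fuzz_listener(data, port) followed by l.handle_one().
fn handle_one(listener: &UdpSocket, client: &UdpSocket, data: &[u8]) -> std::io::Result<Vec<u8>> {
    let addr = listener.local_addr()?;
    client.send_to(data, addr)?; // fuzz_listener: send the fuzz input to the listener's port
    let mut buf = [0u8; 65536];
    let (n, _) = listener.recv_from(&mut buf)?; // handle_one: process a single datagram
    Ok(buf[..n].to_vec())
}

fn main() -> std::io::Result<()> {
    // Set up once, before the fuzz loop -- mirrors rt.block_on(init()).
    let listener = UdpSocket::bind("127.0.0.1:0")?;
    let client = UdpSocket::bind("127.0.0.1:0")?;

    // In the real harness, this loop body lives inside fuzz!(|data| ...).
    for data in [&b"hello"[..], b"fuzz-input"] {
        let echoed = handle_one(&listener, &client, data)?;
        assert_eq!(echoed, data);
    }
    Ok(())
}
```

The important part is the ordering: the sockets are bound exactly once before the loop, and every fuzz iteration only sends data and drives one receive.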
I'm running the fuzzing procedure with the following command:
$ RUSTFLAGS="-Z new-llvm-pass-manager=no -Z sanitizer=address" HFUZZ_RUN_ARGS="-t 10 -n 4 --tmout_sigvtalrm" cargo +nightly hfuzz run listeners_raw
(-Z new-llvm-pass-manager=no is required because of #61.)
To test things out, I introduced a panic on the listener side that is triggered by a specific byte pattern. This causes the program to crash quickly:
Sz:1566 Tm:252us (i/b/h/e/p/c) New:0/0/0/0/0/1, Cur:0/0/0/0/0/17
Sz:268 Tm:425us (i/b/h/e/p/c) New:0/0/0/0/0/2, Cur:0/0/0/0/0/3
Sz:264 Tm:2,827us (i/b/h/e/p/c) New:0/0/0/0/0/2, Cur:0/0/0/0/0/29
Crash (dup): 'hfuzz_workspace/listeners_raw/SIGABRT.PC.7ffff794b24c.STACK.f05f9f061.CODE.-6.ADDR.0.INSTR.mov____%eax,%ebp.fuzz' already exists, skipping
Sz:266 Tm:374us (i/b/h/e/p/c) New:0/0/0/3/0/4, Cur:0/0/0/3/0/2
Sz:452 Tm:918us (i/b/h/e/p/c) New:0/0/0/0/0/2, Cur:0/0/0/0/0/12
Crash (dup): 'hfuzz_workspace/listeners_raw/SIGABRT.PC.7ffff794b24c.STACK.f05f9f061.CODE.-6.ADDR.0.INSTR.mov____%eax,%ebp.fuzz' already exists, skipping
Crash (dup): 'hfuzz_workspace/listeners_raw/SIGABRT.PC.7ffff794b24c.STACK.f05f9f061.CODE.-6.ADDR.0.INSTR.mov____%eax,%ebp.fuzz' already exists, skipping
Sz:271 Tm:516us (i/b/h/e/p/c) New:0/0/0/1/0/59, Cur:0/0/0/3/0/13
Sz:5026 Tm:346us (i/b/h/e/p/c) New:0/0/0/0/0/1, Cur:0/0/0/0/0/1
Sz:78 Tm:543us (i/b/h/e/p/c) New:0/0/0/2/0/19, Cur:0/0/0/2/0/23
Sz:149 Tm:401us (i/b/h/e/p/c) New:0/0/0/0/0/1, Cur:0/0/0/0/0/1
Sz:269 Tm:1,005us (i/b/h/e/p/c) New:0/0/0/0/0/4, Cur:0/0/0/0/0/11
Sz:372 Tm:665us (i/b/h/e/p/c) New:0/0/0/0/0/3, Cur:0/0/0/0/0/38
Sz:156 Tm:547us (i/b/h/e/p/c) New:0/0/0/0/0/1, Cur:0/0/0/0/0/16
Sz:8192 Tm:568us (i/b/h/e/p/c) New:0/0/0/0/0/1, Cur:0/0/0/0/0/9
Sz:129 Tm:559us (i/b/h/e/p/c) New:0/0/0/0/0/2, Cur:0/0/0/0/0/5
Sz:279 Tm:510us (i/b/h/e/p/c) New:0/0/0/0/0/1, Cur:0/0/0/0/0/16
Sz:264 Tm:548us (i/b/h/e/p/c) New:0/0/0/0/0/2, Cur:0/0/0/0/0/51
Sz:387 Tm:591us (i/b/h/e/p/c) New:0/0/0/0/0/1, Cur:0/0/0/0/0/5
Sz:164 Tm:470us (i/b/h/e/p/c) New:0/0/0/0/0/1, Cur:0/0/0/0/0/16
Sz:582 Tm:485us (i/b/h/e/p/c) New:0/0/0/0/0/2, Cur:0/0/0/0/0/5
Sz:378 Tm:508us (i/b/h/e/p/c) New:0/0/0/1/0/0, Cur:0/0/0/1/0/8
Sz:82 Tm:315us (i/b/h/e/p/c) New:0/0/0/0/0/1, Cur:0/0/0/0/0/12
Sz:273 Tm:654us (i/b/h/e/p/c) New:0/0/0/0/0/2, Cur:0/0/0/0/0/16
Sz:164 Tm:643us (i/b/h/e/p/c) New:0/0/0/0/0/1, Cur:0/0/0/0/0/16
Sz:136 Tm:573us (i/b/h/e/p/c) New:0/0/0/0/0/3, Cur:0/0/0/0/0/7
Crash (dup): 'hfuzz_workspace/listeners_raw/SIGABRT.PC.7ffff794b24c.STACK.f05f9f061.CODE.-6.ADDR.0.INSTR.mov____%eax,%ebp.fuzz' already exists, skipping
[2022-01-11T11:51:04-0500][W][41564] subproc_checkTimeLimit():529 pid=41571 took too much time (limit 10 s). Killing it with SIGVTALRM
[2022-01-11T11:51:04-0500][W][41565] subproc_checkTimeLimit():529 pid=41570 took too much time (limit 10 s). Killing it with SIGVTALRM
[2022-01-11T11:51:04-0500][W][41563] subproc_checkTimeLimit():529 pid=41568 took too much time (limit 10 s). Killing it with SIGVTALRM
[2022-01-11T11:51:05-0500][W][41564] subproc_checkTimeLimit():522 pid=41571 has already been signaled due to timeout. Killing it with SIGKILL
[2022-01-11T11:51:05-0500][W][41565] subproc_checkTimeLimit():522 pid=41570 has already been signaled due to timeout. Killing it with SIGKILL
[2022-01-11T11:51:05-0500][W][41564] subproc_checkTimeLimit():522 pid=41571 has already been signaled due to timeout. Killing it with SIGKILL
[2022-01-11T11:51:05-0500][W][41565] subproc_checkTimeLimit():522 pid=41570 has already been signaled due to timeout. Killing it with SIGKILL
[2022-01-11T11:51:05-0500][W][41564] subproc_checkTimeLimit():522 pid=41571 has already been signaled due to timeout. Killing it with SIGKILL
[2022-01-11T11:51:05-0500][W][41565] subproc_checkTimeLimit():522 pid=41570 has already been signaled due to timeout. Killing it with SIGKILL
[2022-01-11T11:51:05-0500][W][41563] subproc_checkTimeLimit():522 pid=41568 has already been signaled due to timeout. Killing it with SIGKILL
[2022-01-11T11:51:05-0500][W][41565] subproc_checkTimeLimit():522 pid=41570 has already been signaled due to timeout. Killing it with SIGKILL
[2022-01-11T11:51:05-0500][W][41564] subproc_checkTimeLimit():522 pid=41571 has already been signaled due to timeout. Killing it with SIGKILL
[2022-01-11T11:51:05-0500][W][41563] subproc_checkTimeLimit():522 pid=41568 has already been signaled due to timeout. Killing it with SIGKILL
[2022-01-11T11:51:05-0500][W][41564] subproc_checkTimeLimit():522 pid=41571 has already been signaled due to timeout. Killing it with SIGKILL
[2022-01-11T11:51:05-0500][W][41565] subproc_checkTimeLimit():522 pid=41570 has already been signaled due to timeout. Killing it with SIGKILL
[2022-01-11T11:51:05-0500][W][41563] subproc_checkTimeLimit():522 pid=41568 has already been signaled due to timeout. Killing it with SIGKILL
[2022-01-11T11:51:05-0500][W][41564] subproc_checkTimeLimit():522 pid=41571 has already been signaled due to timeout. Killing it with SIGKILL
[2022-01-11T11:51:05-0500][W][41565] subproc_checkTimeLimit():522 pid=41570 has already been signaled due to timeout. Killing it with SIGKILL
…-- continue --…
However, it appears the crashed threads are not able to continue fuzzing: I keep getting the warning above indefinitely, and no further progress is made. The behavior is the same with or without --tmout_sigvtalrm.