
Deadlock if synchronous subprocess fills pipe · Issue #82 · sheredom/subprocess.h

Open

Description

rongcuid (Author)

If a subprocess outputs a large amount of data, both the parent and the subprocess deadlock because the pipe blocks:

#include <stdlib.h>
#include <stdio.h>

#include "subprocess.h"

int main() {
    const char *command_line[] = {"dd", "if=/dev/zero", "bs=1k", "count=65", NULL};
    struct subprocess_s process;
    int result = subprocess_create(command_line, subprocess_option_search_user_path, &process);
    if (result) {
        fprintf(stderr, "Failed to create subprocess: %d\n", result);
        return 1;
    }
    int proc_return;
    result = subprocess_join(&process, &proc_return);
    if (result) {
        fprintf(stderr, "Failed to join subprocess\n");
        return 1;
    }
    printf("Subprocess returned %d\n", proc_return);
    result = subprocess_destroy(&process);
    if (result) {
        fprintf(stderr, "Failed to destroy subprocess\n");
        return 1;
    }
    return 0;
}

Notice that the dd subprocess writes 65 KiB to stdout, which exceeds Linux's default pipe buffer size (64 KiB), so the child blocks on write. However, since subprocess_read_stdout must be used after joining, the parent process cannot make progress either: it can neither drain the pipe nor wait for the child to finish.

Activity

sheredom (Owner) commented on Feb 20, 2024

I can't think of a better solution than to advise you use async. I don't really want to start spawning threads behind your back to handle this kind of thing (and don't know of another way to generally fix this!).

rongcuid (Author) commented on Feb 20, 2024

In my case, I am actually not using the outputs, so it might be good to allow ignoring stdout/stderr.

Actually, did I misunderstand something? It seems like I can read from stdout before I join. At least it works on Linux.

sheredom (Owner) commented on Feb 20, 2024

I might be able to add an option to ignore stdout/stderr aye.

You should be able to read before join, IIRC; it's just that if there isn't enough data to read, it could block forever.

rongcuid (Author) commented on Feb 20, 2024

I thought it should return EOF?

trapexit commented on Oct 13, 2024

It would return EOF if the file descriptor is closed, but if the app is still running and doesn't output anything, it could block. I can't speak for Windows, but on POSIX you really should open the pipe non-blocking if you're not using select/poll/epoll to check for data availability.

Separately... I'm not sure I understand the "async" feature. On Unix at least, it is just doing a read, which would block, and is not fundamentally different from getting the FILE* and calling fread, except for buffering.


