
PtyProcess.spawn (and thus pexpect) slowdown in close() loop #50


Description

@kolyshkin

The following code in ptyprocess

# Do not allow child to inherit open file descriptors from parent,
# with the exception of the exec_err_pipe_write of the pipe
# and pass_fds.
# Impose ceiling on max_fd: AIX bugfix for users with unlimited
# nofiles where resource.RLIMIT_NOFILE is 2^63-1 and os.closerange()
# occasionally raises out of range error
max_fd = min(1048576, resource.getrlimit(resource.RLIMIT_NOFILE)[0])
spass_fds = sorted(set(pass_fds) | {exec_err_pipe_write})
for pair in zip([2] + spass_fds, spass_fds + [max_fd]):
    os.closerange(pair[0]+1, pair[1])

loops through all possible file descriptors in order to close them (note that closerange() is implemented as a loop, at least on Linux). When the limit on open fds (a.k.a. ulimit -n, RLIMIT_NOFILE, SC_OPEN_MAX) is set very high (for example, recent Docker sets it to 1024*1024), this loop takes considerable time, as it results in roughly a million close() syscalls.
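A minimal sketch to make the slowdown visible (not from the issue; it assumes Linux, Python 3, and that the hard nofile limit is high, e.g. Docker's 1024*1024): raise the soft RLIMIT_NOFILE to the hard limit and time a single spawn.

import resource
import time

import ptyprocess

# Raising the soft limit up to the hard limit needs no extra privileges.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

start = time.monotonic()
proc = ptyprocess.PtyProcess.spawn(['true'])
proc.wait()
print('spawn with RLIMIT_NOFILE=%d took %.3fs'
      % (hard, time.monotonic() - start))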

The solution (at least for Linux and Darwin) is to obtain the list of actually open fds and only close those. This is implemented in the subprocess module in Python 3, and there is a backport of it to Python 2 called subprocess32.
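A hedged sketch of that approach (the helper name _close_open_fds and the exact fallback are illustrative, not the actual ptyprocess or subprocess code): list the open descriptors from /proc/self/fd on Linux (or /dev/fd on Darwin) and close only those, falling back to the full-range loop when neither is available.

import os
import resource

def _close_open_fds(keep):
    """Close all open fds above 2 except those in `keep`."""
    try:
        # Linux exposes /proc/self/fd, Darwin exposes /dev/fd; both list
        # the fds currently open in this process.
        fd_dir = '/proc/self/fd' if os.path.exists('/proc/self/fd') else '/dev/fd'
        open_fds = [int(name) for name in os.listdir(fd_dir)]
    except OSError:
        # Fallback: sweep the whole range, as the current code does.
        max_fd = min(1048576, resource.getrlimit(resource.RLIMIT_NOFILE)[0])
        open_fds = range(3, max_fd)
    for fd in open_fds:
        if fd > 2 and fd not in keep:
            try:
                os.close(fd)
            except OSError:
                pass  # already closed (e.g. the listdir fd itself)

This mirrors what Python 3's subprocess does internally, which is one reason reusing subprocess (or subprocess32 on Python 2) is attractive here.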

This issue was originally reported against Docker: docker/for-linux#502

Another good reason for using subprocess (it is thread-safe) is described in #43.
