
No retries on ConnectTimeoutError #951

Closed
@arogozhnikov

Description


I'm sometimes observing ConnectTimeoutError, which is likely caused by the network, and I'm searching for the right way to handle timeouts.

I see s3fs has retries with exponential backoff, but it ignores ConnectTimeoutError.

How this can be tested/reproduced:

```bash
sudo apt install iproute2    # likely you already have it
sudo tc qdisc add dev eth0 root netem delay 3500ms    # set delay on network
# sudo tc qdisc del dev eth0 root    # use to remove delay when done with testing
```

Now, in Python:

```python
import s3fs
s3fs.S3FileSystem().exists('s3://example-bucker/example-folder')
```

This crashes with:

```
ConnectTimeoutError: Connect timeout on endpoint URL: "https://example-bucker.s3.us-west-2.amazonaws.com/example-folder"
```
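For completeness: the connect timeout that fires here comes from botocore, and it should be configurable via the `config_kwargs` that s3fs forwards to `botocore.config.Config`. A possible stopgap sketch is below; the option names are botocore's, and whether this helps with the repro above is untested:

```python
# Hypothetical mitigation (untested against the repro above): lengthen
# botocore's connect timeout and enable botocore-level retries.
# s3fs forwards config_kwargs to botocore.config.Config.
config_kwargs = {
    "connect_timeout": 60,  # seconds before ConnectTimeoutError is raised
    "retries": {"max_attempts": 10, "mode": "standard"},  # botocore retry config
}
# fs = s3fs.S3FileSystem(config_kwargs=config_kwargs)
```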

This failure is expected, BUT when I insert logging into s3fs, I can see it does not retry this exception (even though the default is retries=5). Should this exception be retried?
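Until s3fs retries this exception itself, a caller-side workaround is possible. Here is a minimal sketch of exponential backoff around the failing call; the helper and the choice of retryable exceptions are mine, not part of the s3fs API:

```python
import random
import time

def retry_with_backoff(fn, retries=5, base_delay=0.5, retryable=(Exception,)):
    """Call fn(); on a retryable exception, sleep with exponential backoff and retry."""
    for attempt in range(retries):
        try:
            return fn()
        except retryable:
            if attempt == retries - 1:
                raise  # out of attempts: surface the original exception
            # base_delay, 2*base_delay, 4*base_delay, ... plus a little jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Hypothetical usage: treat botocore's ConnectTimeoutError as retryable.
# from botocore.exceptions import ConnectTimeoutError
# fs = s3fs.S3FileSystem()
# retry_with_backoff(lambda: fs.exists('s3://example-bucker/example-folder'),
#                    retryable=(ConnectTimeoutError,))
```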
