Replies: 2 comments
-
Hi, thanks for using SFTPGo. The open-source version uses an unlinked disk file for transfers to and from cloud storage backends; I recall this was also discussed in previous threads. We've improved this behavior in our private, Open Core edition, which is used in our SaaS offerings and is currently available exclusively to Enterprise plan subscribers. We have no plans to backport this improvement to the open-source version, sorry. Thank you for your understanding.
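A note on what "unlinked disk file" means here: the buffer file is created and then deleted right away while the process keeps its descriptor open, so the data occupies real disk space but never appears in any directory listing, and the space is released the moment the descriptor is closed. The Go sketch below only illustrates that general pattern; it is not SFTPGo's actual code, and the file name prefix and sizes are made up.

```go
// Minimal sketch of the "unlinked disk file" pattern (illustrative only).
package main

import (
	"fmt"
	"os"
)

func main() {
	// Create a temporary file, then unlink it immediately: after this,
	// "ls" will not show it, but the open handle keeps the data alive.
	f, err := os.CreateTemp("", "upload-buffer-*")
	if err != nil {
		panic(err)
	}
	if err := os.Remove(f.Name()); err != nil {
		panic(err)
	}
	// Writes through the open handle still consume real disk space,
	// so "df" usage grows even though no file is visible anywhere.
	chunk := make([]byte, 1<<20) // 1 MiB per write; values are arbitrary
	for i := 0; i < 100; i++ {
		if _, err := f.Write(chunk); err != nil {
			panic(err)
		}
	}
	fmt.Println("no visible file, yet ~100 MiB of disk space is in use")
	// Closing the last descriptor releases the space at once, which is
	// why usage snaps back to normal when a transfer is aborted.
	f.Close()
}
```

If such a buffer is created on a filesystem inside the container's writable layer rather than on a bind-mounted volume, the Unraid docker.img grows during the upload even though no file is ever visible, which would match the behavior described in the question below.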
-
Oh, I see, I must have missed that. Well, that's a bummer for me. Maybe not a deal breaker, but still a bit annoying. I understand why you have to do it that way, though. Looking forward to this improvement coming to the free version.
-
Hi,
I'm encountering an issue with SFTPGo, which I have running in Docker on my Unraid server. While most functionality works correctly, I'm facing a significant problem when uploading large files, specifically files considerably larger than my Docker image's effective size limit (the Unraid docker.img file, which fills up during the upload).
For example, if I attempt to upload a 30GB file while my Docker image is configured to (or effectively limited to) 20GB, the space consumed by the Docker container steadily increases during the upload. It eventually reaches 100% utilization, at which point the upload is interrupted (I'm talking here only about WebClient uploads).
There's nothing indicative in the SFTPGo logs within Docker. SFTPGo is set up behind an Nginx Proxy, but the client_max_body_size (or equivalent) in Nginx is configured to a very high value (I also use Pingvin Share through the same proxy and have successfully uploaded files up to 40GB without any issues).
I'm struggling to determine what's causing this or where these large files are being temporarily written within the SFTPGo Docker container. In the SFTPGo configuration, I've tried mapping the temp_path to a directory on my host system (which is correctly mapped as a volume into the container), but no matter what path I specify, no temporary files appear in that designated location during uploads.
I'm out of ideas and quite frustrated, as I wanted to provide an easy way for my clients to upload files directly. However, since these individual files can be as large as 50GB, the current setup is unusable.
Interestingly, once the container's reported space usage hits 100%, the upload is of course terminated, and the container's space usage then reverts to its normal size. I cannot understand why this is happening.
I've set upload_mode to 1 in SFTPGo (which, as I understand it, should stage uploads in a temporary directory), but this has made no difference. It's completely unclear where this temporary directory is actually being created or where the data is being written; I've searched extensively, and it seems the data is being temporarily stored somewhere within the container's internal filesystem instead of the mapped temp_path. I've also tried upload_mode 0 (the default), but the same issue persists.

I've experimented with various paths for temp_path, but none seem to be respected, and no temporary files or folders are visible in the mapped locations. I've double-checked permissions, and all volume mappings appear to be correctly configured.
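One way to confirm that an invisible, already-deleted buffer file is what is eating the space is to look for open-but-deleted file descriptors inside the container while a large upload is running. The Go sketch below is a generic diagnostic idea, not a tool shipped with SFTPGo; it assumes it is run inside the container (for example via docker exec) with permission to read /proc.

```go
// Illustrative diagnostic: list open file descriptors that point at
// deleted files, which is where an unlinked upload buffer would show up.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

func main() {
	procs, err := os.ReadDir("/proc")
	if err != nil {
		panic(err)
	}
	for _, p := range procs {
		pid, err := strconv.Atoi(p.Name())
		if err != nil {
			continue // not a process directory
		}
		fdDir := filepath.Join("/proc", p.Name(), "fd")
		fds, err := os.ReadDir(fdDir)
		if err != nil {
			continue // process exited or permission denied
		}
		for _, fd := range fds {
			fdPath := filepath.Join(fdDir, fd.Name())
			target, err := os.Readlink(fdPath)
			if err != nil || !strings.HasSuffix(target, "(deleted)") {
				continue
			}
			// Stat through the descriptor to see how much space the
			// invisible file is currently occupying.
			if info, err := os.Stat(fdPath); err == nil {
				fmt.Printf("pid %d: %s (%d bytes)\n", pid, target, info.Size())
			}
		}
	}
}
```

If it reports a steadily growing deleted file held by the SFTPGo process, that would confirm the buffering behavior described in the maintainer's reply above and show which filesystem the buffer actually lives on.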