Is it possible to create 2 SFTPGo nodes? #492
-
Hi, I am using the SFTPGo Docker image and it is working very well. Is it possible to run 2 SFTPGo nodes to provide an HA SFTP service? If yes, how do I manage the configuration between the 2 nodes? Thanks. Best Regards,
Replies: 3 comments 1 reply
-
Hi,
several options are available. It mostly depends on your use case. Please have a look at some previous issues/discussions for ideas: #281, #328, #466.

In general you can load balance incoming connections using HAProxy or something similar. SFTPGo supports the PROXY protocol for SFTP/FTP. For HTTP you can use a standard reverse proxy.

For the data provider you can simply use a shared database (MySQL/PostgreSQL) or configure replication/HA on the database itself. If you need a distributed data provider, CockroachDB could be a convenient choice.

To share data you don't have to do anything if you use a cloud provider like S3, GCS or Azure Blob; otherwise you can use something like Ceph or the ancient DRBD. A standard shared storage mounted via SMB/NFS should work too. Some users use a MinIO cluster for shared data.
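For reference, a minimal sketch of the load-balancing part described above, assuming an HAProxy front end and two hypothetical SFTPGo nodes (the names, addresses and ports are placeholders, not from this thread). `send-proxy` enables the PROXY protocol towards the backends, which SFTPGo can accept for SFTP/FTP when its proxy protocol support is enabled in the configuration:

```
# Hypothetical HAProxy config: TCP-mode load balancing for SFTP
# across two SFTPGo nodes, forwarding client IPs via PROXY protocol.
frontend sftp_in
    bind :2222
    mode tcp
    default_backend sftpgo_nodes

backend sftpgo_nodes
    mode tcp
    balance leastconn
    # send-proxy: prepend the PROXY protocol header so SFTPGo
    # sees the real client address instead of the LB's.
    server node1 10.0.0.11:2222 check send-proxy
    server node2 10.0.0.12:2222 check send-proxy
```

With this shape, both nodes point at the same shared database, so either one can serve any user.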
-
I've got 2 nodes, in Docker, behind a Google TCP load balancer, sharing a common Cloud SQL database for the user list and a common bucket for file storage. It works really well, and because they are in an instance group with a healthcheck (on /healthz) they will recover fairly seamlessly if one of the nodes fails.

There's no reason that couldn't be an auto-scaling group, so if your load increases the number of serving nodes increases, though you'd be hard pushed to get enough traffic through it to make it scale. All my nodes are in the same region in different AZs, but in theory you could go global provided your database was accessible globally (or you had read replicas set up).

The only issue I had was that you need to set up sticky sessions on the admin interface, otherwise your token won't be valid when the load balancer directs you to the other node. The SFTP port is fine; just the HTTP port needs to be sticky.
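The sticky-session point above can be sketched with a reverse proxy in front of the web admin. This is a hypothetical nginx fragment (addresses and ports are placeholders), using `ip_hash` so the token issued by one node is always presented back to that same node:

```
# Hypothetical nginx config: keep the SFTPGo web admin "sticky"
# so the auth token stays valid on the node that issued it.
upstream sftpgo_http {
    ip_hash;                    # client-IP based stickiness
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://sftpgo_http;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

A cloud load balancer's built-in session affinity (as used with the Google LB above) achieves the same thing; `ip_hash` is just the simplest self-hosted equivalent.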
-
We kind of plan to run a similar setup. We have a Podman pod with 4 SFTPGo containers and 1 ClamAV container. 3 SFTPGo servers are facing the internet: one for FTP and explicit FTPS, the second for implicit FTPS, and the third for SFTP. Why? So I can potentially close one service and keep the others running with no interruptions. The 4th SFTPGo is for the internal system to fetch the uploaded data. In between sits ClamAV, which scans all uploaded data before it is put on the 4th SFTPGo. Oh, and we plan to have a similar setup on a machine connected to the second ISP.
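A rough compose-style sketch of that per-protocol split, assuming the official `drakkan/sftpgo` image and SFTPGo's documented `SFTPGO_*` environment-variable convention for overriding config (the service names, ports and the ClamAV wiring are placeholders, not the poster's actual setup):

```yaml
# Hypothetical sketch: one SFTPGo container per protocol, plus ClamAV.
services:
  sftpgo-sftp:
    image: drakkan/sftpgo:latest
    environment:
      - SFTPGO_SFTPD__BINDINGS__0__PORT=2022
    ports:
      - "2022:2022"
  sftpgo-ftp:
    image: drakkan/sftpgo:latest
    environment:
      - SFTPGO_FTPD__BINDINGS__0__PORT=2121
    ports:
      - "2121:2121"
  clamav:
    image: clamav/clamav:latest
```

Running one container per protocol means each listener can be stopped or upgraded independently, which is the "close one service, keep the others running" property described above.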