`binlog_server` is a command-line utility that can be thought of as an enhanced version of `mysqlbinlog` in `--read-from-remote-server` mode. It acts as a replication client and can stream binary log events from a remote Oracle MySQL Server / Percona Server for MySQL both to a local filesystem and to cloud storage (currently AWS S3 or an S3-compatible service such as MinIO). It is capable of automatically reconnecting to the remote server and resuming operation from the point at which it was previously stopped / terminated.
It is written in portable C++ following C++20 standard best practices.
Currently prebuilt binaries are not available.
- CMake 3.20.0+
- Clang (clang-15..clang-19) or GCC (gcc-12..gcc-14)
- Boost libraries 1.88.0 (git version, not the source tarball)
- MySQL client library 8.0.x (`libmysqlclient`)
- CURL library (`libcurl`) 8.6.0+
- AWS SDK for C++ 1.11.570
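If you want to sanity-check the toolchain before building, something like the following may help (one possible set of checks; the exact package names and available versions depend on your distribution):

```shell
cmake --version          # expect 3.20.0 or newer
g++ --version            # expect gcc-12..gcc-14 (or check clang++ --version)
curl-config --version    # expect libcurl 8.6.0 or newer
mysql_config --version   # expect an 8.0.x MySQL client library
```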
```shell
mkdir ws
cd ws
```
Every subsequent step will assume that we are currently inside the `ws` directory unless explicitly stated otherwise.
```shell
git clone https://github.com/Percona-Lab/percona-binlog-server.git
```
Define `BUILD_PRESET` depending on whether you want to build in the Debug, Release, or Debug-with-Address-Sanitizer configuration and which toolset you would like to use.
```shell
export BUILD_PRESET=<configuration>_<toolset>
```
The supported values for `<configuration>` are `debug`, `release`, and `asan`.
The supported values for `<toolset>` are `gcc14` and `clang19`.
For instance, if you want to build in the RelWithDebInfo configuration (the `release` preset) using GCC 14, please specify
```shell
export BUILD_PRESET=release_gcc14
```
Clone, configure, build, and install the Boost libraries:
```shell
git clone --recurse-submodules -b boost-1.88.0 --jobs=8 https://github.com/boostorg/boost.git
cd boost
git switch -c required_release
cd ..
cp ./percona-binlog-server/extra/cmake_presets/boost/CMakePresets.json ./boost
cmake ./boost --preset ${BUILD_PRESET}
cmake --build ./boost-build-${BUILD_PRESET} --parallel
cmake --install ./boost-build-${BUILD_PRESET}
```
Clone, configure, build, and install the AWS SDK for C++:
```shell
git clone --recurse-submodules -b 1.11.570 --jobs=8 https://github.com/aws/aws-sdk-cpp
cd aws-sdk-cpp
git switch -c required_release
cd ..
cp ./percona-binlog-server/extra/cmake_presets/aws-sdk-cpp/CMakePresets.json ./aws-sdk-cpp
cmake ./aws-sdk-cpp --preset ${BUILD_PRESET}
cmake --build ./aws-sdk-cpp-build-${BUILD_PRESET} --parallel
cmake --install ./aws-sdk-cpp-build-${BUILD_PRESET}
```
The main application source code should have already been cloned from the git repo during the "Getting the build scripts and the source code" step.
```shell
cmake ./percona-binlog-server --preset ${BUILD_PRESET}
cmake --build ./percona-binlog-server-build-${BUILD_PRESET} --parallel
```
The resulting binary can be found under the following path: `ws/percona-binlog-server-build-${BUILD_PRESET}/binlog_server`.
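As a quick smoke test, you can ask the freshly built binary to print its version (see the `version` operation mode below):

```shell
./percona-binlog-server-build-${BUILD_PRESET}/binlog_server version
```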
Please run
```shell
./binlog_server <operation_mode> [ <json_config_file> ]
```
where:
- `<operation_mode>` can be either `version`, `fetch`, or `pull`,
- `<json_config_file>` is an optional parameter (required only when `<operation_mode>` is not `version`) that represents a path to a JSON configuration file (described below).
The Percona Binary Log Server utility can operate in three modes:
- 'version'
- 'fetch'
- 'pull'
In 'version' mode, the utility simply prints its current semantic version (embedded into the binary) to the standard output and exits with a "success" (0) exit code.
For instance,
```shell
./binlog_server version
```
may print
```
0.1.0
```
In 'fetch' mode, the utility tries to connect to a remote MySQL server, switch the connection to replication mode, and read events from all available binary logs already stored on the server. After reading the very last event, the utility gracefully disconnects and exits. Any error (network issues, server down, out of space, etc.) encountered in this mode results in immediate termination of the program, making sure that the storage is left in a consistent state.
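For example, assuming the JSON configuration described below has been saved as `config.json` (a file name chosen here just for illustration), a one-shot download would look like this:

```shell
./binlog_server fetch config.json
```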
In 'pull' mode, the utility continuously tries to connect to a remote MySQL server, switch the connection to replication mode, and read binary log events. After reading the very last one, the utility does not close the connection immediately but instead waits for `<connection.read_timeout>` seconds for the server to generate more events. If this period of time elapses, the utility closes the MySQL connection and enters the idle mode, in which it just waits for `<replication.idle_time>` seconds in a disconnected state. After that, another reconnection attempt is made and everything starts from the beginning.
Any network-related error (network issues, server down, etc.) encountered in this mode does not result in immediate termination of the program; instead, another reconnection attempt is made. More serious errors (out of space, etc.) cause program termination.
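For instance, to keep a 'pull' instance running in the background with its console output captured to a file (file names here are illustrative):

```shell
nohup ./binlog_server pull config.json > binsrv.out 2>&1 &
```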
The Percona Binary Log Server configuration file has the following format.
```json
{
  "logger": {
    "level": "debug",
    "file": "binsrv.log"
  },
  "connection": {
    "host": "127.0.0.1",
    "port": 3306,
    "user": "rpl_user",
    "password": "rpl_password",
    "connect_timeout": 20,
    "read_timeout": 60,
    "write_timeout": 60,
    "ssl": {
      "mode": "verify_identity",
      "ca": "/etc/mysql/ca.pem",
      "capath": "/etc/mysql/cadir",
      "crl": "/etc/mysql/crl-client-revoked.crl",
      "crlpath": "/etc/mysql/crldir",
      "cert": "/etc/mysql/client-cert.pem",
      "key": "/etc/mysql/client-key.pem",
      "cipher": "ECDHE-RSA-AES128-GCM-SHA256"
    },
    "tls": {
      "ciphersuites": "TLS_AES_256_GCM_SHA384",
      "version": "TLSv1.3"
    }
  },
  "replication": {
    "server_id": 42,
    "idle_time": 10,
    "verify_checksum": true
  },
  "storage": {
    "backend": "s3",
    "uri": "https://key_id:secret@192.168.0.100:9000/binsrv-bucket/vault",
    "fs_buffer_directory": "/tmp/binsrv",
    "checkpoint_size": "128M",
    "checkpoint_interval": "30s"
  }
}
```
- `<logger.level>` - sets the minimum severity of the log messages that the user wants to appear in the log output; can be one of `trace` / `debug` / `info` / `warning` / `error` / `fatal` (explained below).
- `<logger.file>` - can be either a path to a file on a local filesystem to which all log messages will be written, or an empty string `""` meaning that all the output will go to the console (STDOUT).
Each message written to the log has a severity level associated with it.
Currently we use the following mapping:
- `fatal` - currently not used,
- `error` - used for printing messages coming from caught exceptions,
- `warning` - currently not used,
- `info` - the primary log severity level, used mostly to indicate progress (configuration file read, storage created, connection established, etc.),
- `debug` - used to print function names from caught exceptions and to print the data from parsed binary log events,
- `trace` - used to print source file name / line number / position from caught exceptions and to print raw data (hex dumps) of binary log events.
- `<connection.host>` - MySQL server host name (e.g. `127.0.0.1`, `192.168.0.100`, `dbsrv.mydomain.com`, etc.). Please do not use `localhost` here, as it is interpreted specially by `libmysqlclient` and instructs the library to use a Unix socket file for the connection instead of the TCP protocol - use `127.0.0.1` instead (see this page for more details).
- `<connection.port>` - MySQL server port (e.g. `3306` - the default MySQL server port).
- `<connection.user>` - the name of the MySQL user that has the REPLICATION SLAVE privilege.
- `<connection.password>` - the password for this MySQL user.
- `<connection.connect_timeout>` - the number of seconds the MySQL client library will wait to establish a connection with a remote host.
- `<connection.read_timeout>` - the number of seconds the MySQL client library will wait to read data from a remote server (this parameter may affect the responsiveness of the program to graceful termination - see below).
- `<connection.write_timeout>` - the number of seconds the MySQL client library will wait to write data to a remote server.
- `<connection.ssl.mode>` - specifies the desired security state of the connection to the MySQL server; can be one of `disabled` / `preferred` / `required` / `verify_ca` / `verify_identity` (an equivalent of the `--ssl-mode` `mysql` utility command line option).
- `<connection.ssl.ca>` (optional) - specifies the file that contains the list of trusted SSL Certificate Authorities (an equivalent of the `--ssl-ca` `mysql` utility command line option).
- `<connection.ssl.capath>` (optional) - specifies the directory that contains trusted SSL Certificate Authority certificate files (an equivalent of the `--ssl-capath` `mysql` utility command line option).
- `<connection.ssl.crl>` (optional) - specifies the file that contains certificate revocation lists (an equivalent of the `--ssl-crl` `mysql` utility command line option).
- `<connection.ssl.crlpath>` (optional) - specifies the directory that contains certificate revocation-list files (an equivalent of the `--ssl-crlpath` `mysql` utility command line option).
- `<connection.ssl.cert>` (optional) - specifies the file that contains an X.509 client certificate (an equivalent of the `--ssl-cert` `mysql` utility command line option).
- `<connection.ssl.key>` (optional) - specifies the file that contains the private key associated with `<connection.ssl.cert>` (an equivalent of the `--ssl-key` `mysql` utility command line option).
- `<connection.ssl.cipher>` (optional) - specifies the list of permissible ciphers for connection encryption (an equivalent of the `--ssl-cipher` `mysql` utility command line option).
- `<connection.tls.ciphersuites>` (optional) - specifies the list of permissible TLSv1.3 ciphersuites for encrypted connections (an equivalent of the `--tls-ciphersuites` `mysql` utility command line option).
- `<connection.tls.version>` (optional) - specifies the list of permissible TLS protocols for encrypted connections (an equivalent of the `--tls-version` `mysql` utility command line option).
- `<replication.server_id>` - specifies the server ID that the utility will use when connecting to a remote MySQL server (similar to the `--connection-server-id` `mysqlbinlog` command line option).
- `<replication.idle_time>` - the number of seconds the utility will spend in disconnected mode between reconnection attempts.
- `<replication.verify_checksum>` - a boolean value which specifies whether the utility should verify event checksums.
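For reference, a MySQL account matching the sample `<connection.user>` / `<connection.password>` values above could be created roughly as follows (a sketch only; the admin connection details, account host restrictions, and authentication settings are up to your environment):

```shell
# hypothetical admin connection; adjust host/credentials for your setup
mysql -h 127.0.0.1 -P 3306 -u root -p <<'SQL'
CREATE USER 'rpl_user'@'%' IDENTIFIED BY 'rpl_password';
GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'%';
SQL
```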
- `<storage.backend>` - the type of the storage where the received binary logs should be stored:
  - `file` - local filesystem
  - `s3` - AWS S3 or an S3-compatible server (MinIO, etc.)
- `<storage.uri>` - specifies the location (either local or remote) where the received binary logs should be stored.
- `<storage.fs_buffer_directory>` (optional) - specifies the location on the local filesystem where partially downloaded binlog files should be stored. If not specified, the default OS temporary directory will be used (e.g. `/tmp` on Linux). Currently, this parameter is meaningful only for non-`file` storage backends.
- `<storage.checkpoint_size>` (optional) - specifies the data portion size after receiving which the backend storage should flush its internal buffers and write the received binlog data permanently. If not set or set to zero, checkpointing by size will be disabled. The value is expected to be a string containing an integer followed by an optional suffix 'K' / 'M' / 'G' / 'T' / 'P' (e.g. /\d+[KMGTP]?/):
  - no suffix (e.g. "42") means no multiplier, the size will be interpreted in bytes ('42 * 1' bytes)
  - 'K' (e.g. "42K") means a '2^10' multiplier ('42 * 1024' bytes)
  - 'M' (e.g. "42M") means a '2^20' multiplier ('42 * 1048576' bytes)
  - 'G' (e.g. "42G") means a '2^30' multiplier ('42 * 2^30' bytes)
  - 'T' (e.g. "42T") means a '2^40' multiplier ('42 * 2^40' bytes)
  - 'P' (e.g. "42P") means a '2^50' multiplier ('42 * 2^50' bytes)
- `<storage.checkpoint_interval>` (optional) - specifies the time interval after which the backend storage should flush its internal buffers and write the received binlog data permanently. If not set or set to zero, checkpointing by time interval will be disabled. The value is expected to be a string containing an integer followed by an optional suffix 's' / 'm' / 'h' / 'd' (e.g. /\d+[smhd]?/):
  - no suffix (e.g. "42") or 's' (e.g. "42s") means seconds
  - 'm' (e.g. "42m") means minutes ('42 * 60' seconds)
  - 'h' (e.g. "42h") means hours ('42 * 60 * 60' seconds)
  - 'd' (e.g. "42d") means days ('42 * 60 * 60 * 24' seconds)
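As a quick illustration of the suffix arithmetic from the two lists above (plain shell, just to spell out the numbers):

```shell
# "128M" -> 128 * 2^20 bytes
echo $((128 * 1024 * 1024))   # prints 134217728
# "2h"   -> 2 * 60 * 60 seconds
echo $((2 * 60 * 60))         # prints 7200
```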
- When `<storage.backend>` is set to `file`, `<storage.uri>` must be `file://...`.
- When `<storage.backend>` is set to `s3`, `<storage.uri>` can be either `s3://...` for AWS S3, or `http://...` / `https://...` for S3-compatible services.
In the case of the local filesystem, the URI must have the following format:
`file://<local_fs_path>`, where `<local_fs_path>` is an absolute path on a local filesystem to a directory where the downloaded binary log files must be stored. Relative paths are not supported. For instance, `file:///home/user/vault`.
Please note the 3 forward slashes (2 from the protocol part `file://` and 1 from the absolute path).
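Putting it together, a minimal local-filesystem setup might look like the sketch below (all names, paths, and credentials are illustrative, taken from the sample configuration above; treating the `ssl` / `tls` sections as omittable is an assumption of this sketch):

```shell
# all names/paths below are illustrative
mkdir -p /home/user/vault
cat > fetch-local.json <<'EOF'
{
  "logger": { "level": "info", "file": "" },
  "connection": {
    "host": "127.0.0.1", "port": 3306,
    "user": "rpl_user", "password": "rpl_password",
    "connect_timeout": 20, "read_timeout": 60, "write_timeout": 60,
    "ssl": { "mode": "preferred" }
  },
  "replication": { "server_id": 42, "idle_time": 10, "verify_checksum": true },
  "storage": { "backend": "file", "uri": "file:///home/user/vault" }
}
EOF
./binlog_server fetch fetch-local.json
```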
In the case of AWS S3, the URI must have the following format:
`s3://[<access_key_id>:<secret_access_key>@]<bucket_name>[.<region>]/<path>`, where:
- `<access_key_id>` - the AWS key ID (the `<access_key_id>` / `<secret_access_key>` pair is optional),
- `<secret_access_key>` - the AWS secret access key (the `<access_key_id>` / `<secret_access_key>` pair is optional),
- `<bucket_name>` - the name of the AWS S3 bucket in which the data must be stored,
- `<region>` - the name of the AWS region (e.g. `us-east-1`) where this bucket was created (optional; if omitted, it will be auto-detected),
- `<path>` - a virtual path (key prefix) inside the bucket under which all the binary log files will be stored.
In the case of an S3-compatible service with a custom endpoint, the URI must have the following format:
`http[s]://[<access_key_id>:<secret_access_key>@]<host>[:<port>]/<bucket_name>/<path>`, where:
- `<host>` - either a host name or an IP address of an S3-compatible server,
- `<port>` - the port of the S3-compatible server to connect to (optional; if omitted, it will be either 80 or 443, depending on the URI scheme: HTTP or HTTPS).

Please note that in this case `<bucket_name>` must be specified as the very first segment of the URI path.
For example:
- `s3://binsrv-bucket/vault` - no AWS credentials specified, the `binsrv-bucket` bucket must be publicly write-accessible, the region will be auto-detected, `/vault` will be the virtual directory.
- `s3://binsrv-bucket.us-east-1/vault` - no AWS credentials specified, the `binsrv-bucket` bucket must be publicly write-accessible, the bucket must be created in the `us-east-1` region, `/vault` will be the virtual directory.
- `s3://key_id:secret@binsrv-bucket.us-east-1/vault` - `key_id` will be used as `AWS_ACCESS_KEY_ID`, `secret` will be used as `AWS_SECRET_ACCESS_KEY`, `binsrv-bucket` will be the name of the bucket, the bucket must be created in the `us-east-1` region, `/vault` will be the virtual directory.
- `http://key_id:secret@localhost:9000/binsrv-bucket/vault` - `key_id` will be used as `AWS_ACCESS_KEY_ID`, `secret` will be used as `AWS_SECRET_ACCESS_KEY`, `binsrv-bucket` will be the name of the bucket, `/vault` will be the virtual directory, `localhost:9000` will be the custom endpoint of the S3-compatible server, the connection will be established via the non-secure HTTP protocol.
- `https://key_id:secret@192.168.0.100:9000/binsrv-bucket/vault` - `key_id` will be used as `AWS_ACCESS_KEY_ID`, `secret` will be used as `AWS_SECRET_ACCESS_KEY`, `binsrv-bucket` will be the name of the bucket, `/vault` will be the virtual directory, `192.168.0.100:9000` will be the custom endpoint of the S3-compatible server, the connection will be established via the secure HTTPS protocol.
Please note that the S3 API does not provide a way to append a portion of data to an existing object. Currently, in our S3 storage backend, "append" operations are implemented as complete object overwrites, meaning data re-uploads. Practically, if your typical binlog file size is '1G' and you set `<storage.checkpoint_size>` to '256M', you will upload '256M + 512M + 768M + 1024M = 2560M' (about 2.5 times more than your binlog file size in this example). So, keep a balance between the value of this parameter and your typical binlog size. Similar concerns can be raised regarding enabling `<storage.checkpoint_interval>`.
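In general, with a checkpoint size of S and a binlog consisting of k chunks of S bytes each, the total amount uploaded is S * k * (k + 1) / 2. A quick check of the numbers above:

```shell
# 1G binlog, 256M checkpoints -> k = 4 chunks of 256M
echo $((256 * 4 * (4 + 1) / 2))   # prints 2560 (MiB actually uploaded)
```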
Running the utility for the second time (in any mode) results in resuming streaming from the position at which the previous run finished.
The user can request that the utility, operating in either 'fetch' or 'pull' mode, be gracefully terminated, leaving the storage in a consistent state. For this, the utility sets custom handlers for the following POSIX signals.
- `SIGINT` - for processing `^C` in the console.
- `SIGTERM` - for processing `kill <pid>`.
Because of the synchronous nature of the binlog API in the MySQL client library, there may still be a delay between receiving the signal and reacting to it. In the worst case, the user will have to wait for `<connection.read_timeout>` seconds (the value from the configuration) plus 1 second (the granularity of sleep intervals in the idle mode).
Please note that killing the program with `kill -9 <pid>` does not guarantee that all the internal file buffers will be flushed / temporary data uploaded to the cloud storage, and may result in losing some progress.
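For example, a graceful shutdown of a background 'pull' instance could look like this (one possible way to track the PID):

```shell
./binlog_server pull config.json &
BINSRV_PID=$!
# ... later, request graceful termination and wait for a clean exit
kill -TERM "${BINSRV_PID}"
wait "${BINSRV_PID}"
```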
Percona is dedicated to keeping open source open. Whenever possible, we strive to include permissive licensing for both our software and documentation. For this project, we are using version 2 of the GNU General Public License (GPLv2).