---
layout: default
title: Configure Log Export Container
nav_order: 2
---
- `LOG_EXPORT_CONTAINER_INPUT`: Container input format (`syslog-json`, `syslog-csv`, `tcp-json`, `tcp-csv`, `file-json`, or `file-csv`). Default: `syslog-json`.
- `LOG_EXPORT_CONTAINER_OUTPUT`: Container output storage (`stdout`, `remote-syslog`, `s3`, `cloudwatch`, `splunk-hec`, `datadog`, `azure-loganalytics`, `sumologic`, `kafka`, `mongo`, `logz`, `loki`, `elasticsearch`, and/or `bigquery`). Default: `stdout`. You can configure multiple storages, for example: `stdout s3 datadog` (see the example after this list).
- When using `LOG_EXPORT_CONTAINER_INPUT=file-json` or `LOG_EXPORT_CONTAINER_INPUT=file-csv`, add the variables listed in CONFIGURE_FILE_INPUT.md.
- When using `LOG_EXPORT_CONTAINER_OUTPUT=remote-syslog`, add the variables listed in CONFIGURE_REMOTE_SYSLOG.md.
- When using `LOG_EXPORT_CONTAINER_OUTPUT=s3`, add the variables listed in CONFIGURE_S3.md.
- When using `LOG_EXPORT_CONTAINER_OUTPUT=cloudwatch`, add the variables listed in CONFIGURE_CLOUDWATCH.md.
- When using `LOG_EXPORT_CONTAINER_OUTPUT=splunk-hec`, add the variables listed in CONFIGURE_SPLUNK_HEC.md.
- When using `LOG_EXPORT_CONTAINER_OUTPUT=datadog`, add the variables listed in CONFIGURE_DATADOG.md.
- When using `LOG_EXPORT_CONTAINER_OUTPUT=azure-loganalytics`, add the variables listed in CONFIGURE_AZURE_LOGANALYTICS.md.
- When using `LOG_EXPORT_CONTAINER_OUTPUT=sumologic`, add the variables listed in CONFIGURE_SUMOLOGIC.md.
- When using `LOG_EXPORT_CONTAINER_OUTPUT=kafka`, add the variables listed in CONFIGURE_KAFKA.md.
- When using `LOG_EXPORT_CONTAINER_OUTPUT=mongo`, add the variables listed in CONFIGURE_MONGO.md.
- When using `LOG_EXPORT_CONTAINER_OUTPUT=logz`, add the variables listed in CONFIGURE_LOGZ.md.
- When using `LOG_EXPORT_CONTAINER_OUTPUT=loki`, add the variables listed in CONFIGURE_LOKI.md.
- When using `LOG_EXPORT_CONTAINER_OUTPUT=elasticsearch`, add the variables listed in CONFIGURE_ELASTICSEARCH.md.
- When using `LOG_EXPORT_CONTAINER_OUTPUT=bigquery`, add the variables listed in CONFIGURE_BIGQUERY.md.
- When using `syslog-json` or `tcp-json`, specify `LOG_EXPORT_CONTAINER_DECODE_CHUNK_EVENTS=true` to decode chunk events. Possible values: `true` or `false`. It is not enabled by default. Refer to CONFIGURE_SSH_DECODE for more information.
- When using strongDM Audit, specify `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT=activities/15 resources/480 users/480 roles/480` to store the logs from strongDM Audit in your specified output. You can configure this option with whichever features and log extraction intervals you want. It is not enabled by default. Refer to CONFIGURE_SDM_AUDIT for more information.
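As a quick illustration, the sketch below shows one way these variables might be passed to the container with `docker run`. The image name here is an assumption for illustration only, and any port or network settings depend on the input you choose; the output-specific variables still come from the CONFIGURE_*.md files listed above.

```shell
# Minimal sketch: run the Log Export Container with a syslog-json input
# and multiple output storages. "strongdm/log-export-container" is an
# assumed image name, not taken from this page.
docker run -d \
  --name log-export-container \
  -e LOG_EXPORT_CONTAINER_INPUT=syslog-json \
  -e LOG_EXPORT_CONTAINER_OUTPUT="stdout s3 datadog" \
  -e LOG_EXPORT_CONTAINER_DECODE_CHUNK_EVENTS=true \
  strongdm/log-export-container
# Outputs such as s3 and datadog also need their own variables,
# listed in CONFIGURE_S3.md and CONFIGURE_DATADOG.md.
```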
We moved the section describing the variable `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES` to the CONFIGURE_SDM_AUDIT file. Refer to it to learn how these two variables behave.
Log traces include `sourceAddress` and `sourceHostname`. By default, Docker uses `--net=bridge` networking. You need to enable the `--net=host` networking driver in order to see the real client/gateway IP and hostname; otherwise you will see the Docker gateway's info, for example: `"sourceAddress":"172.17.0.1"`.

IMPORTANT: The host networking driver only works on Linux.
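For example, a run command along these lines (image name assumed, as above) enables host networking so the real source address is preserved:

```shell
# Host networking (Linux only): the container sees the real client/gateway
# IP instead of the Docker bridge gateway (e.g. 172.17.0.1).
docker run -d \
  --net=host \
  -e LOG_EXPORT_CONTAINER_INPUT=syslog-json \
  -e LOG_EXPORT_CONTAINER_OUTPUT=stdout \
  strongdm/log-export-container
```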
By default, the container only classifies the different log traces (e.g. start, chunk, postStart); there are no extra processing steps involved. However, you can include additional processing filters if needed. To do that, override the `process.conf` file. For more details, refer to CONFIGURE_PROCESSING.md.
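One way to override that file is to bind-mount your own copy over the one shipped in the image. The in-container path below is a placeholder, not the documented location; CONFIGURE_PROCESSING.md covers the actual path and the filter syntax.

```shell
# Sketch: replace the default process.conf with a custom one via a bind mount.
# "/path/in/container/process.conf" is a placeholder; check
# CONFIGURE_PROCESSING.md for where the file actually lives.
docker run -d \
  -v "$(pwd)/process.conf:/path/in/container/process.conf:ro" \
  -e LOG_EXPORT_CONTAINER_INPUT=syslog-json \
  -e LOG_EXPORT_CONTAINER_OUTPUT=stdout \
  strongdm/log-export-container
```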
The current version of the container only supports rsyslog; refer to the image below for a typical configuration:
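If your gateways or relays ship logs with rsyslog, a forwarding rule like the sketch below is the usual shape. The host and port are placeholders; use the address and port your container input actually listens on.

```shell
# Sketch: forward all syslog traffic from a relay/gateway host to the
# Log Export Container over TCP ("@@" means TCP, "@" would mean UDP).
# <lec-host> and 5514 are placeholders, not values taken from this page.
echo '*.* @@<lec-host>:5514' | sudo tee /etc/rsyslog.d/90-log-export-container.conf
sudo systemctl restart rsyslog
```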
It's possible to set up a high availability environment using an AWS Load Balancer with more than one LEC instance. Please refer to this tutorial.