
[pfSense]: Adding some parsing to pfsense integration #15884

@Maurice-De

Description


Integration Name

pfSense [pfsense]

Dataset Name

No response

Integration Version

1.23.1

Agent Version

8.18.8

OS Version and Architecture

Red Hat Enterprise Linux 9.6 (Security Onion installation)

User Goal

I am trying to get more logs parsed from pfSense; a lot of them are dropped right now:

  • NGINX parsing (webconfigurator logs)
  • CRON
  • Save all non-parsed logs with a tag pfsense-other (ideally this would be a toggle/checkbox, like the preserve-original-event option, so people can decide for themselves whether they want the other events or not)

Existing Features

A lot of logs are dropped while still potentially being of great value.

What did you see?

The drop starts on this line: any event whose provider is not matched by that if statement is discarded.

Anything else?

For cron and nginx I have the following changes working in my setup. Keep in mind that I use the names and configs from the running Elastic stack, so I do not know what the template names should be called; I have tried to place the code where I think my changes fit (there may also be extra escape characters from my copy-paste actions):

Between lines 110 and 111 I have added:
```json
{
  "pipeline": {
    "name": "logs-pfsense.log-1.23.1-nginx",
    "if": "ctx.event.provider == 'nginx'"
  }
},
{
  "pipeline": {
    "name": "logs-pfsense.log-1.23.1-cron",
    "if": "ctx.event.provider == 'cron'"
  }
},
```

Then the if statement of the drop processor on line 113 is changed to this:
```json
"if": "![\"filterlog\", \"openvpn\", \"charon\", \"dhcpd\", \"dhclient\", \"dhcp6c\", \"unbound\", \"haproxy\", \"php-fpm\", \"squid\", \"snort\", \"nginx\", \"cron\"].contains(ctx.event?.provider)"
```
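As a rough illustration of what the Painless condition does, here is a Python sketch of the same membership check (illustrative only, not the actual pipeline code):

```python
# Rough Python approximation of the Painless drop condition:
# an event is dropped when its provider is NOT in the known list.
KNOWN_PROVIDERS = {
    "filterlog", "openvpn", "charon", "dhcpd", "dhclient", "dhcp6c",
    "unbound", "haproxy", "php-fpm", "squid", "snort", "nginx", "cron",
}

def should_drop(provider):
    """Mirror of the "if" condition: True means the drop processor fires."""
    return provider not in KNOWN_PROVIDERS
```

Note that a missing provider (null in Painless, None here) also hits the drop branch, which is exactly the case the catch-all pipeline further below is meant to rescue.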

The grok pattern of the "Parse syslog header" processor on line 31 gets some changes to accommodate process paths if they are sent along with the process, plus a second processor that derives process.name from the path:


```json
{
  "grok": {
    "field": "event.original",
    "patterns": [
      "^(%{ECS_SYSLOG_PRI})?%{TIMESTAMP} %{GREEDYDATA:message}"
    ],
    "pattern_definitions": {
      "ECS_SYSLOG_PRI": "<%{NONNEGINT:log.syslog.priority:long}>(\\d )?",
      "TIMESTAMP": "(?:%{BSD_TIMESTAMP_FORMAT}|%{SYSLOG_TIMESTAMP_FORMAT})",
      "BSD_TIMESTAMP_FORMAT": "%{SYSLOGTIMESTAMP:_tmp.timestamp}(%{SPACE}%{BSD_PROCNAME}|%{SPACE}%{OBSERVER}%{SPACE}%{BSD_PROCNAME})(\\[%{POSINT:process.pid:long}\\])?:",
      "BSD_PROCNAME": "(?:%{UNIXPATH:process.executable}|%{NAME:process.name}|\\(%{NAME:process.name}\\))",
      "NAME": "[[[:alnum:]]_./-]+",
      "SYSLOG_TIMESTAMP_FORMAT": "%{TIMESTAMP_ISO8601:_tmp.timestamp8601}%{SPACE}%{OBSERVER}%{SPACE}%{PROCESS}%{SPACE}(%{POSINT:process.pid:long}|-) - (-|%{META})",
      "TIMESTAMP_ISO8601": "%{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE:event.timezone}?",
      "OBSERVER": "(?:%{IP:observer.ip}|%{HOSTNAME:observer.name})",
      "UNIXPATH": "(/([\\w_%!$@:.,+~-]+|\\\\.)*)*",
      "PROCESS": "(?:\\(%{NAME:process.name}\\)|(?:%{UNIXPATH})%{BASEPATH:process.name}|%{UNIXPATH:process.executable})",
      "BASEPATH": "[[[:alnum:]]_%!$@:.,+~-]+",
      "META": "\\[[^\\]]*\\]"
    },
    "description": "Parse syslog header"
  }
},
{
  "grok": {
    "field": "process.executable",
    "patterns": [
      "(?:.*/)?(?<process.name>[^/]+)$"
    ],
    "ignore_missing": true,
    "if": "ctx.process?.name == null && ctx.process?.executable != null",
    "ignore_failure": true,
    "description": "Fills process.name with process.executable"
  }
}
```
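The second processor is effectively a basename extraction. A minimal Python equivalent of its pattern, for reference (illustrative only):

```python
import re

# Illustrative equivalent of the "(?:.*/)?(?<process.name>[^/]+)$" grok:
# take the last path component of process.executable as process.name.
_BASENAME = re.compile(r"(?:.*/)?(?P<name>[^/]+)$")

def executable_to_name(executable):
    """Return the basename of an executable path, or None if nothing matches."""
    m = _BASENAME.match(executable)
    return m.group("name") if m else None
```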

And I added two pipelines, the first one for nginx:
```json
[
  {
    "grok": {
      "field": "message",
      "patterns": [
        "%{IPORHOST:client_ip} - - \\[%{HTTPDATE:timestamp}\\] \"%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:http_version}\" %{NUMBER:response:int} (?:%{NUMBER:bytes:int}|-) \"%{DATA:referrer}\" \"%{DATA:user_agent.original}\""
      ]
    }
  },
  {
    "lowercase": {
      "field": "network.protocol",
      "ignore_missing": true
    }
  },
  {
    "user_agent": {
      "field": "user_agent.original",
      "target_field": "user_agent",
      "ignore_missing": true
    }
  }
]
```
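As a quick sanity check, the nginx grok pattern can be approximated in Python and run against a line like the captured test event below. This is a loose approximation: IPORHOST, HTTPDATE and the other grok primitives are replaced with simpler generic regexes.

```python
import re

# Rough Python approximation of the nginx access-log grok pattern above.
NGINX_LINE = re.compile(
    r'(?P<client_ip>\S+) - - \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\w+) (?P<request>\S+) HTTP/(?P<http_version>[\d.]+)" '
    r'(?P<response>\d+) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

sample = ('192.168.1.1 - - [06/Nov/2025:14:30:50 +0100] '
          '"GET /pfblockerng/pfblockerng_dnsbl.php HTTP/2.0" 200 51559 '
          '"https://192.168.1.1/pfblockerng/pfblockerng_update.php" '
          '"Mozilla/5.0 (Windows NT 10.0; Win64; x64)"')

m = NGINX_LINE.match(sample)
```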

The second one for cron (I have removed what I believe were stray escape characters from my copy-paste):
```json
[
  {
    "grok": {
      "field": "message",
      "patterns": [
        "^\\(%{DATA:user.name}\\) CMD \\((?<process.command_line>.+)\\)$"
      ]
    }
  },
  {
    "lowercase": {
      "field": "network.protocol",
      "ignore_missing": true
    }
  }
]
```
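The cron pattern can likewise be sanity-checked with a rough Python equivalent against the captured cron test event below (again a simplification: %{DATA:user.name} is approximated here as "anything up to the closing parenthesis"):

```python
import re

# Rough Python equivalent of the cron grok pattern:
# "(root) CMD (/usr/sbin/newsyslog)" -> user name and command line.
CRON_CMD = re.compile(r"^\((?P<user>[^)]+)\) CMD \((?P<command>.+)\)$")

m = CRON_CMD.match("(root) CMD (/usr/sbin/newsyslog)")
```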

For the last part I really want to change the drop into a catch-all. Maybe someone with a bit more knowledge can turn this into a toggle for the integration, but to change the drop into parsing I have the following:


Instead of the drop on line 111, the following:
```json
{
  "pipeline": {
    "name": "logs-pfsense.log-1.23.1-other",
    "if": "![\"filterlog\", \"openvpn\", \"charon\", \"dhcpd\", \"dhclient\", \"dhcp6c\", \"unbound\", \"haproxy\", \"php-fpm\", \"squid\", \"snort\", \"nginx\", \"cron\"].contains(ctx.event?.provider)"
  }
},
```
Then a separate pipeline:
```json
[
  {
    "lowercase": {
      "field": "network.protocol",
      "ignore_missing": true
    }
  },
  {
    "append": {
      "field": "tags",
      "allow_duplicates": false,
      "value": [
        "pfsense_other"
      ]
    }
  }
]
```

A couple of test events (wireshark capture):
nginx:
[…] Syslog message: LOCAL5.INFO: Nov 6 14:30:50 nginx: 192.168.1.1 - - [06/Nov/2025:14:30:50 +0100] "GET /pfblockerng/pfblockerng_dnsbl.php HTTP/2.0" 200 51559 "https://192.168.1.1/pfblockerng/pfblockerng_update.php" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36"

cron:
Syslog message: CRON.INFO: Nov 6 14:30:00 /usr/sbin/cron[12345]: (root) CMD (/usr/sbin/newsyslog)

Another random non-parsed/dropped message:
Syslog message: DAEMON.INFO: Nov 6 14:00:00 lighttpd_pfb[12345]: [pfBlockerNG] DNSBL Webserver stopped

I hope all this information helps in improving the pfSense integration. Please let me know if I can test something or if I need to supply more information.
