CCX Notification Writer service
- Description
- Building
- Configuration
- Usage
- Metrics
- Database
- Definition of Done for new features and fixes
- Testing
- BDD tests
- Benchmarks
- Package manifest
The main task of this service is to listen to the configured Kafka topic, consume all messages from that topic, and write OCP results (in JSON format) together with additional information (like organization ID, cluster name, Kafka offset, etc.) into a database table named `new_reports`. Multiple reports can be consumed and written into the database for the same cluster, because the primary (compound) key of the `new_reports` table is set to the combination (org_id, cluster, updated_at). When a message does not conform to the expected schema (for example, if `org_id` is missing for any reason), the message is dropped and an error message with all relevant information about the issue is stored in the log. Messages are expected to contain a `report` body represented as JSON. This body is shrunk before it is stored in the database so that the database remains relatively small.
Additionally, this service exposes several metrics about consumed and processed messages. These metrics can be aggregated by Prometheus and displayed by Grafana.
The overall architecture and the integration of this service into the whole pipeline are described in this document.
Use `make build` to build an executable file with this service.
All Makefile targets:
Usage: make <OPTIONS> ... <TARGETS>
Available targets are:
clean Run go clean
build Build binary containing service executable
build-cover Build binary with code coverage detection support
fmt Run go fmt -w for all sources
lint Run golint
vet Run go vet. Report likely mistakes in source code
cyclo Run gocyclo
ineffassign Run ineffassign checker
shellcheck Run shellcheck
errcheck Run errcheck
goconst Run goconst checker
gosec Run gosec checker
abcgo Run ABC metrics checker
style Run all the formatting related commands (fmt, vet, lint, cyclo) + check shell scripts
run Build the project and executes the binary
test Run the unit tests
build-test Build native binary with unit tests and benchmarks
profiler Run the unit tests with profiler enabled
benchmark Run benchmarks
benchmark.csv Export benchmark results into CSV
cover Generate HTML pages with code coverage
coverage Display code coverage on terminal
bdd_tests Run BDD tests
before_commit Checks done before commit
function_list List all functions in generated binary file
help Show this help screen
Configuration is described in this document
Provided a valid configuration, you can start the service with ./ccx-notification-writer
List of all available command line options:
-authors
show authors
-check-kafka
check connection to Kafka
-db-cleanup
perform database cleanup
-db-drop-tables
drop all tables from database
-db-init
perform database initialization
-db-init-migration
initialize migration
-max-age string
max age for displaying/cleaning old records
-migrate string
set database version
-migration-info
prints migration info
-new-reports-cleanup
perform new reports clean up
-old-reports-cleanup
perform old reports clean up
-print-new-reports-for-cleanup
print new reports to be cleaned up
-print-old-reports-for-cleanup
print old reports to be cleaned up
-show-configuration
show configuration
-version
show version
To start the service, just run `./ccx-notification-writer` from the CLI.
It is possible to clean up old records from the `new_reports` and `reported` tables. To do so, use the following CLI options:
./ccx-notification-writer -old-reports-cleanup --max-age="30 days"
to perform cleanup of the `reported` table, or
./ccx-notification-writer -new-reports-cleanup --max-age="30 days"
to perform cleanup of the `new_reports` table.
Additionally, it is possible to just display old reports without touching the database tables:
./ccx-notification-writer -print-old-reports-for-cleanup --max-age="30 days"
or in case of new reports:
./ccx-notification-writer -print-new-reports-for-cleanup --max-age="30 days"
It is possible to use the `/metrics` REST API endpoint to read all metrics exposed to Prometheus or to any tool compatible with it. Currently, the following metrics are exposed:
- `notification_writer_check_last_checked_timestamp` - the total number of messages with last checked timestamp
- `notification_writer_check_schema_version` - the total number of messages with successful schema check
- `notification_writer_consumed_messages` - the total number of messages consumed from Kafka
- `notification_writer_consuming_errors` - the total number of errors during consuming messages from Kafka
- `notification_writer_marshal_report` - the total number of marshaled reports
- `notification_writer_parse_incoming_message` - the total number of parsed messages
- `notification_writer_shrink_report` - the total number of shrunk reports
- `notification_writer_stored_messages` - the total number of messages stored into the database
- `notification_writer_stored_bytes` - the total number of bytes stored into the database
For service running locally:
curl localhost:8080/metrics | grep ^notification_writer
A PostgreSQL database is used as storage for new reports as well as for already-reported ones.
This service contains an implementation of a simple database migration mechanism that allows semi-automatic transitions between various database versions as well as building the latest version of the database from scratch.
The migration mechanism is described here.
The latest database schema is described in this document.
Please also look at the detailed schema description for more details about tables, indexes, and keys.
Check the PostgreSQL service status, start the service if needed, and connect to it:
service postgresql status
sudo service postgresql start
psql --user postgres
List all databases:
\l
Select the right database:
\c notification
List of tables:
\dt
List of relations
Schema | Name | Type | Owner
--------+--------------------+-------+----------
public | new_reports | table | postgres
public | notification_types | table | postgres
public | reported | table | postgres
public | states | table | postgres
public | migration_info | table | postgres
public | event_targets | table | postgres
public | read_errors | table | postgres
(7 rows)
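Based on the description earlier in this README, the `new_reports` table can be sketched roughly as below. Only the table name and the compound primary key (org_id, cluster, updated_at) are taken from this document; the column types and the `kafka_offset` column are assumptions, and the authoritative schema lives in the migration and schema documents referenced above:

```
-- Sketch only: column types and the kafka_offset column are assumptions.
CREATE TABLE new_reports (
    org_id       INTEGER   NOT NULL,
    cluster      CHAR(36)  NOT NULL,
    report       VARCHAR   NOT NULL,  -- shrunk JSON report body
    updated_at   TIMESTAMP NOT NULL,
    kafka_offset BIGINT    NOT NULL DEFAULT 0,
    PRIMARY KEY (org_id, cluster, updated_at)
);
```

The compound key is what allows multiple reports for the same cluster to coexist, since rows differ in their `updated_at` value.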
Please look at the DoD.md document for the definition of done for new features and fixes.
Tests and their configuration are described in this document.
Behaviour tests for this service are included in the Insights Behavioral Spec repository. In order to run these tests, the following steps need to be taken:
- clone the Insights Behavioral Spec repository
- go into the cloned insights-behavioral-spec subdirectory
- run notification_writer_tests.sh from this subdirectory
List of all test scenarios prepared for this service is available at https://redhatinsights.github.io/insights-behavioral-spec/feature_list.html#ccx-notification-writer
Benchmarks and their preparation and configuration are described in this document.
Package manifest is available at docs/manifest.txt.