2. Complete Guide for Developers
BuffaLogs is a project for detecting anomalies in login data. The login data is collected and stored on Elasticsearch, where it can be viewed with Kibana; BuffaLogs then analyzes these logins with its internal logic to find unusual activity. The system can trigger three types of alerts: impossible travel, login from a new device, and login from a new country.
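To make the first alert type concrete, here is a minimal, self-contained sketch of the impossible-travel idea. This is illustrative only, not BuffaLogs' actual implementation, and the 800 km/h speed threshold is an assumption:

```python
# Minimal sketch of the impossible-travel check: flag a pair of consecutive
# logins if the speed needed to move between them exceeds a threshold.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_impossible_travel(prev_login, new_login, max_speed_kmh=800):
    """prev_login/new_login are dicts with 'lat', 'lon' and a datetime 'timestamp'."""
    distance = haversine_km(prev_login["lat"], prev_login["lon"],
                            new_login["lat"], new_login["lon"])
    hours = (new_login["timestamp"] - prev_login["timestamp"]).total_seconds() / 3600
    return hours > 0 and distance / hours > max_speed_kmh
```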
Workflow steps for developers:

1. All developers must set up the environment correctly → follow the System Configuration guide.

   1a. If you'd like to make front-end changes, you only need to query the APIs → upload the fixture data and read the APIs section for more details.

   1b. Otherwise, if you want to work on back-end development and generate new data, please check the BuffaLogs detection section.
Once you've cloned the project on your local device with `git clone git@github.com:certego/BuffaLogs.git`, follow the steps below in order to set up the environment correctly.
- Configure the Django admin.

  a. Apply the migrations to the database schema with `./manage.py migrate`

  b. Create a superuser account (a user who has all permissions) with `./manage.py createsuperuser`

  c. Start the web server on the local machine by launching `./manage.py runserver`

  At this point, use the credentials set above to log in at localhost:8000/admin/, where you will see the tables used by BuffaLogs for its detection logic. They are still empty because the BuffaLogs detection hasn't run yet.
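To confirm the tables are empty (or to inspect them later), you can query the models from `./manage.py shell`. The app and model names below are assumptions based on the admin tables, not guaranteed:

```python
# Run inside `./manage.py shell`. The app label "impossible_travel" and the
# model names Alert, Login, User are assumed from the admin interface.
from impossible_travel.models import Alert, Login, User

print(User.objects.count(), Login.objects.count(), Alert.objects.count())
# Expected output before any detection run: 0 0 0
```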
If this is the first time you're running BuffaLogs, you need to configure Elasticsearch first:
- Generate login data.

  Run the `python random_example.py` script in the examples folder in order to create random login data in Elasticsearch. This simulates audit logs that forward events to our Elasticsearch.
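Conceptually, each generated event is just a JSON document indexed into one of the test indexes. Here is a hedged sketch of what that could look like with the elasticsearch-py client; the field names and index name are illustrative, not the script's exact schema:

```python
# Index one fake login event into Elasticsearch (elasticsearch-py 8.x style).
# Field names below are illustrative assumptions, not random_example.py's schema.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
login_event = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "user": {"name": "alice"},
    "source": {"ip": "203.0.113.10", "geo": {"location": {"lat": 45.4, "lon": 9.2}}},
    "event": {"outcome": "success"},
}
es.index(index="cloud-test_data-2023-08-01", document=login_event)
```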
- Configure Elasticsearch.

  a. Create index patterns. Browse Kibana at localhost:5601/ and create the index patterns by selecting Stack Management → Index Patterns. The index patterns needed to visualize all the random logins correctly are:

  | Name | Timestamp field | Source |
  |------|-----------------|--------|
  | cloud-* | @timestamp | This index pattern should match the source cloud-test_data- |
  | fw-proxy-* | @timestamp | This index pattern should match the source fw-proxy-test_data- |
  | weblog-* | @timestamp | This index pattern should match the source weblog-test_data- |

  b. Load the templates. Run the `./load_templates.sh` script in the /config/elasticsearch/ directory to upload the index template. Now you should see the example template on Kibana under Stack Management → Index Management → Index Templates.

  c. You can visualize all the login data on Kibana in the Discover section, separated by index.
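Under the hood, loading a template is a PUT to Elasticsearch's _index_template API. A hedged sketch follows, with an illustrative template name and body; the repository's real template differs:

```python
# Upload an index template covering the three test index patterns.
# The template name and mapping below are assumptions for illustration.
import requests

template = {
    "index_patterns": ["cloud-*", "fw-proxy-*", "weblog-*"],
    "template": {"mappings": {"properties": {"@timestamp": {"type": "date"}}}},
}
resp = requests.put(
    "http://localhost:9200/_index_template/example_template",
    json=template,
)
print(resp.status_code, resp.json())
```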
Then, it is possible to run the detection manually via a management command or automatically thanks to Celery.
You can start the detection with the impossible_travel management command.
If you simply launch `./manage.py impossible_travel`, it will start the detection analyzing the Elasticsearch login data saved in the previous half hour. The command also creates an entry in the TaskSettings table to record the start and end datetimes on which the detection has run.
If you'd like to run the detection on a specific time frame, pass the start and end arguments to the command, for example: `./manage.py impossible_travel '2023-08-01 14:00:00' '2023-08-01 14:30:00'`. After that, you'll see the alerts returned by the BuffaLogs detection.
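The same runs can also be triggered programmatically, e.g. from `./manage.py shell` or a test, through Django's call_command, assuming the command accepts the same positional start/end arguments shown above:

```python
# Programmatic equivalent of the CLI invocations above.
from django.core.management import call_command

# Last-30-minutes run (no arguments):
call_command("impossible_travel")

# Explicit time frame, mirroring the example command:
call_command("impossible_travel", "2023-08-01 14:00:00", "2023-08-01 14:30:00")
```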
It's possible to execute the detection directly in the BuffaLogs container by running the same command above, after entering its shell with `docker compose -f docker-compose.yaml -f docker-compose.override.yaml -f docker-compose.elastic.yaml exec buffalogs bash`.
NOTE - Attention to timestamps (again): the alerts' created timestamp is relative to the moment at which you start the detection. This matters if you want to use the DRF APIs, because some of them refer to the timestamp of the login attempt and others to the timestamp at which the alert was triggered.
The BuffaLogs detection is a periodic Celery task.
If you want to run it automatically every 30 minutes, you have to start all the containers: `docker compose -f docker-compose.yaml -f docker-compose.override.yaml -f docker-compose.elastic.yaml up -d`.
This command also runs Celery, Celery beat and RabbitMQ in order to handle the analysis in an automated way.
This is the way BuffaLogs is used in production; you can run the random_example.py script to save new login data on Elasticsearch, which will be processed automatically every 30 minutes when the task runs.
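For reference, here is a hedged sketch of how such a 30-minute schedule is typically declared with Celery beat; the task path `buffalogs.tasks.process_logs` is an assumption, not necessarily the project's real task name:

```python
# Declare a periodic task that fires every 30 minutes via Celery beat.
from celery import Celery
from celery.schedules import crontab

# Broker URL assumes the default RabbitMQ credentials from the compose setup.
app = Celery("buffalogs", broker="amqp://guest:guest@localhost:5672//")

app.conf.beat_schedule = {
    "buffalogs-detection-every-30-minutes": {
        "task": "buffalogs.tasks.process_logs",  # assumed task path
        "schedule": crontab(minute="*/30"),
    },
}
```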
It's possible to run the `./manage.py loaddata alerts` command in order to load BuffaLogs data directly into the database. The data loaded this way can be viewed in the Django admin at localhost:8000/admin or from the GUI at localhost:8000/.
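For orientation, a Django fixture is a JSON list of serialized model instances. Below is a hypothetical sketch of one alert entry, with field names inferred from the APIs described below; the real fixture layout may differ:

```python
# Hypothetical shape of a single alert in the "alerts" fixture.
# The app label and exact fields are assumptions for illustration.
alert_fixture_entry = {
    "model": "impossible_travel.alert",  # assumed app label
    "pk": 1,
    "fields": {
        "created": "2023-08-01T14:50:00Z",   # when the detection triggered the alert
        "login_raw_data": {                   # the analyzed login attempt
            "timestamp": "2023-08-01T14:00:00Z",
            "country": "IT",
            "lat": 45.46,
            "lon": 9.19,
        },
    },
}
```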
Attention to timestamps for API responses: the alerts triggered in this fixture relate to logins attempted between around Aug. 1, 2023, 2:00 p.m. UTC and Aug. 1, 2023, 2:05 p.m. UTC, BUT the alerts were triggered by the BuffaLogs detection between Aug. 1, 2023, 2:50 p.m. UTC and Aug. 1, 2023, 3:00 p.m. UTC.
So, what about the APIs? If you load this fixture and want to get all the data with the DRF APIs, use them in these ways:
- users_pie_chart_api is based on the `User.updated` field - http://localhost:8000/users_pie_chart_api/?start=2023-08-01T14:50:00Z&end=2023-08-01T14:55:00Z
- alerts_line_chart_api is based on the `Alert.login_raw_data["timestamp"]` timeframe - for example:
  - HOUR implementation: http://localhost:8000/alerts_line_chart_api/?start=2023-08-01T13:00:00Z&end=2023-08-01T16:00:00Z
  - DAY partition: http://localhost:8000/alerts_line_chart_api/?start=2023-07-31T00:00:00Z&end=2023-08-01T23:59:59Z
  - MONTH division: http://localhost:8000/alerts_line_chart_api/?start=2023-07-01T00:00:00Z&end=2023-08-31T23:59:59Z
- world_map_chart_api is based on the `Alert.login_raw_data["timestamp"]`, `Alert.login_raw_data["country"]`, `Alert.login_raw_data["lat"]` and `Alert.login_raw_data["lon"]` values - http://localhost:8000/world_map_chart_api/?start=2023-08-01T14:00:00Z&end=2023-08-01T14:05:00Z
- alerts_api is based on the `Alert.created` field - http://localhost:8000/alerts_api/?start=2023-08-01T14:50:00Z&end=2023-08-01T14:55:00Z
- risk_score_api is based on the `User.updated` value - http://localhost:8000/risk_score_api/?start=2023-08-01T14:50:00Z&end=2023-08-01T14:55:00Z
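As a quick client-side check, here is a minimal example that queries alerts_api over the fixture's alert-creation window using the `requests` package (assuming the endpoint returns a JSON list):

```python
# Query alerts_api for the window in which the fixture's alerts were created.
import requests

params = {"start": "2023-08-01T14:50:00Z", "end": "2023-08-01T14:55:00Z"}
resp = requests.get("http://localhost:8000/alerts_api/", params=params)
resp.raise_for_status()
for alert in resp.json():  # assumed to be a JSON list of alert objects
    print(alert)
```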
NOTE: To visualize the alerts triggered by BuffaLogs graphically, browse to localhost:8000/. BuffaLogs doesn't send new data to Elasticsearch: only the users' login attempts are stored there, and BuffaLogs fetches and analyzes them directly.