This is a Python microservice created using FastAPI that provides a REST API for the Inventory Management System.
This microservice requires a MongoDB instance to run against.
- Docker and Docker Compose installed (if you want to run the microservice inside Docker)
- Python 3.12 and MongoDB 7.0 installed on your machine (if you are not using Docker)
- Public key (must be OpenSSH encoded) to decode JWT access tokens (if JWT authentication/authorization is enabled)
- MongoDB Compass installed (if you want to interact with the database using a GUI)
- This repository cloned
- Create a `.env` file alongside the `.env.example` file. Use the example file as a reference and modify the values accordingly:

  ```bash
  cp .env.example .env
  ```
- Create a `logging.ini` file alongside the `logging.example.ini` file. Use the example file as a reference and modify it accordingly:

  ```bash
  cp logging.example.ini logging.ini
  ```
- (Required only if JWT Auth is enabled) Inside the `keys` directory in the root of the project directory, create a copy of the public key generated by the authentication component. This is needed for decoding JWT access tokens signed by the corresponding private key.
Ensure that Docker is installed and running on your machine before proceeding.
The easiest way to run the application with Docker for local development is using the `docker-compose.yml` file. It is configured to start:

- A MongoDB instance that can be accessed at `localhost:27017` using `root` as the username and `example` as the password
- The application in reload mode, which, thanks to the mounted `inventory_management_system_api` directory, means that FastAPI will watch for changes made to the code and automatically reload the application on the fly
- Build and start the Docker containers:

  ```bash
  docker compose up
  ```
The microservice should now be running inside Docker at http://localhost:8000 and its Swagger UI can be accessed at http://localhost:8000/docs. A MongoDB instance should also be running at `localhost:27017`.
- Follow the post setup instructions.
Use the `Dockerfile`'s `dev` stage to run just the application itself in a container. Use this only for local development (not production)! Mounting the `inventory_management_system_api` directory to the container via a volume means that FastAPI will watch for changes made to the code and automatically reload the application on the fly. The application requires a MongoDB instance to run against, and one can be started using the `docker-compose.yml` file.
- Start a MongoDB instance:

  ```bash
  docker compose up --detach mongo-db
  ```
- Build an image using the `Dockerfile`'s `dev` stage from the root of the project directory:

  ```bash
  docker build --file Dockerfile --target dev --tag inventory-management-system-api:dev .
  ```
- Start the container using the image built and map it to port `8000` locally (please note that the public key volume is only needed if JWT Auth is enabled):

  ```bash
  docker run \
      --publish 8000:8000 \
      --name inventory-management-system-api \
      --network host \
      --env-file ./.env \
      --volume ./inventory_management_system_api:/app/inventory_management_system_api \
      --volume ./keys/jwt-key.pub:/app/keys/jwt-key.pub \
      --volume ./logging.ini:/app/logging.ini \
      inventory-management-system-api:dev
  ```
The microservice should now be running inside Docker at http://localhost:8000 and its Swagger UI can be accessed at http://localhost:8000/docs.
- Follow the post setup instructions.
Use the `Dockerfile`'s `test` stage to run all the tests in a container. Mounting the `inventory_management_system_api` and `test` directories to the container via volumes means that any changes made to the application or test code will automatically be synced to the container next time you run the tests. The e2e tests require a MongoDB instance to run, and one can be started using the `docker-compose.yml` file.
- Start a MongoDB instance:

  ```bash
  docker compose up --detach mongo-db
  ```
- Build an image using the `Dockerfile`'s `test` stage from the root of the project directory:

  ```bash
  docker build --file Dockerfile --target test --tag inventory-management-system-api:test .
  ```
- Run the tests using:

  ```bash
  docker run \
      --rm \
      --name inventory-management-system-api-test \
      --network host \
      --volume ./inventory_management_system_api:/app/inventory_management_system_api \
      --volume ./test:/app/test \
      --volume ./logging.ini:/app/logging.ini \
      inventory-management-system-api:test
  ```
Use the `Dockerfile`'s `test` stage to run the unit tests in a container. Mounting the `inventory_management_system_api` and `test` directories to the container via volumes means that any changes made to the application or test code will automatically be synced to the container next time you run the tests.
- Build an image using the `Dockerfile`'s `test` stage from the root of the project directory:

  ```bash
  docker build --file Dockerfile --target test --tag inventory-management-system-api:test .
  ```
- Run the tests using:

  ```bash
  docker run \
      --rm \
      --name inventory-management-system-api-test \
      --volume ./inventory_management_system_api:/app/inventory_management_system_api \
      --volume ./test:/app/test \
      --volume ./logging.ini:/app/logging.ini \
      inventory-management-system-api:test \
      pytest --config-file test/pytest.ini --cov inventory_management_system_api --cov-report term-missing test/unit -v
  ```
Use the `Dockerfile`'s `test` stage to run the e2e tests in a container. Mounting the `inventory_management_system_api` and `test` directories to the container via volumes means that any changes made to the application or test code will automatically be synced to the container next time you run the tests. The e2e tests require a MongoDB instance to run, and one can be started using the `docker-compose.yml` file.
- Start a MongoDB instance:

  ```bash
  docker compose up --detach mongo-db
  ```
- Build an image using the `Dockerfile`'s `test` stage from the root of the project directory:

  ```bash
  docker build --file Dockerfile --target test --tag inventory-management-system-api:test .
  ```
- Run the tests using:

  ```bash
  docker run \
      --rm \
      --name inventory-management-system-api-test \
      --network host \
      --volume ./inventory_management_system_api:/app/inventory_management_system_api \
      --volume ./test:/app/test \
      --volume ./logging.ini:/app/logging.ini \
      inventory-management-system-api:test \
      pytest --config-file test/pytest.ini test/e2e -v
  ```
- You must have access to a MongoDB instance with at least one replica set.
Ensure that Python is installed on your machine before proceeding.
- Create a Python virtual environment and activate it in the root of the project directory:

  ```bash
  python -m venv venv
  source venv/bin/activate
  ```
- Install the required dependencies using pip:

  ```bash
  pip install .[dev]
  pip install -r requirements.txt
  ```
- Start the application:

  ```bash
  fastapi dev inventory_management_system_api/main.py
  ```
The microservice should now be running locally at http://localhost:8000. The Swagger UI can be accessed at http://localhost:8000/docs.
- Follow the post setup instructions.
- To run the unit tests, run:

  ```bash
  pytest -c test/pytest.ini test/unit/
  ```
- To run the e2e tests, run:

  ```bash
  pytest -c test/pytest.ini test/e2e/
  ```
- To run all the tests, run:

  ```bash
  pytest -c test/pytest.ini test/
  ```
For development, replica sets are required in order to use transactions. To set up for use with a single replica set, you should first generate a keyfile to use, e.g.
```bash
openssl rand -base64 756 > /etc/mongodb/keys/rs_keyfile
chmod 0400 /etc/mongodb/keys/rs_keyfile
sudo chown 999:999 /etc/mongodb/keys/rs_keyfile
```
Then, when starting the MongoDB instance, ensure you also use the options below:

```bash
mongod --replSet rs0 --keyFile /etc/mongodb/keys/rs_keyfile
```
Once the MongoDB instance is running, use `mongosh` to log in and run:

```javascript
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "<hostname>:27017" }
  ]
})
```

replacing `<hostname>` with the actual hostname for the replica set.
The simplest way to populate the database with mock data is to use the already created database dump. If using Docker for development, you may use:

```bash
python ./scripts/dev_cli.py db-import
```

to populate the database with mock data.
If you wish to do this manually, the full command is:

```bash
docker exec -i ims-api-mongodb mongorestore --username "root" --password "example" --authenticationDatabase=admin --db ims --archive < ./data/mock_data.dump
```
Otherwise, there is a script to generate mock data for testing purposes in `./scripts/generate_mock_data.py`. To use it from your development environment, first ensure the API is running and then execute it with:

```bash
python ./scripts/generate_mock_data.py
```
The easiest way to generate new mock data, assuming you are using Linux, is via the `dev_cli` script. To do this, use:

```bash
python ./scripts/dev_cli.py db-generate
```

This will clear the database, import the default data (e.g. units) and then generate mock data. If the `generate_mock_data.py` script is changed, or if there are database model changes, please use:

```bash
python ./scripts/dev_cli.py db-generate -d
```

to update the `./data/mock_data.dump` file and commit the changes.
The parameters at the top of the `generate_mock_data.py` file can be used to change the generated data. NOTE: this script will simply add to the existing database instance, so if you wish to update `mock_data.dump`, you should first clear the database, e.g. using:

```bash
docker exec -i ims-api-mongodb mongosh ims --username "root" --password "example" --authenticationDatabase=admin --eval "db.dropDatabase()"
```

Then generate the mock data using:

```bash
python ./scripts/generate_mock_data.py
```

and then update the `./data/mock_data.dump` file using `mongodump` via:

```bash
docker exec -i ims-api-mongodb mongodump --username "root" --password "example" --authenticationDatabase=admin --db ims --archive > ./data/mock_data.dump
```
The configuration for the application is handled using Pydantic Settings, which allows config values to be loaded from environment variables or the `.env` file. Please note that even when using the `.env` file, Pydantic will still read environment variables as well, and environment variables will always take priority over values loaded from the `.env` file.
Listed below are the environment variables supported by the application.

| Environment Variable | Description | Mandatory | Default Value |
| --- | --- | --- | --- |
| `API__TITLE` | The title of the API which is added to the generated OpenAPI. | No | `Inventory Management System API` |
| `API__DESCRIPTION` | The description of the API which is added to the generated OpenAPI. | No | `This is the API for the Inventory Management System` |
| `API__ROOT_PATH` | (If using a proxy) The path prefix handled by a proxy that is not seen by the app. | No | |
| `API__ALLOWED_CORS_HEADERS` | The list of headers that are allowed to be included in cross-origin requests. | Yes | |
| `API__ALLOWED_CORS_ORIGINS` | The list of origins (domains) that are allowed to make cross-origin requests. | Yes | |
| `API__ALLOWED_CORS_METHODS` | The list of methods that are allowed to be used to make cross-origin requests. | Yes | |
| `AUTHENTICATION__ENABLED` | Whether JWT auth is enabled. | Yes | |
| `AUTHENTICATION__PUBLIC_KEY_PATH` | The path to the public key to be used for decoding JWT access tokens signed by the corresponding private key. | If JWT auth enabled | |
| `AUTHENTICATION__JWT_ALGORITHM` | The algorithm to use to decode the JWT access token. | If JWT auth enabled | |
| `DATABASE__PROTOCOL` | The protocol component (i.e. `mongodb`) to use for the connection string for the `MongoClient` to connect to the database. | Yes | |
| `DATABASE__USERNAME` | The database username to use for the connection string for the `MongoClient` to connect to the database. | Yes | |
| `DATABASE__PASSWORD` | The database password to use for the connection string for the `MongoClient` to connect to the database. | Yes | |
| `DATABASE__HOST_AND_OPTIONS` | The host (and optional port number) component as well as specific options (if any) to use for the connection string for the `MongoClient` to connect to the database. The host component is the name or IP address of the host where the mongod instance is running, whereas the options are `<name>=<value>` pairs (i.e. `?authMechanism=SCRAM-SHA-256&authSource=admin`) specific to the connection. If specified, only the value of `readPreference=primary` should be used. | Yes | |
| `DATABASE__NAME` | The name of the database to use for the `MongoClient` to connect to the database. | Yes | |
| `OBJECT_STORAGE__ENABLED` | Whether the API is using the Object Storage API to allow attachment and image uploads for catalogue items, items, and systems. | Yes | |
| `OBJECT_STORAGE__API_REQUEST_TIMEOUT_SECONDS` | The maximum number of seconds that the request should wait for a response from the Object Storage API before timing out. | If Object Storage enabled | |
| `OBJECT_STORAGE__API_URL` | The URL of the Object Storage API. | If Object Storage enabled | |
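To illustrate how the `DATABASE__*` values might fit together, here is a hypothetical helper (the application may assemble its connection string differently; the function name and example values are illustrative):

```python
from urllib.parse import quote_plus


def build_connection_string(protocol: str, username: str, password: str, host_and_options: str) -> str:
    # Percent-encode credentials so special characters survive in the URI
    return f"{protocol}://{quote_plus(username)}:{quote_plus(password)}@{host_and_options}"


# Values matching the local docker-compose setup described above
uri = build_connection_string(
    "mongodb",
    "root",
    "example",
    "localhost:27017/?authMechanism=SCRAM-SHA-256&authSource=admin",
)
print(uri)
```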
This microservice supports JWT authentication/authorization, which can be enabled or disabled by setting the `AUTHENTICATION__ENABLED` environment variable to `True` or `False`. When enabled, all the endpoints require a JWT access token to be supplied. This ensures that only authenticated and authorized users can access the resources. To decode the JWT access token, the application needs the public key corresponding to the private key used for encoding the token. Once the JWT access token is decoded successfully, the application checks that it has a `username` in the payload and that it has not expired. This means that any microservice can be used to generate JWT access tokens so long as it meets the above criteria. The LDAP-JWT Authentication Service is a microservice that provides user authentication against an LDAP server and returns a JWT access token.
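A minimal sketch of that decode step using PyJWT (assuming an RS256-style setup; the throwaway key pair and claims below are illustrative stand-ins — in the deployed service the public key is loaded from `AUTHENTICATION__PUBLIC_KEY_PATH` instead):

```python
import time

import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a throwaway key pair standing in for the authentication
# component's real keys (illustrative only)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

# Sign a token the way an auth service might; only `username` and `exp`
# are claims the text above actually describes
token = jwt.encode(
    {"username": "alice", "exp": int(time.time()) + 60},
    private_key,
    algorithm="RS256",
)

# Decode with the public key; PyJWT raises ExpiredSignatureError if `exp` has passed
payload = jwt.decode(token, public_pem, algorithms=["RS256"])
assert "username" in payload  # the payload check described above
```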
To add a migration, first use:

```bash
ims-migrate create <migration_name> <migration_description>
```

to create a new one inside the `inventory_management_system/migrations/scripts` directory. Then add the code necessary to perform the migration. See `_example_migration.py` for an example of how to implement one.
Before performing a migration, you can first check the current status of the database and any outstanding migrations using:

```bash
ims-migrate status
```

or, in Docker:

```bash
docker exec -it inventory-management-system-api ims-migrate status
```
Then, to perform all outstanding migrations up to the latest one, use:

```bash
ims-migrate forward latest
```

You may also specify a specific migration name to apply instead, which will apply all migrations between the currently applied one and the specified one. A prompt will be shown to ensure the migrations being applied are sensible.
To revert the database by performing backward migrations, you can first use:

```bash
ims-migrate status
```

to check the current status of the database and available migrations, and then use:

```bash
ims-migrate backward <migration_name>
```

to perform all backward migrations needed to get from the current database state back to the state prior to the chosen migration name (i.e. it also performs the backward migration for the given migration name).
If for some reason the migration state is different to what you expect, it may be forced via:

```bash
ims-migrate set <migration_name>
```

This is already set to `latest` automatically when using the `dev_cli` to regenerate mock data, so that the dump retains the expected state.