Docker Compose Sandbox (DCS)

An attempt to build knowledge about containerization, Python, and GitHub Actions. This repo began as a series of mostly unrelated experiments. Eventually, they coalesced into a simple concept.

Below are some features of DCS:

  1. Service provides a greeting from a "worker"
  2. Workers are protected by an intermediary
  3. Tests help keep workers healthy
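To make the "greeting from a worker" idea concrete, here is a minimal sketch of what such a worker could look like. This is a hypothetical illustration, not the actual implementation in this repo; the function and greeting list are invented, but they capture why refreshing the page shows different text each time.

```python
import random

# Hypothetical sketch of a DCS-style worker: each request gets a greeting
# that varies, so refreshing the page shows different text every time.
# The names below are invented for illustration only.
GREETINGS = ["Hello", "Hi", "Hey", "Howdy", "Greetings"]

def get_greeting(name: str = "worker") -> str:
    """Return a randomly chosen greeting addressed to the given name."""
    return f"{random.choice(GREETINGS)}, {name}!"

print(get_greeting())
```

In the real system, a response like this would be served over HTTP and reached through the intermediary rather than called directly.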

Below is a high-level diagram of the software in this repo and some tools that support its ongoing development. Note that for maintenance purposes the editable diagram is embedded in the image file at doc/resources/sandbox_overview.drawio.png. To update the diagram, use the web-based WYSIWYG editor at https://diagrams.net/ to arrange boxes, lines, etc.


Get Started

The following is a guide to get DCS up and running. It also describes how to test the software.

  1. Clone this repo. Open a shell and enter the following command:
     git clone https://github.com/bunchofstring/dcs.git
     Guide: https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository
  2. Install Docker. Docker Desktop version 4.12.0 or greater is recommended (it includes Docker 20.10.17 or greater). Ref: https://docs.docker.com/get-docker/
  3. Run the DCS containers. Open a shell, navigate to the repo cloned in step 1, and enter the following command:
     docker compose up -d
  4. Observe DCS in action! Open a web browser and navigate to http://127.0.0.1:8080/worker. The page is generated by the services started in the previous step. Refresh the page and notice that the text changes every time.
  5. (Optional) Install Python, version 3.11 or greater. This step is only required to run the Python-based worker or its tests outside of a container. Ref: https://www.python.org/downloads/

Test It!

In addition to the informal manual testing that takes place during development, this repo includes a set of automated tests. The commands in this section can be executed from a shell in the directory where the repo was cloned (named "dcs" by default).

The automated tests are categorized by type: system, integration, or unit. The test functions themselves are annotated with pytest markers of the same names. For the marker definitions, see pytest.ini.
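Marker definitions like these live in pytest.ini under the standard `markers` option. The fragment below is a sketch of what such a file typically looks like; the exact descriptions in this repo may differ.

```ini
[pytest]
markers =
    system: end-to-end tests that exercise the running containers
    integration: tests that cross a component boundary
    unit: fast, isolated tests of a single function or class
```

A test function opts into a category with a decorator such as `@pytest.mark.unit`, which is what makes selection via `pytest -m unit` possible.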

This taxonomy highlights the nature of each test. Even this basic structure can reinforce good practices and provide a basis for measurement. Note that there are other ways to categorize the tests (e.g. smoke, sanity, performance, etc.), but the main point is to start measuring! Measurement can improve any practice and helps tackle difficult questions as a product matures. Which tests are the most valuable? Which ones are the most expensive to write and execute?

Below is a command to quickly count tests of each type. In the output, the first number on each line indicates the number of tests of that type. Note that the classic "test pyramid" shape is a good guideline, but it is not a strict requirement (ref: https://martinfowler.com/articles/2021-test-shapes.html).

pytest -m system --collect-only | grep "tests collected" && \
pytest -m integration --collect-only | grep "tests collected" && \
pytest -m unit --collect-only | grep "tests collected"
Sample output from the above command
System (4), integration (2), and unit (3) tests.
================= 4/9 tests collected (5 deselected) in 0.03s ==================
================= 2/9 tests collected (7 deselected) in 0.03s ==================
================= 3/9 tests collected (6 deselected) in 0.03s ==================
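The counts can also be extracted programmatically. Below is a small sketch that parses pytest's summary lines, assuming output shaped like the sample above.

```python
import re

# Sample pytest --collect-only summary lines (copied from the output above).
summary_lines = [
    "================= 4/9 tests collected (5 deselected) in 0.03s ==================",
    "================= 2/9 tests collected (7 deselected) in 0.03s ==================",
    "================= 3/9 tests collected (6 deselected) in 0.03s ==================",
]

def collected_counts(lines):
    """Extract the selected-test count from each 'N/M tests collected' line."""
    pattern = re.compile(r"(\d+)/(\d+) tests collected")
    return [int(m.group(1)) for line in lines if (m := pattern.search(line))]

print(collected_counts(summary_lines))  # system, integration, unit counts
```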

Expected behavior

Staged testing helps facilitate fast feedback on new code changes. The commands below will find and execute tests with the associated marker.

pytest -m unit && \
pytest -m integration && \
pytest -m system

Note: The test categories above are intentionally ordered. The idea is to run many fast tests up front and identify problems as directly as possible.

Acceptable performance

Apache Bench can generate significant load on the system and provides human-readable results. Thanks to a kind Internet stranger named Jordi (https://github.com/jig/docker-ab), it is conveniently packaged and available on Docker Hub. Give it a try! The command below performs up to 10,000 requests (5,000 concurrent) within a 30-second time limit.

docker run --rm jordi/ab -t 30 -n 10000 -c 5000 -l http://host.docker.internal:8080/worker/

Tip: host.docker.internal is the recommended way to resolve a host machine's IP address in this context (ref: https://docs.docker.com/desktop/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host).

Sample output from the above command
The following is from a MacBook Pro, 2015 model
  • 2.2 GHz Quad-Core Intel Core i7
  • 16 GB 1600 MHz DDR3
  • macOS Monterey
Benchmarking host.docker.internal (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        nginx/1.23.1
Server Hostname:        host.docker.internal
Server Port:            8080

Document Path:          /worker/
Document Length:        Variable

Concurrency Level:      5000
Time taken for tests:   23.331 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      1361999 bytes
HTML transferred:       391999 bytes
Requests per second:    428.61 [#/sec] (mean)
Time per request:       11665.651 [ms] (mean)
Time per request:       2.333 [ms] (mean, across all concurrent requests)
Transfer rate:          57.01 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1  918 1331.2    454    7704
Processing:   117 1407 2200.5    568   23022
Waiting:        5 1383 2201.4    542   23021
Total:        286 2324 2589.7   1161   23272

Percentage of the requests served within a certain time (ms)
  50%   1161
  66%   1983
  75%   2402
  80%   2966
  90%   5277
  95%   8056
  98%  10505
  99%  13349
 100%  23272 (longest request)
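The derived figures in the report follow directly from the raw numbers. The sketch below sanity-checks the arithmetic using values from the sample output; small differences are expected because ab rounds the reported time taken.

```python
# Values taken from the sample Apache Bench report above.
requests = 10_000
concurrency = 5_000
time_taken_s = 23.331

# Requests per second: completed requests divided by total wall time.
rps = requests / time_taken_s                                  # ~428.6

# Mean time per request as seen by one of the concurrent clients:
# with 5000 in flight, each client waits roughly concurrency/rps seconds.
per_request_ms = time_taken_s * concurrency / requests * 1000  # ~11665.5

# Mean time per request across all concurrent requests combined.
across_all_ms = time_taken_s / requests * 1000                 # ~2.333

print(round(rps, 2), round(per_request_ms, 1), round(across_all_ms, 3))
```

Note how the per-client mean (~11.7 s) and the across-all mean (~2.3 ms) differ by exactly the concurrency factor; under heavy load the first number is what an individual user actually experiences.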

Lessons Learned

Pytest

Some useful Pytest tips:

  1. Verbose output for test execution. In the command to execute tests, add the following flags immediately after pytest:
    -vv --durations=0 -s

Docker and Docker Compose

A small collection of useful Docker commands:

  1. Disconnect all containers from the default network
    for i in $(docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' dcs_default); do docker network disconnect -f dcs_default "$i"; done
  2. Stop all containers
    docker stop $(docker container list -q)
  3. Live stats about resource usage
    docker stats
  4. Clean up "orphans"
    docker compose down --remove-orphans
  5. Stop, rebuild, then start the containers
    docker compose down && docker compose build && docker compose up -d
  6. See program output during test execution. Use the -s flag as shown in the example below
    python3.11 -m pytest -vv --durations=0 -s -m system
    
