Hardening_Sever is an Ansible codebase that applies a baseline security posture to Debian/Ubuntu hosts. It exposes reusable roles:

- `hardening` — OS-level lockdown covering sysctl, PAM, auditd, cron, authentication defaults, filesystem blacklisting, and legacy service removal.
Use the provided playbooks to roll out the baseline to fresh hosts or incorporate the roles into your own automation.
| Path | Description |
|---|---|
| `ansible.cfg` | Opinionated Ansible defaults (local inventory path, pipelining, host key checking disabled) for faster ad-hoc runs. |
| `inventory/inventory.yml` | Example inventory targeting the `servers` group. Replace host definitions with your infrastructure. |
| `playbooks/hardening.yml` | Entry point that applies the full operating-system hardening role to the `servers` group. |
| `roles/hardening/` | Reusable OS hardening role (tasks, templates, defaults, handlers). |
- Control node
  - Create a Python virtual environment for Ansible.
  - Ansible 2.12+ (tested with modern releases).
  - Python 3.x with `ansible-galaxy` available.
  - Create roles with `ansible-galaxy` as needed.
- Managed hosts
  - Debian/Ubuntu family (the defaults assume APT, `/etc/login.defs`, PAM profiles, etc.).
  - SSH connectivity with an account that can `become: true` (the role touches system files).
  - Package repositories reachable to install baseline packages (e.g., `auditd`, `libpam-passwdqc`).
- Update the example inventory:

  ```yaml
  all:
    children:
      servers:
        hosts:
          server_group1:
            ansible_host: ""
            ansible_user: ""
            ansible_port: ""
  ```
- (Optional) Test connectivity:

  ```bash
  ansible -i inventory/inventory.yml servers -m ping
  ```
- Run the baseline OS hardening:

  ```bash
  ansible-playbook -i inventory/inventory.yml playbooks/hardening.yml
  ```
Key actions include:
- Refresh package cache, install baseline security packages (`openssh-server`, `auditd`, `libpam-modules`, `libpam-passwdqc`), and purge legacy daemons such as `telnetd`/`rsh`/`xinetd`.
- Apply kernel/network sysctl defaults sourced from `defaults/main.yml` via `templates/sysctl-hardening.j2`, then trigger `sysctl --system` reloads.
- Lock down scheduled task infrastructure: secure `/etc/cron.*` directories, set restrictive permissions on `/etc/crontab`, remove `cron.deny`/`at.deny`, and maintain `cron.allow`/`at.allow` lists.
- Normalize permissions on `/etc/passwd`, `/etc/group`, `/etc/shadow`, `/etc/gshadow`.
- Configure auditd with opinionated defaults (`templates/auditd.conf.j2`) and restart the service when needed.
- Enforce PAM password complexity (`templates/pam_passwdqc.j2`) and faillock policies (`faillock` defaults and `community.general.pamd` edits).
- Manage `/etc/login.defs` guardrails (password rotation, retries, UMASK) and disable core dumps through `/etc/security/limits.d/hardening.conf`.
- Set a global `umask 027` profile, blacklist uncommon filesystems (`templates/filesystems.conf.j2`), and ensure handlers reload/restart services as appropriate.
All tunables are exposed in `defaults/main.yml`; override them in inventory/group vars to adapt to your policy.
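For example, a group-level override might look like the sketch below. The variable names shown are illustrative placeholders, not the role's actual keys — check `roles/hardening/defaults/main.yml` for the real names:

```yaml
# group_vars/servers.yml — illustrative only; use the key names
# actually defined in roles/hardening/defaults/main.yml.
hardening_umask: "027"              # hypothetical: global umask profile
hardening_pass_max_days: 60         # hypothetical: /etc/login.defs rotation
hardening_sysctl_settings:          # hypothetical: values rendered by sysctl-hardening.j2
  net.ipv4.conf.all.rp_filter: 1
  kernel.kptr_restrict: 2
```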
- Restricts supported ciphers, MACs, and key exchange algorithms.
- Configures banner text, login grace period, and max authentication attempts.
- Validates changes with `sshd -t` before applying the configuration.
The SSH-specific role:
- Ensures `/run/sshd` exists and stops systemd socket activation (`ssh.socket`) so sshd runs as a traditional service.
- Deploys `templates/sshd_config.j2`, populated by defaults such as strong cipher/MAC/KEX suites, root login policy, session limits, and logging verbosity.
- Creates an empty revoked-keys list and guarantees the classic `ssh` service is enabled/restarted via the role handler.

Customize behavior through `defaults/main.yml` (e.g., `sshd_port`, crypto suites, login controls) or override per-host.
- Creates `/run/sshd` with secure permissions so that the SSH daemon can start reliably.
- Deploys a hardened `sshd_config` to `/etc/ssh/sshd_config` from the `sshd_config.j2` template:
  - Validates the configuration with `sshd -t -f` before applying it.
  - Sets root ownership and 0600 permissions.
  - Keeps a backup of the previous configuration.
  - Notifies a handler to restart SSH when the configuration changes.
- Ensures a revoked keys file exists at the path defined by `sshd_revoked_keys_file` (default: `/etc/ssh/revoked_keys`) with appropriate permissions.
- Configuration is driven by variables, allowing you to tune SSH hardening:
- Basic settings
  - `sshd_port`: SSH listening port (default: 22).
  - `sshd_host_keys`: List of host key files (ed25519 & RSA).
- Authentication
  - `sshd_permit_root_login`: Controls if root login is allowed.
  - `sshd_password_authentication`: Disables password-based auth.
  - `sshd_kbd_interactive_authentication`, `sshd_challenge_response_authentication`: Disable legacy interactive methods.
  - `sshd_pubkey_authentication`: Enables public key authentication.
- Session & forwarding restrictions
  - `sshd_x11_forwarding`, `sshd_allow_agent_forwarding`, `sshd_allow_tcp_forwarding`, `sshd_permit_tunnel`, `sshd_allow_stream_local_forwarding`, `sshd_permit_user_environment`: all disabled to reduce attack surface.
- Connection limits & timeouts
  - `sshd_client_alive_interval`, `sshd_client_alive_count_max`: Idle session timeouts.
  - `sshd_login_grace_time`: Time allowed for authentication.
  - `sshd_max_auth_tries`: Limits failed auth attempts.
  - `sshd_max_sessions`: Limits concurrent sessions per connection.
- PAM and login banners
  - `sshd_use_pam`: Enables PAM integration.
  - `sshd_print_motd`, `sshd_print_last_log`: Control MOTD and last login message.
- Networking & DNS
  - `sshd_use_dns`: Disables reverse DNS lookups to speed up logins.
- Cryptography (aligned with DevSec recommendations)
  - `sshd_ciphers`: Restricts SSH to modern, strong ciphers (chacha20, AES-GCM, AES-CTR).
  - `sshd_macs`: Restricts MACs to SHA-2 based algorithms.
  - `sshd_kex`: Restricts key exchange algorithms to strong, modern groups (curve25519, ECDH, strong DH groups).
- Logging and SFTP
  - `sshd_syslog_facility`: Uses AUTHPRIV for sensitive auth logs.
  - `sshd_log_level`: Uses VERBOSE to capture more detailed SSH activity.
  - `sshd_revoked_keys_file`: Path to the file containing revoked public keys.
  - `sshd_subsystem_sftp`: Uses the built-in internal-sftp subsystem.
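As a quick illustration, a group-level override of a few of these variables might look like this (the values shown are examples, not necessarily the role's shipped defaults):

```yaml
# group_vars/servers.yml — example SSH overrides
sshd_port: 22
sshd_permit_root_login: "no"
sshd_password_authentication: "no"
sshd_max_auth_tries: 3
sshd_client_alive_interval: 300
sshd_client_alive_count_max: 2
```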
Run the SSH hardening playbook:

```bash
ansible-playbook -i inventory/inventory.yml playbooks/SSH.yml
```

This Ansible role automates the installation and configuration of Docker and Docker Compose on Ubuntu-based systems. It ensures that all required dependencies, repositories, and binaries are properly installed and up to date.
- Updates and upgrades APT packages
- Installs required Docker dependencies
- Adds Docker’s official GPG key and repository
- Installs Docker Engine, CLI, and containerd
- Downloads and installs the latest Docker Compose release directly from GitHub
- Includes a handler to restart Docker when needed
Updates the APT cache and upgrades existing system packages to ensure the system is prepared for Docker installation.
Installs essential packages needed for Docker installation, such as `apt-transport-https`, `curl`, `ca-certificates`, `gnupg-agent`, and `software-properties-common`.
These packages allow the system to handle HTTPS repositories and manage GPG keys.
Downloads and adds Docker’s official GPG key to the system so that package authenticity can be verified.
Adds Docker’s stable repository for Ubuntu (bionic in this example) to APT sources so that the latest official Docker packages can be installed.
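In Ansible terms, these two steps are typically expressed roughly as follows — a sketch, not the role's exact tasks; the repository line assumes the `bionic` release mentioned above:

```yaml
# Illustrative tasks — the role's actual task names and variables may differ.
- name: Add Docker's official GPG key
  ansible.builtin.apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add Docker's stable APT repository
  ansible.builtin.apt_repository:
    repo: "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
    state: present
```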
Installs `docker-ce`, `docker-ce-cli`, and `containerd.io`.
This ensures a complete Docker installation including the CLI and container runtime.
Refreshes the APT cache after adding the Docker repository. Triggers a handler to restart Docker if changes occur.
Queries the Docker Compose GitHub API to retrieve the tag name of the latest release.
Downloads the corresponding Docker Compose binary and places it in `/usr/local/bin/compose` with executable permissions.
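A minimal sketch of these two tasks, assuming the public GitHub releases API for `docker/compose` (task names, the registered variable, and the asset architecture are illustrative):

```yaml
# Illustrative tasks — adjust the asset name to your target architecture.
- name: Get the latest Docker Compose release tag
  ansible.builtin.uri:
    url: https://api.github.com/repos/docker/compose/releases/latest
    return_content: true
  register: compose_release

- name: Download the Docker Compose binary
  ansible.builtin.get_url:
    url: "https://github.com/docker/compose/releases/download/{{ compose_release.json.tag_name }}/docker-compose-linux-x86_64"
    dest: /usr/local/bin/compose
    mode: "0755"
```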
This Ansible playbook automates the setup and deployment of a self-hosted GitLab instance using Docker Compose. It also configures a daily cron job to ensure regular backups of the GitLab instance.
The primary goal of this project is to provide a simple and repeatable way to deploy GitLab. The playbook performs the following actions:
- Creates a project directory on the target host.
- Copies necessary files (like `compose.yml` and `.env`) to the project directory.
- Secures the `.env` file by setting strict file permissions.
- Deploys GitLab using Docker Compose, always pulling the latest image.
- Schedules a daily backup of the GitLab data using a cron job (see the sketch below).
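The backup step could be sketched roughly like this, assuming the Compose service is named `gitlab` and GitLab's built-in `gitlab-backup create` command is used (the schedule and task name are illustrative):

```yaml
# Illustrative task — adjust the schedule and command to your setup.
- name: Schedule a daily GitLab backup
  ansible.builtin.cron:
    name: "gitlab daily backup"
    minute: "0"
    hour: "2"
    job: "cd {{ project_path }} && docker compose exec -T gitlab gitlab-backup create"
```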
- Ansible 2.10+
- A user with `sudo` privileges.
- Docker Engine.
- Docker Compose V2 plugin.
- Python 3.x (required for Ansible modules).
Before running the playbook, you need to configure the deployment variables.
- Ansible Variables: Set the following variables in your inventory file or as extra-vars:
  - `project_path`: The absolute path on the target host where the GitLab project files will be stored (e.g., `/var/gitlab`).
  - `role_path`: The path to the Ansible role containing the playbook tasks and files.
- Project Files: Place the following files in the `files/` directory within your Ansible role:
  - `compose.yml`: Your Docker Compose file that defines the GitLab service and any related containers. The GitLab service should be named `gitlab` for the backup command to work correctly (a minimal example is sketched below).
  - `.env`: An environment file containing sensitive data or environment-specific configurations for your Docker Compose setup (e.g., GitLab root password, domain name, etc.).
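For orientation, a minimal `compose.yml` could look like the sketch below — the image, hostname, ports, and volumes are assumptions for illustration, not the project's actual file:

```yaml
# Illustrative compose.yml — replace with your real GitLab definition.
services:
  gitlab:                          # keep the service name "gitlab" for the backup job
    image: gitlab/gitlab-ce:latest
    hostname: gitlab.example.com
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./config:/etc/gitlab
      - ./logs:/var/log/gitlab
      - ./data:/var/opt/gitlab
    restart: unless-stopped
```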
To run the playbook, use the `ansible-playbook` command and specify your inventory file.

```bash
ansible-playbook -i your_inventory.yml playbooks/YOUR_PLAYBOOK
```

This Ansible playbook automates the installation and registration of a GitLab Runner on a target host. It allows you to quickly add new runners to your GitLab instance (self-hosted or gitlab.com) for running CI/CD jobs.
The playbook performs the following key steps:
- Repository Setup: Adds the official GitLab Runner package repository to the system.
- Installation: Installs the `gitlab-runner` package.
- Registration: Registers the runner with a GitLab instance non-interactively using a registration token (sketched below).
- Service Management: Ensures the `gitlab-runner` service is started and enabled on boot.
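A rough sketch of the non-interactive registration step, using the variables documented in the Configuration section below (the actual playbook's task wording may differ; the flags shown are standard `gitlab-runner register` options):

```yaml
# Illustrative task — idempotence handling (e.g., skipping if already registered) is omitted.
- name: Register the GitLab Runner
  ansible.builtin.command: >
    gitlab-runner register --non-interactive
    --url "{{ gitlab_url }}"
    --registration-token "{{ registration_token }}"
    --description "{{ runner_description }}"
    --tag-list "{{ runner_tags }}"
    --executor "{{ runner_executor }}"
    --docker-image "{{ docker_default_image }}"
  become: true
```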
- Ansible 2.10+
- Ansible collections: `community.general` might be required for certain modules.
- A Debian-based (e.g., Ubuntu) or RHEL-based (e.g., CentOS) Linux distribution. The playbook examples provided are for Debian-based systems.
- A user with `sudo` privileges.
- Network access to the GitLab instance.
- Docker (Optional): If you plan to use the `docker` executor, Docker must be installed and running on the target host.
To run this playbook, you must configure the following variables. It is strongly recommended to use Ansible Vault to encrypt sensitive values like `registration_token`.
| Variable | Description | Example Value |
|---|---|---|
| `gitlab_url` | The URL of your GitLab instance. | `"https://gitlab.com/"` |
| `registration_token` | The runner registration token. You can find this in your GitLab project or group under Settings > CI/CD > Runners. | `"your_secret_token_here"` |
| `runner_description` | A description for the runner that will appear in the GitLab UI. | `"Ansible Deployed Docker Runner"` |
| `runner_tags` | A comma-separated list of tags for the runner, used to specify which jobs it can run. | `"docker,linux,production"` |
| `runner_executor` | The executor to use for running jobs. Common choices include `shell`, `docker`, `kubernetes`. | `"docker"` |
| `docker_default_image` | (Optional) The default Docker image to use if the executor is `docker` and a job does not specify an image. | `"ruby:2.7"` |
You can define these variables in your inventory, a group variables file (`group_vars/all.yml`), or pass them as extra-vars.

```yaml
---
# GitLab Runner Configuration
gitlab_url: "https://gitlab.com/"
registration_token: "{{ vault_registration_token }}"  # Stored in Ansible Vault
runner_description: "Ansible Deployed Docker Runner"
runner_tags: "docker,linux,production"
runner_executor: "docker"
docker_default_image: "ubuntu:20.04"
```

This project provisions Sonatype Nexus Repository Manager using Docker Compose and configures a Docker Hosted repository automatically via Ansible.
The Ansible role performs these steps:
- Creates the project directory on the target host (`{{ project_path }}`).
- Copies the entire role `files/` directory into the project directory.
- Brings up the Nexus stack using Docker Compose v2 (`compose.yml`).
- Creates a Docker Hosted repository in Nexus using the REST API (sketched below):
  - Repository name: `docker-hosted`
  - HTTP port: `8083`
  - `forceBasicAuth: true`
  - `v1Enabled: false`
- Ansible installed on the control machine
- Target host:
  - Docker Engine installed
  - Docker Compose v2 installed
- Ansible collection: `community.docker`
Install the `community.docker` collection (for example with `ansible-galaxy collection install community.docker`), then run the playbook:

```bash
ansible-playbook playbook/nexusSetup.yml -i inventory/inventory.yml
```

The Voting App is a demonstration of a modern cloud-native application, structured around microservices to ensure scalability, high availability, and resilience. The architecture is optimized to run in a Docker environment, leveraging containerized services and cloud-native patterns to deliver a fault-tolerant voting platform.
This app includes a collection of microservices that work together to create a seamless voting experience. With PostgreSQL as the persistent database and Redis for fast in-memory storage and message brokering, the infrastructure is built to handle real-time voting processes efficiently.
The Voting App is a sample application developed to demonstrate:
- Microservices Architecture: Each service runs independently, scaling as needed based on workload.
- Service Orchestration: Managed through Docker, which ensures automatic recovery, scaling, and deployment of services.
- Resilient Data Management: Combining PostgreSQL and Redis to manage both temporary and permanent data storage with high availability.
- Dynamic Service Discovery: Traefik, a dynamic reverse proxy, automatically routes and balances traffic across services, providing HTTPS termination and security through Let's Encrypt.
- Vote Service: A front-end web application where users can cast votes. It serves as the user-facing entry point of the app.
- Worker Service: A .NET-based background processor that consumes votes from Redis and stores them in the PostgreSQL database.
- Result Service: A Node.js application responsible for displaying real-time voting results, showing users how the voting trends are shaping up.
- PostgreSQL Database: Manages permanent storage of voting results. Configured for high availability with replication and failover.
- Redis Cluster: Handles temporary storage of vote data and acts as a message broker between the vote and worker services for efficient processing.
Traefik is used for dynamic service discovery and routing within the Docker environment. It manages incoming requests, balancing them across services based on traffic and availability.
- SSL Support: Traefik automatically generates and manages SSL certificates through Let's Encrypt, ensuring all communication is secure.
- Service Routing: It defines routing rules for services based on hostnames, ensuring traffic is directed to the appropriate service (Voting or Results).
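As an example of how such routing rules are usually declared, Traefik v2 picks them up from Compose labels like these (the hostname, certificate resolver name, and image are illustrative assumptions, not this project's actual configuration):

```yaml
# Illustrative labels on the vote service — adapt hostnames and resolver to your deployment.
services:
  vote:
    image: voting-app/vote:latest    # hypothetical image name
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.vote.rule=Host(`vote.example.com`)"
      - "traefik.http.routers.vote.entrypoints=websecure"
      - "traefik.http.routers.vote.tls.certresolver=letsencrypt"
```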
PostgreSQL is deployed with high availability using Bitnami’s PostgreSQL image, enhanced with Replication Manager (repmgr) to ensure automatic failover.
- Replication: Set up as master-slave replication to ensure data redundancy and prevent data loss.
- Pgpool: This component balances the load across the database nodes, handling multiple concurrent connections and improving performance.
Redis operates as both a caching layer and a message broker, using a master-slave architecture to ensure reliability and fast data access.
- High Availability: Redis Sentinel monitors the health of the nodes and automatically promotes a slave to master in case of failure.
- Load Balancing: HAProxy is used to distribute the load across Redis instances, maximizing efficiency and response times.
- Microservices-Based: Each service is independently deployable, allowing the infrastructure to scale efficiently.
- Scalable and Resilient: With Docker managing service orchestration, the app is highly available and capable of handling increased traffic without downtime.
- High Availability: PostgreSQL and Redis are configured with high-availability mechanisms like replication and failover to ensure data is always accessible and the app remains operational.
- HTTPS Security: Secure communication between users and services is ensured through Traefik’s integration with Let's Encrypt for automatic certificate management.
