
Conversation

@DaMandal0rian (Contributor) commented on Feb 15, 2025

PR Type

enhancement, configuration changes, dependencies


Description

  • Reorganized Terraform files for better structure and clarity.

  • Added configurations for GitHub runners, DNS records, and network setups.


Changes walkthrough 📝

Relevant files

Enhancement (5 files)
  • main.tf: Added configurations for GitHub runners on AWS
  • broker.tf: Configured RabbitMQ broker and security groups
  • outputs.tf: Added outputs for Auto-Drive and Gateway instances
  • network.tf: Configured network for GitHub runners
  • main.tf: Added network configurations for EKS clusters

Configuration changes (10 files)
  • autonomys-xyz.tf: Added DNS records for autonomys.xyz domain
  • records.tf: Added multiple DNS records for subspace network
  • mailserver_records.tf: Added mailserver DNS records for subspace.network
  • subspace-foundation.tf: Added DNS records for subspace.foundation domain
  • variables.tf: Added variables for EKS Blue environment
  • variables.tf: Added variables for EKS Green environment
  • variables.tf: Added variables for Auto-Drive infrastructure
  • variables.tf: Added variables for Gemini-3H network
  • variables.tf: Added variables for Taurus network
  • autonomys-net.tf: Added DNS records for autonomys.net domain

Dependencies (3 files)
  • aws.ubuntu.pkr.hcl: Configured Packer for Ubuntu Jammy AMI builds
  • aws.windows.pkr.hcl: Configured Packer for Windows Core 2022 AMI builds
  • versions.tf: Updated Terraform and AWS provider versions
Additional files (101 files)
  • terraform.tfvars (+0/-20)
  • terraform.tfvars (+0/-20)
  • terraform.tfvars (+0/-4)
  • backend.tf
  • db.tf (+2/-2)
  • main.tf (+3/-3)
  • secret.tf
  • backend.tf
  • main.tf
  • outputs.tf
  • variables.tf
  • backend.tf
  • common.tf
  • main.tf (+3/-3)
  • outputs.tf
  • terrafrom.tfvars.example
  • variables.tf
  • README.md
  • autonomys.net
  • backend.tf
  • continuim-records.tf
  • continuum-records.tf
  • data.tf
  • outputs.tf
  • providers.tf
  • variables.tf
  • main.tf
  • outputs.tf
  • providers.tf
  • main.tf
  • outputs.tf
  • providers.tf
  • backend.tf
  • outputs.tf
  • providers.tf
  • secrets.tf
  • variables.tf
  • versions.tf
  • terraform.tfvars.example
  • main.tf
  • outputs.tf
  • bottlerocket_custom.tpl
  • variables.tf
  • versions.tf
  • README.md
  • logrotate
  • systemd
  • backend.tf
  • main.tf
  • outputs.tf
  • variables.tf
  • backend.tf
  • main.tf
  • outputs.tf
  • terraform.tfvars.example
  • variables.tf
  • backend.tf
  • main.tf
  • outputs.tf
  • terraform.tfvars.example
  • variables.tf
  • backend.tf
  • common.tf
  • main.tf (+3/-3)
  • outputs.tf
  • terrafrom.tfvars.example
  • ami.tf
  • backend.tf
  • .env.linux
  • .env.macos
  • .env.windows
  • outputs.tf
  • provider.tf
  • cleanup_script_linux.sh
  • cleanup_script_macos.sh
  • cleanup_script_windows.ps1
  • generate_gh_token.sh
  • variables.tf
  • backend.tf
  • common.tf
  • main.tf (+3/-3)
  • outputs.tf
  • terrafrom.tfvars.example
  • variables.tf
  • backend.tf
  • main.tf
  • outputs.tf
  • variables.tf
  • post-installer.sh
  • bootstrap_win.ps1
  • windows-provisioner.ps1
  • backend.tf
  • common.tf
  • main.tf (+3/-3)
  • outputs.tf
  • terrafrom.tfvars.example
  • backend.tf
  • dns.tf
  • ami.tf
  • backend.tf
Additional files not shown

Need help?
  • Type /help how to ... in the comments thread for any questions about PR-Agent usage.
  • Check out the documentation for more information.

@github-actions commented

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 5 🔵🔵🔵🔵🔵
    🧪 No relevant tests
    🔒 Security concerns

    Sensitive information exposure:
    The RabbitMQ password and other sensitive variables are stored in plain text in the Terraform state file. This could lead to security vulnerabilities if the state file is not properly secured. Consider using encrypted storage for sensitive data.
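
    A minimal mitigation sketch (not part of this PR; the bucket, key, and lock-table names below are placeholders): encrypt the remote state at rest and mark the generated credential as sensitive so it is redacted from CLI output, keeping in mind that it still exists inside the state file.

      terraform {
        backend "s3" {
          bucket         = "example-terraform-state"      # placeholder bucket name
          key            = "auto-drive/terraform.tfstate" # placeholder state key
          region         = "us-east-1"
          encrypt        = true               # server-side encryption of the state object
          dynamodb_table = "terraform-locks"  # placeholder lock table
        }
      }

      output "rabbitmq_password" {
        value     = random_password.rabbitmq_password.result
        sensitive = true # redacted from plan/apply output; the value is still stored in state
      }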

    ⚡ Recommended focus areas for review

    Possible Security Misconfiguration

    The on_failure = continue setting in the provisioner "remote-exec" blocks for multiple resources may lead to incomplete or inconsistent configurations if the provisioner fails. This should be reviewed to ensure it aligns with the desired behavior.

      provisioner "remote-exec" {
        inline = [
          "cloud-init status --wait",
          "export DEBIAN_FRONTEND=noninteractive",
          "sudo apt update -y",
          "sudo apt upgrade -y",
          "sudo apt install make build-essential openssl gnupg gcc protobuf-compiler clang lldb lld unzip pkg-config libssl-dev jq ca-certificates --no-install-recommends -y",
          "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --default-toolchain nightly -y",
          "source \"$HOME/.cargo/env\"",
          # Download runner image
          "mkdir actions-runner && cd actions-runner",
          "curl -o actions-runner-linux-x64-${var.gh_runner_version}.tar.gz -L https://github.com/actions/runner/releases/download/v${var.gh_runner_version}/actions-runner-linux-x64-${var.gh_runner_version}.tar.gz",
          "echo '${lookup(var.gh_runner_checksums, "linux_x86_64", "")} actions-runner-linux-x64-${var.gh_runner_version}.tar.gz' | shasum -a 256 -c",
          "tar xzf ./actions-runner-linux-x64-${var.gh_runner_version}.tar.gz",
          # configure runner
          "echo 'ACTIONS_RUNNER_HOOK_JOB_COMPLETED=/home/${var.ssh_user[0]}/cleanup_script.sh' > .env",
          "./config.sh --url https://github.com/autonomys --token ${var.gh_token} --unattended --name ubuntu-20.04-x86-64 --labels 'self-hosted,ubuntu-20.04-x86-64,Linux,x86-64' --work _work --runasservice",
          "sudo ./svc.sh install ${var.ssh_user[0]}",
          "sudo ./svc.sh start",
          # install monitoring
          "sudo sh -c \"curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh --non-interactive --nightly-channel --claim-rooms ${var.netdata_room} --claim-token ${var.netdata_token} --claim-url https://app.netdata.cloud\"",
    
        ]
    
        on_failure = continue
    
      }
    
      # Setting up the ssh connection
      connection {
        type        = "ssh"
        host        = element(self.*.public_ip, count.index)
        user        = var.ssh_user[0]
        private_key = file("${var.private_key_path}")
        timeout     = "90s"
      }
    
    }
    
    resource "aws_instance" "linux_arm64_runner" {
      count             = length(var.public_subnet_cidrs)
      ami               = data.aws_ami.ubuntu_arm64.image_id
      instance_type     = element(var.instance_type_arm, 0)
      subnet_id         = element(aws_subnet.public_subnets.*.id, count.index)
      availability_zone = var.azs
      # Security Group
      vpc_security_group_ids = ["${aws_security_group.allow_runner.id}"]
      # the Public SSH key
      key_name                    = var.aws_key_name
      associate_public_ip_address = true
      ebs_optimized               = true
      ebs_block_device {
        device_name = "/dev/sda1"
        volume_size = "100"
        volume_type = "gp3"
        iops        = 3000
        throughput  = 250
      }
    
      tags = {
        name       = "gh-linux-arm64-runner"
        role       = "runner"
        os_name    = "ubuntu"
        os_version = "20.04"
        arch       = "arm64"
      }
    
      depends_on = [
        aws_subnet.public_subnets,
        aws_internet_gateway.gw
      ]
    
      lifecycle {
    
        create_before_destroy = true
    
      }
    
      # Github runner installation
      provisioner "remote-exec" {
        inline = [
          "cloud-init status --wait",
          "export DEBIAN_FRONTEND=noninteractive",
          "sudo apt update -y",
          "sudo apt upgrade -y",
          "sudo apt install make build-essential openssl gnupg gcc protobuf-compiler clang lldb lld unzip pkg-config libssl-dev jq ca-certificates --no-install-recommends -y",
          "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --default-toolchain nightly -y",
          "source \"$HOME/.cargo/env\"",
          # Download runner image
          "mkdir actions-runner && cd actions-runner",
          "curl -o actions-runner-linux-arm64-${var.gh_runner_version}.tar.gz -L https://github.com/actions/runner/releases/download/v${var.gh_runner_version}/actions-runner-linux-arm64-${var.gh_runner_version}.tar.gz",
          "echo '${lookup(var.gh_runner_checksums, "linux_arm64", "")} actions-runner-linux-arm64-${var.gh_runner_version}.tar.gz' | shasum -a 256 -c",
          "tar xzf ./actions-runner-linux-arm64-${var.gh_runner_version}.tar.gz",
          # configure runner
          "echo 'ACTIONS_RUNNER_HOOK_JOB_COMPLETED=/home/${var.ssh_user[0]}/cleanup_script.sh' > .env",
          "./config.sh --url https://github.com/autonomys --token ${var.gh_token} --unattended --name ubuntu-20.04-arm64 --labels 'self-hosted,ubuntu-20.04-arm64,Linux,arm64' --work _work --runasservice",
          "sudo ./svc.sh install ${var.ssh_user[0]}",
          "sudo ./svc.sh start",
          # install monitoring
          "sudo sh -c \"curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh --non-interactive --nightly-channel --claim-rooms ${var.netdata_room} --claim-token ${var.netdata_token} --claim-url https://app.netdata.cloud\"",
        ]
    
        on_failure = continue
    
      }
    
      # Setting up the ssh connection
      connection {
        type        = "ssh"
        host        = element(self.*.public_ip, count.index)
        user        = var.ssh_user[0]
        private_key = file("${var.private_key_path}")
        timeout     = "90s"
      }
    }
    
    resource "aws_instance" "mac_x86_64_runner" {
      count             = length(var.public_subnet_cidrs)
      ami               = data.aws_ami.mac_x86_64.image_id
      instance_type     = element(var.instance_type_mac, 0)
      subnet_id         = element(aws_subnet.public_subnets.*.id, count.index)
      availability_zone = var.azs
      tenancy           = "host"
      # Security Group
      vpc_security_group_ids = ["${aws_security_group.allow_runner.id}"]
      # the Public SSH key
      key_name                    = var.aws_key_name
      associate_public_ip_address = true
      ebs_optimized               = true
      ebs_block_device {
        device_name = "/dev/sda1"
        volume_size = "100"
        volume_type = "gp3"
        iops        = 3000
        throughput  = 250
      }
    
      tags = {
        name       = "gh-macos-x86-runner"
        role       = "runner"
        os_name    = "macos"
        os_version = "12"
        os_codename = "Monterey"
        arch       = "x86_64"
      }
    
      depends_on = [
        aws_subnet.public_subnets,
        aws_internet_gateway.gw
      ]
    
      lifecycle {
    
        create_before_destroy = true
    
      }
    
      # Github runner installation
      provisioner "remote-exec" {
        inline = [
          "cloud-init status --wait",
          "mkdir actions-runner && cd actions-runner",
          "curl -o actions-runner-osx-x64-${var.gh_runner_version}.tar.gz -L https://github.com/actions/runner/releases/download/v${var.gh_runner_version}/actions-runner-osx-x64-${var.gh_runner_version}.tar.gz",
          "echo '${lookup(var.gh_runner_checksums, "mac_x86_64", "")} actions-runner-osx-x64-${var.gh_runner_version}.tar.gz' | shasum -a 256 -c",
          "tar xzf ./actions-runner-osx-x64-${var.gh_runner_version}.tar.gz",
          "echo 'ACTIONS_RUNNER_HOOK_JOB_COMPLETED=/home/${var.ssh_user[1]}/cleanup_script.sh' > .env",
          "./config.sh --url https://github.com/autonomys --token ${var.gh_token} --unattended --name macos-12-x86-64 --labels 'self-hosted,macos-12-x86-64,macOS,x86-64' --work _work --runasservice",
          "sudo su -- ${var.ssh_user[1]} ./svc.sh install",
          "sudo su -- ${var.ssh_user[1]} ./runsvc.sh start &",
          # install monitoring
          "NONINTERACTIVE=1 /bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)\"",
          "brew install netdata protobuf jq yq",
          "xcode-select --install",
          "softwareupdate --install-rosetta",
          "softwareupdate -i -r",
          "netdata -W \"claim -token=${var.netdata_token} -rooms=${var.netdata_room}\" -u ${var.ssh_user[1]} -c /opt/homebrew/var/lib/netdata/cloud.d/cloud.conf",
          "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --default-toolchain nightly -y",
          "source \"$HOME/.cargo/env\"",
        ]
    
        on_failure = continue
    
      }
    
      # Setting up the ssh connection to install the runner server
      connection {
        type        = "ssh"
        host        = element(self.*.public_ip, count.index)
        user        = var.ssh_user[1]
        private_key = file("${var.private_key_path}")
        timeout     = "90s"
      }
    
    }
    
    resource "aws_instance" "mac_arm64_runner" {
      count             = length(var.public_subnet_cidrs)
      ami               = data.aws_ami.mac_arm64.image_id
      instance_type     = element(var.instance_type_mac, 1)
      subnet_id         = element(aws_subnet.public_subnets.*.id, count.index)
      availability_zone = var.azs
      # Security Group
      vpc_security_group_ids = ["${aws_security_group.allow_runner.id}"]
      # the Public SSH key
      key_name                    = var.aws_key_name
      associate_public_ip_address = true
      tenancy                     = "host"
      ebs_optimized               = true
      ebs_block_device {
        device_name = "/dev/sda1"
        volume_size = "100"
        volume_type = "gp3"
        iops        = 3000
        throughput  = 250
      }
      tags = {
        name       = "gh-macos-arm64-runner"
        role       = "runner"
        os_name    = "macos"
        os_version = "12"
        os_codename = "Monterey"
        arch       = "arm64"
      }
    
      depends_on = [
        aws_subnet.public_subnets,
        aws_internet_gateway.gw
      ]
    
      lifecycle {
    
        create_before_destroy = true
    
      }
    
      # Github runner installation
      provisioner "remote-exec" {
        inline = [
          "cloud-init status --wait",
          "mkdir actions-runner && cd actions-runner",
          "curl -o actions-runner-osx-x64-${var.gh_runner_version}.tar.gz -L https://github.com/actions/runner/releases/download/v${var.gh_runner_version}/actions-runner-osx-arm64-${var.gh_runner_version}.tar.gz",
          "echo '${lookup(var.gh_runner_checksums, "mac_arm64", "")}  actions-runner-osx-arm64-${var.gh_runner_version}.tar.gz' | shasum -a 256 -c",
          "tar xzf ./actions-runner-osx-arm64-${var.gh_runner_version}.tar.gz",
          "echo 'ACTIONS_RUNNER_HOOK_JOB_COMPLETED=/home/${var.ssh_user[1]}/cleanup_script.sh' > .env",
          "./config.sh --url https://github.com/autonomys --token ${var.gh_token} --unattended --name macos-12-arm64 --labels 'self-hosted,macos-12-arm64,macOS,arm64' --work _work --runasservice",
          "sudo su -- ${var.ssh_user[1]} ./svc.sh install",
          "sudo su -- ${var.ssh_user[1]} ./runsvc.sh start &",
          # install monitoring
          "NONINTERACTIVE=1 /bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)\"",
          "brew install netdata protobuf jq yq",
          "xcode-select --install",
          "softwareupdate --install-rosetta",
          "softwareupdate -i -r",
          "netdata -W \"claim -token=${var.netdata_token} -rooms=${var.netdata_room}\" -u ${var.ssh_user[1]} -c /opt/homebrew/var/lib/netdata/cloud.d/cloud.conf",
          "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --default-toolchain nightly -y",
          "source \"$HOME/.cargo/env\"",
        ]
    
        on_failure = continue
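
    For contrast, a minimal sketch (not taken from the PR) of the default failure behavior: with on_failure = fail, or with the argument omitted entirely, a non-zero exit from any inline command taints the instance, so the next apply replaces it instead of leaving a half-configured runner registered.

      provisioner "remote-exec" {
        inline = [
          "cloud-init status --wait",
          "./config.sh --url https://github.com/autonomys --token ${var.gh_token} --unattended",
        ]
        on_failure = fail # default behavior; written out here only to contrast with "continue"
      }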
    
    Open Security Group Rules

    The security group allow_runner allows unrestricted access (0.0.0.0/0) for multiple ports, including SSH (22), RDP (3389), and HTTP/HTTPS. This could pose a security risk and should be restricted to specific IP ranges if possible.

    resource "aws_security_group" "allow_runner" {
      name        = "allow_runner"
      description = "Allow HTTP and HTTPS inbound traffic"
      vpc_id      = aws_vpc.gh-runners.id
    
      ingress {
        description = "HTTPS for VPC"
        from_port   = 443
        to_port     = 443
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }
    
      ingress {
        description = "HTTP for VPC"
        from_port   = 80
        to_port     = 80
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }
    
      ingress {
        description = "SSH for VPC"
        from_port   = 22
        to_port     = 22
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }
    
      ingress {
        description = "WinRM for VPC"
        from_port   = 5985
        to_port     = 5985
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }
    
      ingress {
        description = "RDP for VPC"
        from_port   = 3389
        to_port     = 3389
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }
    
      egress {
        description = "egress for VPC"
        from_port   = 0
        to_port     = 0
        protocol    = "-1"
        cidr_blocks = ["0.0.0.0/0"]
      }
    
      tags = {
        Name = "allow_runner"
      }
    
      depends_on = [
        aws_vpc.gh-runners
      ]
    }
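
    One way to tighten this, sketched below under the assumption that an allowlist variable is introduced (the name runner_admin_cidrs is illustrative, not from the PR): drive the administrative ports from an explicit CIDR allowlist instead of 0.0.0.0/0.

      # Illustrative only; not part of the PR.
      variable "runner_admin_cidrs" {
        description = "CIDR blocks allowed to reach SSH/RDP/WinRM on the runners"
        type        = list(string)
        default     = [] # forces callers to supply an explicit allowlist
      }

      ingress {
        description = "SSH restricted to admin ranges"
        from_port   = 22
        to_port     = 22
        protocol    = "tcp"
        cidr_blocks = var.runner_admin_cidrs
      }
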
    Sensitive Information Management

    The RabbitMQ password is generated and stored in plain text using random_password. Ensure that this is securely managed and not exposed inadvertently.

    resource "random_password" "rabbitmq_password" {
      length           = 15
      special          = true # Includes special characters
      override_special = "!@#$%^&*()-_=+[]{}<>:?"
    }
    
    variable "private_subnet_cidrs" {
      default = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
    }
    
    resource "aws_mq_broker" "rabbitmq_broker_primary" {
      broker_name                = "auto-drive-rabbitmq-broker-primary"
      engine_type                = "RabbitMQ"
      engine_version             = var.rabbitmq_version
      auto_minor_version_upgrade = true
      authentication_strategy    = "simple"
      host_instance_type         = var.rabbitmq_instance_type # t3.micro is the smallest instance type available for Amazon MQ, use mq.m5.large for production
      security_groups            = [aws_security_group.rabbitmq_broker_primary.id]
      deployment_mode            = var.rabbitmq_deployment_mode_staging # change to CLUSTER_MULTI_AZ for production
      storage_type               = "ebs"
      apply_immediately          = true
    
      subnet_ids          = [element(module.vpc.private_subnets, 0)] # Use private subnets from VPC module, in single AZ deployment, use only one subnet, in multi-AZ deployment, use multiple subnets
      publicly_accessible = false
      encryption_options {
        use_aws_owned_key = false
        kms_key_id        = aws_kms_key.mq_kms_key.arn
      }
    
      logs {
        general = true
        audit   = false
      }
    
      maintenance_window_start_time {
        day_of_week = "SUNDAY"
        time_of_day = "03:00"
        time_zone   = "UTC"
      }
    
      user {
        username = var.rabbitmq_username
        password = random_password.rabbitmq_password.result
      }
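
    One possible follow-up, sketched here as an assumption rather than something the PR does (the secret name is a placeholder): write the generated password into AWS Secrets Manager, encrypted with the broker's CMK, so operators can retrieve it without reading Terraform state.

      resource "aws_secretsmanager_secret" "rabbitmq_password" {
        name       = "auto-drive/rabbitmq-password" # placeholder secret name
        kms_key_id = aws_kms_key.mq_kms_key.arn     # reuse the broker's CMK
      }

      resource "aws_secretsmanager_secret_version" "rabbitmq_password" {
        secret_id     = aws_secretsmanager_secret.rabbitmq_password.id
        secret_string = random_password.rabbitmq_password.result # note: also persisted in state
      }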

    @github-actions commented

    PR Code Suggestions ✨

    Explore these optional code suggestions:

    Category: Security

    Enforce IMDSv2 for instance metadata

    Set http_tokens to required in the metadata_options block to enforce the use of
    Instance Metadata Service Version 2 (IMDSv2) for enhanced security.

    resources/terraform/explorer/terraform/aws/blockscout/taurus/main.tf [64-69]

     metadata_options {
       http_endpoint               = "enabled"
       http_protocol_ipv6          = "disabled"
       http_put_response_hop_limit = 1
    -  http_tokens                 = "optional"
    +  http_tokens                 = "required"
       instance_metadata_tags      = "disabled"
     }
    Suggestion importance[1-10]: 10

    Why: Enforcing IMDSv2 by setting http_tokens to required significantly improves the security of the instance by mitigating potential metadata service vulnerabilities. This is a highly impactful and necessary change.

    Impact: High

    Increase password length for security

    Ensure that the random_password resource uses a secure and appropriate length for
    the password, as 15 characters may not meet the security requirements for all
    environments. Consider increasing the length to at least 20 characters for enhanced
    security.

    resources/terraform/auto-drive/broker.tf [1-5]

     resource "random_password" "rabbitmq_password" {
    -  length           = 15
    +  length           = 20
       special          = true # Includes special characters
       override_special = "!@#$%^&*()-_=+[]{}<>:?"
     }
    Suggestion importance[1-10]: 9

    Why: Increasing the password length from 15 to 20 characters enhances security by making brute-force attacks more difficult. This is a valid and impactful improvement for environments requiring strong security measures.

    Impact: High

    Secure sensitive values using variables

    Ensure that sensitive information such as the value field in cloudflare_record
    resources (e.g., IP addresses and TXT values) is securely managed using variables or
    secrets to avoid exposing them directly in the codebase.

    resources/terraform/dns/subspace-foundation.tf [1-7]

     resource "cloudflare_record" "subspace_foundation_1" {
       name    = "subspace.foundation"
       proxied = false
       ttl     = 3600
       type    = "A"
    -  value   = "192.64.119.47"
    +  value   = var.subspace_foundation_ip
       zone_id = data.cloudflare_zone.subspace_foundation.id
     }
    Suggestion importance[1-10]: 9

    Why: This suggestion improves security by replacing hardcoded sensitive values with variables, reducing the risk of exposing sensitive information in the codebase. It is highly relevant and directly applicable to the provided code.

    Impact: High

    Encrypt private key in Secrets Manager

    Encrypt the private key stored in AWS Secrets Manager to enhance security and
    prevent unauthorized access.

    resources/terraform/eks/network/secrets.tf [13-15]

     resource "aws_secretsmanager_secret_version" "ssh_private_key_version" {
       secret_id     = aws_secretsmanager_secret.ssh_private_key.id
    -  secret_string = tls_private_key.ssh_key.private_key_pem
    +  secret_string = base64encode(tls_private_key.ssh_key.private_key_pem)
     }
    Suggestion importance[1-10]: 9

    Why: Encrypting the private key before storing it in AWS Secrets Manager enhances security by ensuring the key is not stored in plaintext. This is a critical improvement for protecting sensitive data.

    Impact: High

    Restrict overly permissive egress rules

    Validate that the aws_security_group egress rule allowing all traffic (0.0.0.0/0) is
    necessary. If not, restrict it to specific IP ranges or ports to minimize potential
    security risks.

    resources/terraform/auto-drive/broker.tf [118-122]

     egress {
       from_port   = 0
       to_port     = 0
       protocol    = "-1"
    -  cidr_blocks = ["0.0.0.0/0"]
    +  cidr_blocks = ["specific_ip_range"]
     }
    Suggestion importance[1-10]: 8

    Why: Restricting the egress rule from allowing all traffic to specific IP ranges or ports reduces the attack surface and enhances security. This suggestion is relevant and addresses a potential security risk effectively.

    Impact: Medium

    Restrict overly permissive ingress defaults

    Review the default value for ingress_cidr_blocks set to ["0.0.0.0/0"], as it allows
    unrestricted access. Restrict it to specific IP ranges to enhance security.

    resources/terraform/auto-drive/variables.tf [82-86]

     variable "ingress_cidr_blocks" {
       description = "List of CIDR blocks for ingress"
       type        = list(string)
    -  default     = ["0.0.0.0/0"] # Open to all; adjust as needed
    +  default     = ["specific_ip_range"] # Restrict access
     }
    Suggestion importance[1-10]: 8

    Why: Restricting the default ingress CIDR block from allowing unrestricted access to specific IP ranges improves security by limiting exposure. This is a valid and important enhancement for secure configurations.

    Impact: Medium

    Category: General

    Increase TTL for DNS efficiency

    Add a ttl value greater than 1 for the mail record to avoid unnecessary DNS query
    overhead and improve performance.

    resources/terraform/dns/mailserver_records.tf [107-113]

     resource "cloudflare_record" "mail" {
       name    = "mail"
       proxied = true
    -  ttl     = 1
    +  ttl     = 3600
       type    = "CNAME"
       value   = "subspace.network"
       zone_id = data.cloudflare_zone.subspace_network.id
     }
    Suggestion importance[1-10]: 9

    Why: Increasing the TTL from 1 to 3600 reduces unnecessary DNS query overhead and improves performance. This is a significant improvement for efficiency and aligns with best practices.

    Impact: High

    Replace hardcoded IPs with variables

    Avoid using hardcoded IP addresses for resources like blog_0 and blog_1. Instead,
    consider using variables or data sources to improve maintainability and flexibility.

    resources/terraform/dns/records.tf [1-9]

     resource "cloudflare_record" "blog_0" {
       comment = "Medium .blog Redirect #2"
       name    = "blog"
       proxied = false
       ttl     = 3600
       type    = "A"
    -  value   = "162.159.152.4"
    +  value   = var.blog_redirect_ip
       zone_id = data.cloudflare_zone.subspace_network.id
     }
    Suggestion importance[1-10]: 8

    Why: Replacing hardcoded IP addresses with variables improves maintainability and flexibility, making the code easier to update and adapt to changes. This is a meaningful enhancement to the PR.

    Impact: Medium

    Validate CIDR format for vpc_cidr

    Add validation for the vpc_cidr variable to ensure it follows the correct CIDR
    format, preventing misconfigurations during deployment.

    resources/terraform/eks/network/main.tf [13-16]

     variable "vpc_cidr" {
       description = "CIDR block for VPC"
       type        = string
       default     = "10.0.0.0/16"
    +  validation {
    +    condition     = can(regex("^([0-9]{1,3}\\.){3}[0-9]{1,3}/[0-9]{1,2}$", var.vpc_cidr))
    +    error_message = "The VPC CIDR must be a valid CIDR block."
    +  }
     }
    Suggestion importance[1-10]: 8

    Why: Adding validation for the vpc_cidr variable ensures that only valid CIDR blocks are used, preventing potential misconfigurations. This is a practical enhancement to the code's robustness.

    Impact: Medium

    @DaMandal0rian (Contributor, Author) commented

    closes #418

    @DaMandal0rian merged commit 0f93958 into main on Feb 15, 2025
    1 check passed
    @DaMandal0rian deleted the reorg-repo branch on February 15, 2025 at 14:15