ClusterIQ is a tool for taking stock of the OpenShift clusters and their resources running on the most common cloud providers. It collects relevant information about compute resources, access routes, and billing.

Metrics and monitoring are out of the scope of this project. The main purpose is to maintain an updated inventory of the clusters and offer an easier way to identify, manage, and estimate their costs.

The scope of the project is to cover the most common public cloud providers, but since the component dedicated to scraping data is decoupled, more providers could be included in the future.
The following table shows the compatibility matrix and which features are available for every cloud provider:
Cloud Provider | Compute Resources | Billing | Actions | Scheduled Actions |
---|---|---|---|---|
AWS | Yes | Yes | Yes | Yes |
Azure | No | No | No | No |
GCP | No | No | No | No |
The following diagram shows the architecture of this project:
The following documentation is available:
- Events Documentation - Event flows and sequence diagrams
- Development Setup - Local development guide
This section explains how to deploy ClusterIQ and ClusterIQ Console.
Before configuring credentials for ClusterIQ, it is recommended to access the user and permission management service and create a dedicated user exclusively for ClusterIQ. This user should have the minimum necessary permissions to function properly. This approach enhances the security of your public cloud provider accounts by enforcing the principle of least privilege.
Each cloud provider has a different way of configuring users and permissions. Before continuing, check and follow the steps for each cloud provider you want to configure:
- Amazon Web Services (AWS)
- Microsoft Azure: Not available.
- Google Cloud Platform: Not available.
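For AWS, a least-privilege setup can start from a small inline policy document. The sketch below is an assumption, not a policy published by ClusterIQ: the policy name and the action list (EC2 scanning plus Cost Explorer queries) are examples to trim or extend for the services you actually use.

```shell
# Sketch only: the action list is an assumption (enough for EC2 scanning and
# Cost Explorer queries); adjust it to the services you actually use.
cat > cluster-iq-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ClusterIQReadOnly",
      "Effect": "Allow",
      "Action": ["ec2:Describe*", "ce:GetCostAndUsage"],
      "Resource": "*"
    }
  ]
}
EOF
# The policy can then be attached to a dedicated user, e.g.:
#   aws iam create-user --user-name cluster-iq
#   aws iam put-user-policy --user-name cluster-iq \
#     --policy-name cluster-iq-readonly \
#     --policy-document file://cluster-iq-policy.json
```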
- Create a folder called `secrets` for saving the cloud credentials. This folder is ignored in this repo to keep your credentials safe.

  ```sh
  mkdir secrets
  export CLUSTER_IQ_CREDENTIALS_FILE="./secrets/credentials"
  ```

  ⚠️ Please take care and don't include your credentials in the repo.
- Create your credentials file with the credentials of the accounts you want to scrape. The file must follow this format:

  ```sh
  echo "
  [ACCOUNT_NAME]
  provider = {aws/gcp/azure}
  user = XXXXXXX
  key = YYYYYYY
  billing_enabled = {true/false}
  " >> $CLUSTER_IQ_CREDENTIALS_FILE
  ```

  ⚠️ The accepted values for `provider` are `aws`, `gcp`, and `azure`, but scraping is only supported for `aws` at the moment. The credentials file should be placed under `secrets/` to work with `docker/podman-compose`.

  ❗ This file structure was designed to be generic, but it works differently depending on the cloud provider. For AWS, `user` refers to the `ACCESS_KEY`, and `key` refers to the `SECRET_ACCESS_KEY`.

  ❗ Some cloud providers charge extra for querying their billing APIs (like AWS Cost Explorer). Be careful when enabling this module, and check your account before enabling it.
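As a concrete example, a credentials file for a single AWS account with billing disabled could be written like this. The account name and key values here are placeholders, not real credentials:

```shell
# Hypothetical example: the account name and key values are placeholders.
mkdir -p secrets
cat > secrets/credentials <<'EOF'
[my-aws-account]
provider = aws
user = AKIAXXXXXXXXXXXXXXXX
key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
billing_enabled = false
EOF
```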
Since version 0.3, ClusterIQ includes its own Helm Chart, located at `./deployments/helm/cluster-iq`. For more information about the supported parameters, check the Configuration Section.
- Prepare your cluster and CLI:

  ```sh
  oc login ...
  export NAMESPACE="cluster-iq"
  oc new-project $NAMESPACE
  ```
- A secret containing the credentials file is needed. To create it, use the following command:

  ```sh
  oc create secret generic credentials -n $NAMESPACE \
    --from-file=credentials=$CLUSTER_IQ_CREDENTIALS_FILE
  ```
- Configure your cluster-iq deployment by modifying the `./deployments/helm/cluster-iq/values.yaml` file.
Deploy the Helm Chart
helm upgrade cluster-iq ./deployments/helm/cluster-iq/ \ --install \ --namespace $NAMESPACE \ -f ./deployments/helm/cluster-iq/values.yaml
- Check that every resource was created correctly:

  ```sh
  oc get pods -w -n $NAMESPACE
  helm list
  ```
- Once every pod is up and running, trigger the scanner manually to initialize the inventory:

  ```sh
  oc create job --from=cronjob/scanner scanner-init -n $NAMESPACE
  ```
For deploying ClusterIQ locally for development purposes, check the following document.
Available configuration via Env Vars:
Key | Value | Description |
---|---|---|
CIQ_AGENT_INSTANT_SERVICE_LISTEN_URL | string (Default: "0.0.0.0:50051") | ClusterIQ Agent gRPC listen URL |
CIQ_AGENT_POLLING_SECONDS_INTERVAL | integer (Default: 30) | ClusterIQ Agent polling time (seconds) |
CIQ_AGENT_URL | string (Default: "agent:50051") | ClusterIQ Agent listen URL |
CIQ_API_LISTEN_URL | string (Default: "0.0.0.0:8080") | ClusterIQ API listen URL |
CIQ_API_URL | string (Default: "") | ClusterIQ API public endpoint |
CIQ_AGENT_LISTEN_URL | string (Default: "0.0.0.0:50051") | ClusterIQ Agent listen URL |
CIQ_DB_URL | string (Default: "postgresql://pgsql:5432/clusteriq") | ClusterIQ DB URL |
CIQ_CREDS_FILE | string (Default: "") | Cloud providers accounts credentials file |
CIQ_LOG_LEVEL | string (Default: "INFO") | ClusterIQ Logs verbosity mode |
CIQ_SKIP_NO_OPENSHIFT_INSTANCES | boolean (Default: true) | Skips scanned instances without cluster |
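For instance, a local development run against a Postgres instance on localhost could override the defaults like this. The specific values are assumptions for illustration, not requirements:

```shell
# Example overrides for local development; all values are assumptions.
export CIQ_API_LISTEN_URL="0.0.0.0:8080"                   # where the API serves
export CIQ_DB_URL="postgresql://localhost:5432/clusteriq"  # local Postgres
export CIQ_CREDS_FILE="./secrets/credentials"              # accounts credentials file
export CIQ_LOG_LEVEL="DEBUG"                               # verbose logs while developing
```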
The scanner searches each region for instances (servers) that are part of an OpenShift cluster. As each provider and each service has different specifications, the Scanner includes a dedicated module for each of them. These modules are automatically activated or deactivated depending on the accounts configured.
```sh
# Building in a container
make build-scanner

# Building locally
make local-build-scanner
```
The API server mediates between the UI and the DB.
```sh
# Building in a container
make build-api

# Building locally
make local-build-api
```
The Agent performs actions over the selected cloud resources. It only accepts incoming requests from the API.
Currently, as of release `v0.4`, the agent only supports powering clusters on and off on AWS.
```sh
# Building in a container
make build-agent

# Building locally
make local-build-agent
```