Use Robusta AI by default #391

Status: Open. Wants to merge 31 commits into base branch master.
Conversation

@itisallgood (Contributor) commented May 1, 2025

Summary by CodeRabbit

  • New Features
    • Added new optional fields for model name and last updated timestamp to AI-related responses.
  • Improvements
    • Enhanced flexibility and fallback options for AI model selection and API key management.
    • Default AI model name and activation settings are now configurable via environment variables.
  • Refactor
    • Consolidated and streamlined how Robusta API keys are loaded, moving from a standalone function to a configuration method.
    • Introduced a centralized language model selector to unify model instantiation logic.
  • Tests
    • Added comprehensive tests for the new language model selector logic.
    • Updated tests to reflect changes in how API keys are loaded and mocked.

@coderabbitai bot (Contributor) commented May 14, 2025


Walkthrough

The changes refactor how the Robusta AI API key is loaded, moving from a standalone utility function to a method on the Config class. Additional environment variables and model selection logic are introduced, with new and updated fields in configuration and data models to support fallback mechanisms and timestamping. Related imports and test mocks are updated accordingly.

Changes

File(s) Change Summary
holmes/clients/robusta_client.py Added optional fields robusta_ai_model_name and last_updated_at to HolmesInfo; updated fetch logic to set timestamp and include model name fallback.
holmes/common/env_vars.py Changed ROBUSTA_AI default to True; added ROBUSTA_AI_MODEL_NAME_BACKOFF with default "gpt-4o".
holmes/config.py Removed default model value; refactored _get_llm to use new LLMSelector class; added load_robusta_api_key method; updated imports.
holmes/core/investigation.py Replaced calls to removed standalone load_robusta_api_key function with config.load_robusta_api_key method.
holmes/utils/robusta.py Deleted file containing standalone load_robusta_api_key function.
server.py Updated API endpoint handlers to call config.load_robusta_api_key method instead of removed utility function.
tests/test_server_endpoints.py Updated patch decorators to mock Config.load_robusta_api_key method instead of removed utility function.
holmes/llm_selector.py Added new LLMSelector class to centralize LLM selection logic with fallback to Robusta AI model if enabled.
tests/test_llm_selector.py Added comprehensive unit tests for LLMSelector covering multiple selection scenarios and fallback logic.
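The new LLMSelector is only summarized above. As a rough illustration of the described behavior (an explicit model key, then the configured default, then the first model-list entry, then the Robusta AI fallback), here is a hypothetical sketch; the method names and the exact precedence order are assumptions, not the PR's actual code:

```python
import logging
from typing import Dict, Optional


class LLMSelector:
    """Sketch of a centralized model selector with a Robusta AI fallback.

    The precedence order below is inferred from the review summary and
    is an assumption, not the PR's actual implementation.
    """

    def __init__(
        self,
        model_list: Optional[Dict[str, dict]] = None,
        default_model: Optional[str] = None,
        robusta_ai_enabled: bool = True,
        fallback_model: str = "gpt-4o",
    ) -> None:
        self.model_list = model_list or {}
        self.default_model = default_model
        self.robusta_ai_enabled = robusta_ai_enabled
        self.fallback_model = fallback_model

    def select_model_name(self, model_key: Optional[str] = None) -> str:
        if model_key and model_key in self.model_list:
            logging.debug("Using model '%s' from model_list", model_key)
            return model_key
        if self.default_model:
            return self.default_model
        if self.model_list:
            # No explicit choice: fall back to the first configured model.
            return next(iter(self.model_list))
        if self.robusta_ai_enabled:
            return self.fallback_model
        raise ValueError("No model configured and Robusta AI is disabled")


print(LLMSelector().select_model_name())  # -> gpt-4o (fallback path)
print(LLMSelector(model_list={"azure/gpt-4o": {}}).select_model_name())
```

With no configuration at all, selection falls through to the Robusta AI fallback model, which matches the "use Robusta AI by default" objective of the PR.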

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Server
    participant Config
    participant DAL as SupabaseDal
    participant LLMSelector

    Client->>Server: API Request (e.g., /api/chat)
    Server->>Config: load_robusta_api_key(dal)
    alt ROBUSTA_AI enabled and no API key/model configured
        Config->>DAL: get_ai_credentials()
        DAL-->>Config: Returns credentials
        Config-->>Server: Sets api_key
    else Not needed
        Config-->>Server: No action
    end
    Server->>Config: _get_llm()
    Config->>LLMSelector: select_llm(model_key)
    LLMSelector-->>Config: Returns LLM instance
    Config-->>Server: Returns LLM instance
    Server-->>Client: Response

Suggested reviewers

  • moshemorad
  • nherment

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🧹 Nitpick comments (2)
holmes/config.py (2)

38-47: Consolidate duplicate imports from holmes.common.env_vars

Two consecutive from holmes.common.env_vars import … statements import from the same module. Merging them improves readability and avoids accidental import-order bugs.

-from holmes.common.env_vars import (
-    ROBUSTA_CONFIG_PATH,
-    ROBUSTA_AI,
-    ROBUSTA_API_ENDPOINT,
-    ROBUSTA_AI_MODEL_NAME_BACKOFF,
-)
-from holmes.common.env_vars import load_bool
+from holmes.common.env_vars import (
+    ROBUSTA_CONFIG_PATH,
+    ROBUSTA_AI,
+    ROBUSTA_API_ENDPOINT,
+    ROBUSTA_AI_MODEL_NAME_BACKOFF,
+    load_bool,
+)

699-707: Simplify and widen load_robusta_api_key trigger conditions

  1. You already import the constant ROBUSTA_AI; using load_bool("ROBUSTA_AI", True) introduces a second source of truth that could diverge if the env-var is mutated after module import.
  2. The guard and not self.model prevents credential loading when a Robusta-specific model name is explicitly set via --model, which is a valid use-case.
-        if (
-            load_bool("ROBUSTA_AI", True)
-            and not self.api_key
-            and not self.model
-            and not self._model_list
-        ):
+        if ROBUSTA_AI and not self.api_key:

This keeps the behaviour intuitive: “If Robusta support is enabled and no key is set, fetch one.”
You can still fine-tune the predicate if additional constraints are required.
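Under that simplified guard, the whole method might look roughly like this; the (account_id, token) shape returned by get_ai_credentials and the stub DAL are assumptions for illustration:

```python
ROBUSTA_AI = True  # stand-in for the constant imported from holmes.common.env_vars


class Config:
    def __init__(self) -> None:
        self.api_key = None

    def load_robusta_api_key(self, dal) -> None:
        """Fetch Robusta AI credentials if enabled and no key is configured."""
        if ROBUSTA_AI and not self.api_key:
            # get_ai_credentials() is referenced elsewhere in this review;
            # the (account_id, token) return shape is assumed here.
            account_id, token = dal.get_ai_credentials()
            self.api_key = f"{account_id} {token}"


class FakeDal:  # minimal stub standing in for SupabaseDal
    def get_ai_credentials(self):
        return ("acct-123", "tok-456")


config = Config()
config.load_robusta_api_key(FakeDal())
print(config.api_key)  # -> acct-123 tok-456
```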

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3ff1012 and eb9ec69.

📒 Files selected for processing (6)
  • holmes/clients/robusta_client.py (2 hunks)
  • holmes/common/env_vars.py (2 hunks)
  • holmes/config.py (4 hunks)
  • holmes/core/investigation.py (2 hunks)
  • holmes/utils/robusta.py (0 hunks)
  • server.py (5 hunks)
💤 Files with no reviewable changes (1)
  • holmes/utils/robusta.py
🧰 Additional context used
🧬 Code Graph Analysis (3)
holmes/core/investigation.py (1)
holmes/config.py (1)
  • load_robusta_api_key (699-707)
server.py (1)
holmes/config.py (1)
  • load_robusta_api_key (699-707)
holmes/config.py (4)
holmes/utils/definitions.py (1)
  • RobustaConfig (11-13)
holmes/common/env_vars.py (1)
  • load_bool (5-7)
holmes/core/llm.py (1)
  • DefaultLLM (61-250)
holmes/core/supabase_dal.py (1)
  • get_ai_credentials (411-423)
🪛 GitHub Actions: Build and test HolmesGPT
holmes/common/env_vars.py

[error] 42-42: Ruff-format hook reformatted this file. Pre-commit hook made changes indicating formatting issues. Run 'ruff --fix' or 'pre-commit run --all-files' locally to fix.

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: build (3.12)
🔇 Additional comments (13)
holmes/common/env_vars.py (2)

25-25: Enabling Robusta AI by default

The default for ROBUSTA_AI environment variable has been changed from False to True, making Robusta AI enabled by default. This aligns with the PR objective to use Robusta AI by default.


42-44: Added fallback model configuration

The new environment variable ROBUSTA_AI_MODEL_NAME_BACKOFF with default value "gpt-4o" provides a fallback model for Robusta AI. This is good practice to ensure the system always has a reliable model to fall back to if the preferred model is unavailable.
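For reference, load_bool (lines 5-7 of env_vars.py per the code-graph section) presumably parses an environment variable into a boolean. This sketch of the two changed settings assumes a common truthy-string parsing rule, which may differ from the project's actual implementation:

```python
import os


def load_bool(env_var: str, default: bool) -> bool:
    # Parsing rule assumed for illustration; the project's actual
    # implementation may differ.
    value = os.environ.get(env_var)
    if value is None:
        return default
    return value.strip().lower() in ("true", "1", "yes")


# The two settings changed in this file, with their new defaults:
ROBUSTA_AI = load_bool("ROBUSTA_AI", default=True)  # was False
ROBUSTA_AI_MODEL_NAME_BACKOFF = os.environ.get(
    "ROBUSTA_AI_MODEL_NAME_BACKOFF", "gpt-4o"
)
```

Passing the default as a keyword argument, as shown, also sidesteps the Ruff FBT003 warning raised later in this review.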

🧰 Tools
🪛 GitHub Actions: Build and test HolmesGPT

[error] 42-42: Ruff-format hook reformatted this file. Pre-commit hook made changes indicating formatting issues. Run 'ruff --fix' or 'pre-commit run --all-files' locally to fix.

holmes/core/investigation.py (2)

27-27: Replaced standalone function with Config method

Changed from using the standalone load_robusta_api_key() utility function to using the method config.load_robusta_api_key(dal=dal) directly on the Config object. This refactor improves encapsulation by making the Config class responsible for its own API key loading.


75-75: Replaced standalone function with Config method

Similar to line 27, the code now uses the config.load_robusta_api_key(dal=dal) method instead of a standalone utility function. This provides consistency throughout the codebase.

server.py (5)

179-179: Replaced standalone function with Config method

Changed from using the standalone load_robusta_api_key() utility function to using the method config.load_robusta_api_key(dal=dal) directly on the Config object in the workload_health_check endpoint. This maintains consistency with the refactoring approach.


237-237: Replaced standalone function with Config method

The code now uses config.load_robusta_api_key(dal=dal) in the workload_health_conversation endpoint, continuing the pattern of encapsulating API key loading within the Config class.


260-260: Replaced standalone function with Config method

Similarly, updated the deprecated issue_conversation_deprecated endpoint to use the Config class method for loading the Robusta API key.


278-278: Replaced standalone function with Config method

Updated the issue_conversation endpoint to use the Config class method for API key loading, maintaining consistency throughout the API handlers.


312-312: Replaced standalone function with Config method

The chat endpoint has been updated to use config.load_robusta_api_key(dal=dal), completing the refactoring pattern across all relevant API endpoints.

holmes/clients/robusta_client.py (3)

2-2: Added required imports for new functionality

Added imports for datetime and ROBUSTA_AI_MODEL_NAME_BACKOFF to support the new timestamp feature and model fallback field.

Also applies to: 6-6


15-16: Enhanced HolmesInfo model with additional fields

Added two new fields to the HolmesInfo model:

  • robusta_ai_model_name: Stores the fallback model name with a default value from environment variables
  • last_updated_at: Tracks when the Holmes info was last updated

These fields improve the model selection logic and provide important metadata for operations.


24-24: Added timestamp to fetched Holmes info

The function now adds the current timestamp in ISO format to the result before constructing the HolmesInfo instance. This provides valuable metadata about when the information was retrieved.
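Taken together, the two changes above might look like the following sketch. The real HolmesInfo is presumably a pydantic model; a dataclass is used here only to keep the example dependency-free, and the defaults are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

ROBUSTA_AI_MODEL_NAME_BACKOFF = "gpt-4o"  # stand-in for the env-var constant


@dataclass
class HolmesInfo:
    # Field names come from the review; the real model is presumably a
    # pydantic BaseModel, and these defaults are assumptions.
    robusta_ai_model_name: Optional[str] = ROBUSTA_AI_MODEL_NAME_BACKOFF
    last_updated_at: Optional[str] = None


# Mirroring the fetch logic described above: stamp the result with an
# ISO-format timestamp before constructing the instance.
info = HolmesInfo(last_updated_at=datetime.now(timezone.utc).isoformat())
print(info.robusta_ai_model_name)  # -> gpt-4o
```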

holmes/config.py (1)

177-180: robusta_ai_model field appears unused

The new robusta_ai_model attribute is added but is never referenced in the class (selection logic relies on self._holmes_info.robusta_ai_model_name).
Double-check whether this field is required or if it should be removed / wired into _get_llm to prevent dead code and confusion.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 5

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between eb9ec69 and 9f38be2.

📒 Files selected for processing (2)
  • holmes/common/env_vars.py (2 hunks)
  • holmes/config.py (4 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
holmes/common/env_vars.py

25-25: Boolean positional value in function call

(FBT003)


43-43: Trailing comma missing

Add trailing comma

(COM812)

holmes/config.py

663-663: Use dict instead of Dict for type annotation

Replace with dict

(UP006)


666-666: Logging statement uses f-string

(G004)


675-675: Logging statement uses f-string

(G004)


687-691: Avoid specifying long messages outside the exception class

(TRY003)


690-690: Trailing comma missing

Add trailing comma

(COM812)


695-698: Avoid specifying long messages outside the exception class

(TRY003)


697-697: Trailing comma missing

Add trailing comma

(COM812)


701-701: Avoid specifying long messages outside the exception class

(TRY003)


705-705: Missing return type annotation for public function load_robusta_api_key

Add return type annotation: None

(ANN201)


707-707: Boolean positional value in function call

(FBT003)

⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.11)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.9)
🔇 Additional comments (1)
holmes/config.py (1)

177-180: robusta_ai_model field is unused – remove or integrate

The new attribute robusta_ai_model is never referenced in the class.
If it’s meant to influence _get_llm, consider wiring it there; otherwise drop it to avoid dead code.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

♻️ Duplicate comments (3)
holmes/common/env_vars.py (1)

25-25: Use explicit keyword to silence Ruff FBT003 (duplicate reminder)

load_bool("ROBUSTA_AI", True) still passes a bare boolean positional argument, triggering the same Ruff FBT003 warning mentioned in a previous review.
Switch to a keyword argument to fix the lint error and improve readability.

-ROBUSTA_AI = load_bool("ROBUSTA_AI", True)
+ROBUSTA_AI = load_bool("ROBUSTA_AI", default=True)
🧰 Tools
🪛 Ruff (0.8.2)

25-25: Boolean positional value in function call

(FBT003)

holmes/config.py (2)

38-43: Consolidate duplicate imports from the same module (repeat issue)

Lines 38-43 import several names from holmes.common.env_vars, and line 47 imports load_bool from the exact same module.
Merging them into one from … import (…) statement avoids duplication and keeps the import section tidy.

-from holmes.common.env_vars import (
-    ROBUSTA_CONFIG_PATH,
-    ROBUSTA_AI,
-    ROBUSTA_API_ENDPOINT,
-    ROBUSTA_AI_MODEL_NAME_BACKOFF,
-)
-
-from holmes.common.env_vars import load_bool
+from holmes.common.env_vars import (
+    ROBUSTA_CONFIG_PATH,
+    ROBUSTA_AI,
+    ROBUSTA_API_ENDPOINT,
+    ROBUSTA_AI_MODEL_NAME_BACKOFF,
+    load_bool,
+)

Also applies to: 47-47


701-708: 🛠️ Refactor suggestion

Reuse cached ROBUSTA_AI constant & add return-type annotation

The guard re-parses the env var via load_bool, risking divergence if it mutates during runtime, and the method lacks a return-type annotation flagged by Ruff ANN201.

-    def load_robusta_api_key(self, dal: SupabaseDal):
-        if (
-            load_bool("ROBUSTA_AI", True)
+    def load_robusta_api_key(self, dal: SupabaseDal) -> None:
+        if (
+            ROBUSTA_AI
             and not self.api_key

This also removes the FBT003 offence inside the function.

🧰 Tools
🪛 Ruff (0.8.2)

701-701: Missing return type annotation for public function load_robusta_api_key

Add return type annotation: None

(ANN201)


703-703: Boolean positional value in function call

(FBT003)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9f38be2 and f987500.

📒 Files selected for processing (2)
  • holmes/common/env_vars.py (2 hunks)
  • holmes/config.py (4 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
holmes/common/env_vars.py

25-25: Boolean positional value in function call

(FBT003)

holmes/config.py

659-659: Use dict instead of Dict for type annotation

Replace with dict

(UP006)


662-662: Logging statement uses f-string

(G004)


671-671: Logging statement uses f-string

(G004)


683-687: Avoid specifying long messages outside the exception class

(TRY003)


686-686: Trailing comma missing

Add trailing comma

(COM812)


691-694: Avoid specifying long messages outside the exception class

(TRY003)


693-693: Trailing comma missing

Add trailing comma

(COM812)


697-697: Avoid specifying long messages outside the exception class

(TRY003)


701-701: Missing return type annotation for public function load_robusta_api_key

Add return type annotation: None

(ANN201)


703-703: Boolean positional value in function call

(FBT003)

⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.11)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.9)

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
holmes/common/env_vars.py (1)

25-25: Prefer keyword argument to silence FBT003 warning

load_bool("ROBUSTA_AI", True) passes a bare boolean positional argument, triggering Ruff's FBT003 warning.
Using an explicit keyword avoids the lint error and improves readability.

-ROBUSTA_AI = load_bool("ROBUSTA_AI", True)
+ROBUSTA_AI = load_bool("ROBUSTA_AI", default=True)
🧰 Tools
🪛 Ruff (0.8.2)

25-25: Boolean positional value in function call

(FBT003)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 07a5fa7 and 0672959.

📒 Files selected for processing (1)
  • holmes/common/env_vars.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
holmes/common/env_vars.py

25-25: Boolean positional value in function call

(FBT003)

⏰ Context from checks skipped due to timeout of 90000ms (9)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.11)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.9)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.11)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.9)
  • GitHub Check: build (3.12)

@itisallgood itisallgood requested a review from nherment May 29, 2025 05:36
@itisallgood itisallgood changed the title Use Robusta AI by default Use Robusta AI by default (DO NOT MERGE YET) May 29, 2025
@nherment (Collaborator) left a comment

Some comments/questions.
Also, it feels like there is quite a bit of documentation missing for how holmes' LLM(s) can be configured.

holmes/config.py Outdated
api_key = model_params.pop("api_key", api_key)
model = model_params.pop("model", model)
if not self.api_key:
raise ValueError(
A collaborator commented on this hunk:
This seems like a subpar API. Why not call load_robusta_api_key() here?

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
holmes/config.py (2)

631-631: Use lazy % formatting in logging functions

The f-string usage in logging statements was flagged in previous reviews. These should use lazy formatting to avoid string construction when logging is disabled.

-logging.debug(f"Using model '{model_key}' from model_list")
+logging.debug("Using model '%s' from model_list", model_key)

-logging.debug(f"Using first model from model_list: '{first_model_key}'")
+logging.debug("Using first model from model_list: '%s'", first_model_key)

Also applies to: 640-640
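The performance point can be demonstrated directly: an f-string formats its arguments eagerly even when the log level is disabled, while %-style arguments are only formatted if the record is actually emitted. Expensive is a throwaway class invented for this demo:

```python
import logging

log = logging.getLogger("demo")
log.setLevel(logging.INFO)  # DEBUG records are discarded

format_calls = []


class Expensive:
    def __str__(self) -> str:
        format_calls.append(1)  # record every time formatting happens
        return "value"


log.debug(f"model = {Expensive()}")   # f-string formats eagerly: 1 call
log.debug("model = %s", Expensive())  # lazy %: never formatted, 0 calls
print(len(format_calls))  # -> 1
```

Only the f-string line pays the formatting cost, which is exactly what the Pylint W1203 / lazy-formatting suggestion is about.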

🧰 Tools
🪛 Pylint (3.3.7)

[warning] 631-631: Use lazy % formatting in logging functions

(W1203)


670-678: Address issues flagged in previous reviews

Two issues from previous reviews still need to be addressed:

  1. Missing return type annotation: Add -> None to satisfy the linter and self-document intent.
  2. Reuse cached constant: Use the already-imported ROBUSTA_AI constant instead of re-evaluating via load_bool.
-def load_robusta_api_key(self, dal: SupabaseDal):
+def load_robusta_api_key(self, dal: SupabaseDal) -> None:
    if (
-        load_bool("ROBUSTA_AI", True)
+        ROBUSTA_AI
        and not self.api_key
🧰 Tools
🪛 Pylint (3.3.7)

[convention] 670-670: Missing function or method docstring

(C0116)

🧹 Nitpick comments (2)
holmes/config.py (2)

49-49: Fix import ordering

The enum.Enum import should be placed before third-party imports according to PEP 8 conventions.

from holmes.core.tools import YAMLToolset
+from enum import Enum
from holmes.common.env_vars import (
    ROBUSTA_CONFIG_PATH,
    ROBUSTA_AI,
    ROBUSTA_API_ENDPOINT,
    ROBUSTA_AI_MODEL_NAME_BACKOFF,
)
from holmes.utils.definitions import RobustaConfig
-from enum import Enum
from holmes.common.env_vars import load_bool
🧰 Tools
🪛 Pylint (3.3.7)

[convention] 49-49: standard import "enum.Enum" should be placed before third party imports "yaml", "pydantic.FilePath", "pydash.arrays.concat", "pydantic.ValidationError" and first party imports "holmes.config_utils.replace_env_vars_values", "holmes.utils.file_utils.load_yaml_file", "holmes.get_version" (...) "holmes.core.tools.YAMLToolset", "holmes.common.env_vars.ROBUSTA_CONFIG_PATH", "holmes.utils.definitions.RobustaConfig"

(C0411)


670-670: Add docstring for the new method

The load_robusta_api_key method is missing a docstring to explain its purpose and usage.

def load_robusta_api_key(self, dal: SupabaseDal) -> None:
+    """
+    Load Robusta AI API key from Supabase when ROBUSTA_AI is enabled
+    and no API key or model is configured.
+    
+    Args:
+        dal: SupabaseDal instance to retrieve AI credentials from
+    """
    if (
🧰 Tools
🪛 Pylint (3.3.7)

[convention] 670-670: Missing function or method docstring

(C0116)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ad96be8 and 188fa76.

📒 Files selected for processing (1)
  • holmes/config.py (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
holmes/config.py (3)
holmes/common/env_vars.py (1)
  • load_bool (5-7)
holmes/core/llm.py (1)
  • DefaultLLM (61-250)
holmes/core/supabase_dal.py (1)
  • get_ai_credentials (417-429)
🪛 Pylint (3.3.7)
holmes/config.py

[convention] 49-49: standard import "enum.Enum" should be placed before third party imports "yaml", "pydantic.FilePath", "pydash.arrays.concat", "pydantic.ValidationError" and first party imports "holmes.config_utils.replace_env_vars_values", "holmes.utils.file_utils.load_yaml_file", "holmes.get_version" (...) "holmes.core.tools.YAMLToolset", "holmes.common.env_vars.ROBUSTA_CONFIG_PATH", "holmes.utils.definitions.RobustaConfig"

(C0411)


[warning] 631-631: Use lazy % formatting in logging functions

(W1203)


[warning] 640-640: Use lazy % formatting in logging functions

(W1203)


[convention] 670-670: Missing function or method docstring

(C0116)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: build (3.12)
🔇 Additional comments (2)
holmes/config.py (2)

42-47: Import consolidation looks good!

This addresses the previous review comment about consolidating duplicate imports from the same module. The imports are now properly organized.


145-145: Excellent fix for the fallback logic!

Changing the default from "gpt-4o" to None addresses the previous issue where the Robusta AI fallback was dead code. This allows the model selection logic in _get_llm to work as intended.

@itisallgood itisallgood changed the title Use Robusta AI by default (DO NOT MERGE YET) Use Robusta AI by default Jun 5, 2025
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
holmes/config.py (2)

42-47: Consolidate duplicate imports from the same module

The imports from holmes.common.env_vars are still split across two separate statements (lines 42-47 and 50), which was previously flagged for consolidation.

Also applies to: 50-50


670-678: Reuse cached ROBUSTA_AI constant & add return-type annotation

This method still uses load_bool("ROBUSTA_AI", True) instead of the imported ROBUSTA_AI constant and is missing the return type annotation, which were both flagged in previous reviews.

🧹 Nitpick comments (2)
holmes/config.py (2)

628-628: Use builtin dict type annotation

Replace Dict[str, Any] with dict[str, Any] per PEP 585 standards.

-        selected_params: Dict[str, Any] = {}
+        selected_params: dict[str, Any] = {}

631-631: Avoid eager f-string evaluation in logging statements

F-strings in logging statements are evaluated even when debug logging is disabled, causing unnecessary overhead.

-            logging.debug(f"Using model '{model_key}' from model_list")
+            logging.debug("Using model '%s' from model_list", model_key)

-            logging.debug(f"Using first model from model_list: '{first_model_key}'")
+            logging.debug("Using first model from model_list: '%s'", first_model_key)

Also applies to: 640-640

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 188fa76 and b875ba6.

📒 Files selected for processing (3)
  • holmes/clients/robusta_client.py (2 hunks)
  • holmes/common/env_vars.py (2 hunks)
  • holmes/config.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • holmes/common/env_vars.py
  • holmes/clients/robusta_client.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
holmes/config.py (3)
holmes/common/env_vars.py (1)
  • load_bool (5-7)
holmes/core/llm.py (1)
  • DefaultLLM (61-250)
holmes/core/supabase_dal.py (1)
  • get_ai_credentials (417-429)
⏰ Context from checks skipped due to timeout of 90000ms (7)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.11)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.11)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.12)
🔇 Additional comments (2)
holmes/config.py (2)

145-145: LGTM! Model default change enables Robusta AI fallback

Changing the default from "gpt-4o" to None correctly addresses the previous issue where the Robusta AI fallback was unreachable due to the non-null default value.


651-663: LGTM! Improved error handling for Robusta AI fallback

The error handling correctly addresses the previous concern about API key validation. The ValueError with clear messaging will help users understand the required setup steps.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
holmes/config.py (1)

42-47: Consolidate imports from the same module

The imports from holmes.common.env_vars could be consolidated into a single import statement for better readability, as mentioned in previous reviews.

-from holmes.common.env_vars import (
-    ROBUSTA_CONFIG_PATH,
-)
-from holmes.utils.definitions import RobustaConfig
-from enum import Enum
-from holmes.common.env_vars import load_bool
+from holmes.common.env_vars import (
+    ROBUSTA_CONFIG_PATH,
+    load_bool,
+)
+from holmes.utils.definitions import RobustaConfig
+from enum import Enum
🧹 Nitpick comments (4)
holmes/config.py (1)

630-638: Add return type annotation and consider API key validation

The method is missing a return type annotation as noted in previous reviews. Also consider whether additional validation of the retrieved API key format is needed.

-    def load_robusta_api_key(self, dal: SupabaseDal):
+    def load_robusta_api_key(self, dal: SupabaseDal) -> None:
holmes/llm_selector.py (3)

32-32: Use lazy logging format to avoid performance overhead

F-strings in logging statements are evaluated eagerly even when the log level is disabled, causing unnecessary performance overhead.

-            logging.debug(f"Using model '{model_key}' from model_list.")
+            logging.debug("Using model '%s' from model_list.", model_key)

41-43: Use lazy logging format consistently

Same f-string issue as above.

-            logging.debug(
-                f"Using model '{self.default_model_from_config}' from default_model_from_config."
-            )
+            logging.debug(
+                "Using model '%s' from default_model_from_config.", 
+                self.default_model_from_config
+            )

50-52: Use lazy logging format consistently

Same f-string issue as above.

-            logging.debug(
-                f"No explicit model; defaulting to first in model_list: '{first_key}'"
-            )
+            logging.debug(
+                "No explicit model; defaulting to first in model_list: '%s'", 
+                first_key
+            )
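To make the benefit of the lazy form concrete, here is a small self-contained sketch (the logger name and the `Expensive` class are illustrative, not from the PR):

```python
import logging

class Expensive:
    """Counts how often its string form is computed."""
    calls = 0

    def __str__(self) -> str:
        Expensive.calls += 1
        return "expensive-value"

logging.basicConfig(level=logging.WARNING)  # DEBUG is disabled
log = logging.getLogger("demo")
value = Expensive()

# Lazy %-style: arguments are stored on the log record and only formatted
# if the record is actually emitted, so __str__ never runs here.
log.debug("Using model '%s' from model_list.", value)
assert Expensive.calls == 0

# Eager f-string: the string is built before logging even checks the level.
log.debug(f"Using model '{value}' from model_list.")
assert Expensive.calls == 1
```

The overhead per call is tiny, but in hot paths (or with costly `__str__`/`__repr__` implementations) the eager form adds up.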
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b875ba6 and 1a31372.

📒 Files selected for processing (3)
  • holmes/config.py (4 hunks)
  • holmes/llm_selector.py (1 hunks)
  • tests/test_llm_selector.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
holmes/llm_selector.py (1)
holmes/core/llm.py (2)
  • DefaultLLM (61-250)
  • LLM (30-58)
holmes/config.py (4)
holmes/utils/definitions.py (1)
  • RobustaConfig (11-13)
holmes/common/env_vars.py (1)
  • load_bool (5-7)
holmes/llm_selector.py (2)
  • LLMSelector (12-93)
  • select_llm (25-93)
holmes/core/supabase_dal.py (1)
  • get_ai_credentials (417-429)
🪛 Pylint (3.3.7)
holmes/llm_selector.py

[refactor] 12-12: Too few public methods (1/2)

(R0903)

tests/test_llm_selector.py

[error] 119-119: Instance of 'DefaultLLM' has no 'model_name' member

(E1101)


[error] 121-121: Instance of 'DefaultLLM' has no 'params' member

(E1101)


[error] 138-138: Instance of 'DefaultLLM' has no 'model_name' member

(E1101)


[error] 140-140: Instance of 'DefaultLLM' has no 'params' member

(E1101)


[error] 156-156: Instance of 'DefaultLLM' has no 'model_name' member

(E1101)


[error] 158-158: Instance of 'DefaultLLM' has no 'params' member

(E1101)


[error] 183-183: Instance of 'DefaultLLM' has no 'model_name' member

(E1101)


[error] 185-185: Instance of 'DefaultLLM' has no 'params' member

(E1101)


[error] 257-257: Instance of 'DefaultLLM' has no 'model_name' member

(E1101)


[error] 259-259: Instance of 'DefaultLLM' has no 'params' member

(E1101)


[error] 275-275: Instance of 'DefaultLLM' has no 'model_name' member

(E1101)


[error] 277-277: Instance of 'DefaultLLM' has no 'params' member

(E1101)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: build (3.12)
🔇 Additional comments (6)
holmes/config.py (2)

143-143: Good change: Model default to None enables proper fallback logic

Changing the default from "gpt-4o" to None correctly addresses the previous concern about the Robusta AI fallback being unreachable due to the non-null default.


622-628: Clean refactoring: LLM selection logic properly delegated

The refactoring successfully moves the complex LLM selection logic to the dedicated LLMSelector class, making this method much cleaner and addressing the architectural concern from previous reviews.

holmes/llm_selector.py (2)

12-24: Well-structured class with clear responsibilities

The LLMSelector class properly encapsulates model selection logic and has a clear, focused responsibility. The pylint warning about too few public methods is acceptable here since this class has a single, well-defined purpose.


63-67: Good error handling for missing API key

The validation properly addresses the previous concern about API key availability when falling back to Robusta AI.
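Read together, the selection branches reviewed above amount to a precedence order that can be sketched as follows (function name, error messages, and the fallback constant are hypothetical; the real logic lives in `LLMSelector.select_llm`):

```python
from typing import Any, Optional

ROBUSTA_AI_MODEL_NAME = "gpt-4o"  # assumed fallback constant

def select_model_name(
    model_key: Optional[str],
    model_list: dict[str, dict[str, Any]],
    default_model: Optional[str],
    robusta_ai_enabled: bool,
    api_key: Optional[str],
) -> str:
    """Condensed sketch of the selection precedence under review."""
    if model_key and model_key in model_list:
        return model_key                   # 1. explicit key from model_list
    if default_model:
        return default_model               # 2. default model from config
    if model_list:
        return next(iter(model_list))      # 3. first entry in model_list
    if robusta_ai_enabled:                 # 4. Robusta AI fallback
        if not api_key:
            raise ValueError("Robusta AI enabled but no API key is configured")
        return ROBUSTA_AI_MODEL_NAME
    raise ValueError("No model configured")
```

Under these assumptions, `select_model_name(None, {}, None, True, "key")` falls through every explicit option and lands on the Robusta AI branch, which is exactly the fallback the tests exercise.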

tests/test_llm_selector.py (2)

1-293: Comprehensive test coverage with excellent structure

The test suite provides thorough coverage of all LLMSelector decision branches and edge cases. The use of fixtures and mocking is well-implemented, and the tests validate the entire selection logic flow.

The pylint errors about missing model_name and params attributes are false positives - these attributes exist on the MockLLM class used in the patched tests.


169-186: Creative test approach for attribute fallback behavior

The test for the getattr fallback when robusta_ai_model_name is missing demonstrates good testing practices by properly mocking the builtin getattr function to simulate the fallback scenario.
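The fallback being exercised is just `getattr`'s three-argument form; a minimal illustration (the class and the fallback constant are invented for the example):

```python
FALLBACK_MODEL = "gpt-4o"  # stand-in for the module-level fallback constant

class HolmesInfo:
    """Stand-in for the Holmes info object returned by the client."""
    def __init__(self, **attrs: str) -> None:
        self.__dict__.update(attrs)

# Attribute present: getattr returns its value.
info = HolmesInfo(robusta_ai_model_name="custom")
assert getattr(info, "robusta_ai_model_name", FALLBACK_MODEL) == "custom"

# Attribute missing: getattr returns the third argument, which is the
# code path the test simulates by mocking the builtin.
assert getattr(HolmesInfo(), "robusta_ai_model_name", FALLBACK_MODEL) == "gpt-4o"
```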


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
holmes/config.py (1)

42-47: Consolidate duplicate imports from the same module.

The imports from holmes.common.env_vars are split across two separate import statements (lines 42-44 and 47), which was flagged in previous reviews.

-from holmes.common.env_vars import (
-    ROBUSTA_CONFIG_PATH,
-)
-from holmes.utils.definitions import RobustaConfig
-from enum import Enum
-from holmes.common.env_vars import load_bool
+from holmes.common.env_vars import (
+    ROBUSTA_CONFIG_PATH,
+    load_bool,
+)
+from holmes.utils.definitions import RobustaConfig
+from enum import Enum
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1a31372 and 82011e4.

📒 Files selected for processing (1)
  • holmes/config.py (4 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
holmes/config.py (4)
holmes/utils/definitions.py (1)
  • RobustaConfig (11-13)
holmes/common/env_vars.py (1)
  • load_bool (5-7)
holmes/llm_selector.py (2)
  • LLMSelector (12-93)
  • select_llm (25-93)
holmes/core/supabase_dal.py (1)
  • get_ai_credentials (417-429)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: build (3.12)
🔇 Additional comments (3)
holmes/config.py (3)

11-11: Good refactoring: Import cleanup aligns with architectural changes.

Removing the DefaultLLM import is consistent with delegating LLM instantiation to the LLMSelector class.


143-143: Excellent fix: Enables proper fallback behavior.

Changing the default from "gpt-4o" to None addresses the previous issue where the Robusta AI fallback was effectively dead code. This allows the LLMSelector to properly implement the intended precedence order.


623-629: Excellent architectural improvement: LLM logic properly separated.

The refactoring successfully addresses the previous feedback about moving LLM selection logic to its own module. The delegation to LLMSelector is clean and the parameters are appropriate.

Comment on lines +631 to +639
def load_robusta_api_key(self, dal: SupabaseDal):
    if (
        load_bool("ROBUSTA_AI", True)
        and not self.api_key
        and not self.model
        and not self._model_list
    ):
        account_id, token = dal.get_ai_credentials()
        self.api_key = SecretStr(f"{account_id} {token}")

🛠️ Refactor suggestion

Address return type annotation and improve the conditional logic.

The method has several issues that need attention:

  1. Missing return type annotation (flagged in previous reviews)
  2. Should use imported ROBUSTA_AI constant instead of re-evaluating via load_bool
  3. The condition not self._model_list might be too restrictive - users might have a model list but still want to use Robusta AI
-    def load_robusta_api_key(self, dal: SupabaseDal):
+    def load_robusta_api_key(self, dal: SupabaseDal) -> None:
        if (
-            load_bool("ROBUSTA_AI", True)
+            # Note: Need to import ROBUSTA_AI constant first
+            ROBUSTA_AI
             and not self.api_key
-            and not self.model
-            and not self._model_list
         ):
             account_id, token = dal.get_ai_credentials()
             self.api_key = SecretStr(f"{account_id} {token}")

Note: You'll need to add ROBUSTA_AI to the imports from holmes.common.env_vars to use the constant.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
def load_robusta_api_key(self, dal: SupabaseDal):
    if (
        load_bool("ROBUSTA_AI", True)
        and not self.api_key
        and not self.model
        and not self._model_list
    ):
        account_id, token = dal.get_ai_credentials()
        self.api_key = SecretStr(f"{account_id} {token}")
def load_robusta_api_key(self, dal: SupabaseDal) -> None:
    if (
        # Note: Need to import ROBUSTA_AI constant first
        ROBUSTA_AI
        and not self.api_key
    ):
        account_id, token = dal.get_ai_credentials()
        self.api_key = SecretStr(f"{account_id} {token}")
🤖 Prompt for AI Agents
In holmes/config.py around lines 631 to 639, add a return type annotation to the
load_robusta_api_key method, such as -> None. Replace the
load_bool("ROBUSTA_AI", True) call with the imported ROBUSTA_AI constant from
holmes.common.env_vars. Modify the conditional logic by removing the check for
not self._model_list to avoid being too restrictive, allowing the method to
proceed even if a model list exists. Also, ensure ROBUSTA_AI is imported
properly at the top of the file.
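One detail both versions share: wrapping the credentials in pydantic's `SecretStr` keeps them out of logs and reprs. A quick illustration (requires pydantic; the credential values are made up):

```python
from pydantic import SecretStr

account_id, token = "acme-123", "tok-456"   # made-up credentials
api_key = SecretStr(f"{account_id} {token}")

# The string form is masked, so an accidental log line does not leak the key.
print(api_key)
# The real value is only available on an explicit call.
print(api_key.get_secret_value())
```

This is why the refactor only changes how the key is obtained, not how it is stored.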

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/test_llm_selector.py (1)

26-35: Consider simplifying the completion method signature for testing.

While the current signature matches the real LLM interface, for testing purposes you could simplify this by using **kwargs to reduce the argument count flagged by pylint.

    def completion(
-        self,
-        messages: list[dict[str, Any]],
-        tools: Optional[list[dict[str, Any]]] = None,
-        tool_choice: Optional[str | dict[Any, Any]] = None,
-        response_format: Optional[dict[Any, Any] | type[BaseModel]] = None,
-        temperature: Optional[float] = None,
-        drop_params: Optional[bool] = None,
-        stream: Optional[bool] = None,
+        self,
+        messages: list[dict[str, Any]],
+        **kwargs: Any,
    ) -> ModelResponse | CustomStreamWrapper:
🧰 Tools
🪛 Pylint (3.3.7)

[refactor] 26-26: Too many arguments (8/5)

(R0913)


[refactor] 26-26: Too many positional arguments (8/5)

(R0917)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 82011e4 and e454d14.

📒 Files selected for processing (1)
  • tests/test_llm_selector.py (1 hunks)
🧰 Additional context used
🪛 Pylint (3.3.7)
tests/test_llm_selector.py

[refactor] 26-26: Too many arguments (8/5)

(R0913)


[refactor] 26-26: Too many positional arguments (8/5)

(R0917)


[error] 135-135: Instance of 'DefaultLLM' has no 'model_name' member

(E1101)


[error] 137-137: Instance of 'DefaultLLM' has no 'params' member

(E1101)


[error] 154-154: Instance of 'DefaultLLM' has no 'model_name' member

(E1101)


[error] 156-156: Instance of 'DefaultLLM' has no 'params' member

(E1101)


[error] 172-172: Instance of 'DefaultLLM' has no 'model_name' member

(E1101)


[error] 174-174: Instance of 'DefaultLLM' has no 'params' member

(E1101)


[error] 199-199: Instance of 'DefaultLLM' has no 'model_name' member

(E1101)


[error] 201-201: Instance of 'DefaultLLM' has no 'params' member

(E1101)


[error] 273-273: Instance of 'DefaultLLM' has no 'model_name' member

(E1101)


[error] 275-275: Instance of 'DefaultLLM' has no 'params' member

(E1101)


[error] 291-291: Instance of 'DefaultLLM' has no 'model_name' member

(E1101)


[error] 293-293: Instance of 'DefaultLLM' has no 'params' member

(E1101)

⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.11)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.12)
🔇 Additional comments (7)
tests/test_llm_selector.py (7)

15-44: Well-designed mock class with comprehensive interface coverage.

The MockLLM class effectively simulates the real LLM interface for testing purposes. The implementation correctly includes all necessary methods and attributes that the tests need to verify.



47-85: Excellent fixture design for comprehensive test coverage.

The fixtures provide good separation of concerns and reusable test data. The llm_selector_fixture composition pattern makes tests more maintainable.


88-94: Thorough test coverage for explicit model key selection.

This test correctly verifies that when a model key is provided with complete configuration, all values are properly applied to the LLM instance.


159-175: Robust testing of Robusta AI fallback mechanism.

This test effectively validates the fallback behavior when Robusta AI is enabled, ensuring the correct model name and base URL are configured.



177-202: Sophisticated test of fallback hierarchy with custom getattr mocking.

The custom mock_getattr_func is a creative solution to test the scenario where robusta_ai_model_name is missing from the Holmes info object. This ensures the fallback constant is used correctly.



204-217: Essential error handling test for missing API key.

This test ensures that appropriate validation occurs when Robusta AI is enabled but no API key is available, preventing runtime failures.


296-308: Important edge case testing for malformed configuration.

This test validates that the system properly handles configurations where model entries lack required model names, ensuring graceful error handling.


github-actions bot commented Jun 6, 2025

Results of HolmesGPT evals

Test suite Test case Status
ask_holmes 01_how_many_pods ⚠️
ask_holmes 02_what_is_wrong_with_pod
ask_holmes 02_what_is_wrong_with_pod_LOKI
ask_holmes 03_what_is_the_command_to_port_forward
ask_holmes 04_related_k8s_events
ask_holmes 05_image_version
ask_holmes 06_explain_issue
ask_holmes 07_high_latency
ask_holmes 07_high_latency_LOKI
ask_holmes 08_sock_shop_frontend
ask_holmes 09_crashpod
ask_holmes 10_image_pull_backoff
ask_holmes 11_init_containers
ask_holmes 12_job_crashing
ask_holmes 12_job_crashing_CORALOGIX
ask_holmes 12_job_crashing_LOKI ⚠️
ask_holmes 13_pending_node_selector
ask_holmes 14_pending_resources
ask_holmes 15_failed_readiness_probe
ask_holmes 16_failed_no_toolset_found
ask_holmes 17_oom_kill
ask_holmes 18_crash_looping_v2
ask_holmes 19_detect_missing_app_details
ask_holmes 20_long_log_file_search_LOKI
ask_holmes 21_job_fail_curl_no_svc_account ⚠️
ask_holmes 22_high_latency_dbi_down
ask_holmes 23_app_error_in_current_logs
ask_holmes 23_app_error_in_current_logs_LOKI ⚠️
ask_holmes 24_misconfigured_pvc
ask_holmes 25_misconfigured_ingress_class ⚠️
ask_holmes 26_multi_container_logs ⚠️
ask_holmes 27_permissions_error_no_helm_tools
ask_holmes 28_permissions_error_helm_tools_enabled
ask_holmes 29_events_from_alert_manager
ask_holmes 30_basic_promql_graph_cluster_memory
ask_holmes 31_basic_promql_graph_pod_memory
ask_holmes 32_basic_promql_graph_pod_cpu
ask_holmes 33_http_latency_graph
ask_holmes 34_memory_graph
ask_holmes 35_tempo
ask_holmes 36_argocd_find_resource
ask_holmes 37_argocd_wrong_namespace ⚠️
ask_holmes 38_rabbitmq_split_head
ask_holmes 39_failed_toolset
ask_holmes 40_disabled_toolset
ask_holmes 41_setup_argo
investigate 01_oom_kill
investigate 02_crashloop_backoff
investigate 03_cpu_throttling
investigate 04_image_pull_backoff
investigate 05_crashpod
investigate 05_crashpod_LOKI
investigate 06_job_failure
investigate 07_job_syntax_error
investigate 08_memory_pressure
investigate 09_high_latency
investigate 10_KubeDeploymentReplicasMismatch
investigate 11_KubePodCrashLooping
investigate 12_KubePodNotReady
investigate 13_Watchdog
investigate 14_tempo

Legend

  • ✅ the test was successful
  • ⚠️ the test failed but is known to be flaky or known to fail
  • ❌ the test failed and should be fixed before merging the PR

@itisallgood itisallgood requested a review from nherment June 6, 2025 11:36