Experiment Migration Management Command #1721
Merged

Commits (19, all by stephherbers):

- 7478881 initial
- 81a8618 DRY it up
- b813438 clear all fields
- 3cc9ee0 add custom actions
- bbbdc26 clean logic
- 2f03af3 add more to x position for output node
- 91d5945 Merge branch 'main' into smh/exp-pipeline-migration
- b8888b5 refactor pipeline create_default
- a013f22 use _create_pipeline_with_nodes, filter by published vrion and if the…
- 2398aaa pass in FlowNode not dict
- c82bf07 add back in function + lint
- 123ed5e lint
- 4f5256a fix id logic for start and end nodes
- 753975c use iterator on experiments_to_convert
- a7cf808 combine queries
- b28598c have node_id in FlowNodeData match that of FlowNode in migration as d…
- de28dc0 simplify logic
- edfae1c refactor: move experiment info logging to a function so looping is on…
- a021938 check if one exeriment, and then don't use an iterator
apps/experiments/management/commands/migrate_nonpipeline_to_pipeline_experiments.py (223 additions, 0 deletions)

@@ -0,0 +1,223 @@
from uuid import uuid4

from django.core.management.base import BaseCommand
from django.db import transaction
from django.db.models import Q

from apps.experiments.models import Experiment
from apps.pipelines.flow import FlowNode, FlowNodeData
from apps.pipelines.models import Pipeline
from apps.pipelines.nodes.nodes import AssistantNode, LLMResponseWithPrompt
from apps.teams.models import Flag


class Command(BaseCommand):
    help = "Convert assistant and LLM experiments to pipeline experiments"

    def add_arguments(self, parser):
        parser.add_argument(
            "--dry-run",
            action="store_true",
            help="Show what would be converted without making changes",
        )
        parser.add_argument(
            "--team-slug",
            type=str,
            help="Only convert experiments for a specific team (by slug)",
        )
        parser.add_argument(
            "--experiment-id",
            type=int,
            help="Convert only a specific experiment by ID",
        )
        parser.add_argument(
            "--chatbots-flag-only",
            action="store_true",
            help='Only convert experiments for teams that have the "flag_chatbots" feature flag enabled',
        )
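
    # Example invocations (the team slug and experiment ID below are illustrative):
    #   python manage.py migrate_nonpipeline_to_pipeline_experiments --dry-run
    #   python manage.py migrate_nonpipeline_to_pipeline_experiments --team-slug acme
    #   python manage.py migrate_nonpipeline_to_pipeline_experiments --experiment-id 42 --chatbots-flag-only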

    def handle(self, *args, **options):
        dry_run = options["dry_run"]
        team_slug = options.get("team_slug")
        experiment_id = options.get("experiment_id")
        chatbots_flag_only = options["chatbots_flag_only"]

        query = Q(pipeline__isnull=True) & (Q(assistant__isnull=False) | Q(llm_provider__isnull=False))

        if team_slug:
            query &= Q(team__slug=team_slug)

        if chatbots_flag_only:
            chatbots_flag_team_ids = self._get_chatbots_flag_team_ids()
            if not chatbots_flag_team_ids:
                self.stdout.write(self.style.WARNING('No teams found with the "flag_chatbots" feature flag enabled.'))
                return
            query &= Q(team_id__in=chatbots_flag_team_ids)
            self.stdout.write(f"Filtering to teams with 'flag_chatbots' FF ({len(chatbots_flag_team_ids)} teams)")

        if experiment_id:
            query &= Q(id=experiment_id)
            experiment = (
                Experiment.objects.filter(query)
                .select_related("team", "assistant", "llm_provider", "llm_provider_model")
                .first()
            )
            if not experiment:
                self.stdout.write(
                    self.style.WARNING(f"Experiment {experiment_id} not found or does not need migration.")
                )
                return

            if not (experiment.is_default_version or experiment.is_working_version):
                self.stdout.write(
                    self.style.WARNING(
                        f"Experiment {experiment_id} is not a published or unreleased version, "
                        "so it does not require migration."
                    )
                )
                return

            experiments_to_convert = experiment
            experiment_count = 1
        else:
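            # No single experiment requested: select every published (default)
            # version, plus working versions that are not already represented
            # by a published default in the first set.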
            default_experiments = Experiment.objects.filter(query & Q(is_default_version=True))
            default_working_version_ids = default_experiments.exclude(working_version__isnull=True).values_list(
                "working_version_id", flat=True
            )
            working_experiments = Experiment.objects.filter(query & Q(working_version__isnull=True)).exclude(
                id__in=default_working_version_ids
            )
            combined_ids = list(default_experiments.union(working_experiments).values_list("id", flat=True))
            experiments_to_convert = Experiment.objects.filter(id__in=combined_ids).select_related(
                "team", "assistant", "llm_provider", "llm_provider_model"
            )
            experiment_count = experiments_to_convert.count()

        if not experiment_count:
            self.stdout.write(self.style.WARNING("No matching experiments found."))
            return

        self.stdout.write(f"Found {experiment_count} experiments to migrate:")

        if dry_run:
            if experiment_count == 1:
                self._log_experiment_info(experiments_to_convert)
            else:
                for experiment in experiments_to_convert.iterator(20):
                    self._log_experiment_info(experiment)
            self.stdout.write(self.style.WARNING("\nDry run - no changes will be made."))
            return

        confirm = input("\nContinue? (y/N): ")
        if confirm.lower() != "y":
            self.stdout.write("Cancelled.")
            return

        converted_count = 0
        failed_count = 0
        if experiment_count == 1:
            converted_count, failed_count = self._process_experiment(
                experiments_to_convert, converted_count, failed_count
            )
        else:
            for experiment in experiments_to_convert.iterator(20):
                converted_count, failed_count = self._process_experiment(experiment, converted_count, failed_count)
        self.stdout.write(
            self.style.SUCCESS(f"\nMigration complete: {converted_count} succeeded, {failed_count} failed")
        )

    def _process_experiment(self, experiment, converted_count, failed_count):
        self._log_experiment_info(experiment)
        try:
            with transaction.atomic():
                self._convert_experiment(experiment)
            converted_count += 1
            self.stdout.write(self.style.SUCCESS(f"Success: {experiment.name}"))
        except Exception as e:
            failed_count += 1
            self.stdout.write(self.style.ERROR(f"FAILED {experiment.name}: {str(e)}"))
        return converted_count, failed_count

    def _log_experiment_info(self, experiment):
        experiment_type = "Assistant" if experiment.assistant else "LLM" if experiment.llm_provider else "Unknown"
        team_info = f"{experiment.team.name} ({experiment.team.slug})"
        self.stdout.write(f"{experiment.name} (ID: {experiment.id}) - Type: {experiment_type} - Team: {team_info}")

    def _convert_experiment(self, experiment):
        if experiment.assistant:
            pipeline = self._create_assistant_pipeline(experiment)
        elif experiment.llm_provider:
            pipeline = self._create_llm_pipeline(experiment)
        else:
            raise ValueError(f"Unknown experiment type for experiment {experiment.id}")

        experiment.pipeline = pipeline
        experiment.assistant = None
        experiment.llm_provider = None
        experiment.llm_provider_model = None

        experiment.save()

    def _get_chatbots_flag_team_ids(self):
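        # Note: Flag.objects.get raises Flag.DoesNotExist if the "flag_chatbots"
        # flag has never been created; this is only reached when
        # --chatbots-flag-only is passed.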
        chatbots_flag = Flag.objects.get(name="flag_chatbots")
        return list(chatbots_flag.teams.values_list("id", flat=True))

    def _create_pipeline_with_node(self, experiment, node_type, node_label, node_params):
        """Create a pipeline with start -> custom_node -> end structure."""
        pipeline_name = f"{experiment.name} Pipeline"
        node_id = str(uuid4())
        node = FlowNode(
            id=node_id,
            type="pipelineNode",
            position={"x": 400, "y": 200},
            data=FlowNodeData(
                id=node_id,
                type=node_type,
                label=node_label,
                params=node_params,
            ),
        )

        return Pipeline._create_pipeline_with_nodes(team=experiment.team, name=pipeline_name, middle_node=node)

    def _create_llm_pipeline(self, experiment):
        """Create a start -> LLMResponseWithPrompt -> end nodes pipeline for an LLM experiment."""
        llm_params = {
            "name": "llm",
            "llm_provider_id": experiment.llm_provider.id,
            "llm_provider_model_id": experiment.llm_provider_model.id,
            "llm_temperature": experiment.temperature,
            "history_type": "global",
            "history_name": None,
            "history_mode": "summarize",
            "user_max_token_limit": experiment.llm_provider_model.max_token_limit,
            "max_history_length": 10,
            "source_material_id": experiment.source_material.id if experiment.source_material else None,
            "prompt": experiment.prompt_text or "",
            "tools": list(experiment.tools) if experiment.tools else [],
            "custom_actions": [
                op.get_model_id(False)
                for op in experiment.custom_action_operations.select_related("custom_action").all()
            ],
            "built_in_tools": [],
            "tool_config": {},
        }
        return self._create_pipeline_with_node(
            experiment=experiment, node_type=LLMResponseWithPrompt.__name__, node_label="LLM", node_params=llm_params
        )

    def _create_assistant_pipeline(self, experiment):
        """Create a start -> AssistantNode -> end nodes pipeline for an assistant experiment."""
        assistant_params = {
            "name": "assistant",
            "assistant_id": str(experiment.assistant.id),
            "citations_enabled": experiment.citations_enabled,
            "input_formatter": experiment.input_formatter or "",
        }

        return self._create_pipeline_with_node(
            experiment=experiment,
            node_type=AssistantNode.__name__,
            node_label="OpenAI Assistant",
            node_params=assistant_params,
        )
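
A quick way to sanity-check a converted experiment afterwards, from a Django shell (illustrative; the experiment ID is hypothetical, and the assertions follow directly from what _convert_experiment sets and clears):

    # python manage.py shell
    from apps.experiments.models import Experiment

    exp = Experiment.objects.get(id=42)    # hypothetical ID of a converted experiment
    assert exp.pipeline is not None        # pipeline was attached
    assert exp.assistant is None           # assistant reference cleared
    assert exp.llm_provider is None        # LLM provider cleared
    assert exp.llm_provider_model is None  # LLM provider model cleared
    print(exp.pipeline.name)               # "<experiment name> Pipeline"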
Review comment (snopoke, marked as resolved):

It may be that loading all of these into memory at once could be an issue so it would be prudent to use an iterator:
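For example, a minimal sketch of the suggested pattern (matching what the later commits adopt):

    for experiment in experiments_to_convert.iterator(chunk_size=20):
        self._log_experiment_info(experiment)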