Conversation

@shrutipatel31
Contributor

Summary:

This diff updates the `get_trace` function in `ax/service/utils/best_point.py` to support preference learning (BOPE) experiments with `PreferenceOptimizationConfig`.

When a BOPE experiment has an associated PE_EXPERIMENT auxiliary experiment with preference data, `get_trace` now:

  1. Fits a PairwiseGP preference model to the PE_EXPERIMENT data
  2. Uses the learned preference model to predict utility values for each arm's metric values
  3. Returns a trace based on the predicted utilities

If the PE_EXPERIMENT is missing or has no data, the function gracefully falls back to standard hypervolume computation for multi-objective optimization.

Adds a `_compute_utility_from_preference_model()` helper function and corresponding unit tests.
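
For concreteness, here is a minimal sketch of the utility computation described above, written directly against BoTorch's `PairwiseGP` rather than Ax's adapter layer. The tensor-based signature, the argument names, and the running-best trace at the end are assumptions made for the example; they are not the actual `_compute_utility_from_preference_model()` API.

```python
import torch
from botorch.fit import fit_gpytorch_mll
from botorch.models.pairwise_gp import (
    PairwiseGP,
    PairwiseLaplaceMarginalLogLikelihood,
)


def predicted_utilities(
    datapoints: torch.Tensor,   # n x d metric values shown in preference queries
    comparisons: torch.Tensor,  # m x 2 index pairs, preferred point listed first
    arm_metrics: torch.Tensor,  # k x d metric values of the arms to score
) -> torch.Tensor:
    # 1. Fit a PairwiseGP preference model to the PE data.
    model = PairwiseGP(datapoints, comparisons)
    mll = PairwiseLaplaceMarginalLogLikelihood(model.likelihood, model)
    fit_gpytorch_mll(mll)
    # 2. Predict the latent utility of each arm's metric values.
    return model.posterior(arm_metrics).mean.squeeze(-1)


# 3. A trace based on predicted utilities is then the running best value over
#    arms in iteration order, e.g.:
# trace = torch.cummax(predicted_utilities(dp, comp, arm_metrics), dim=0).values
```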

Differential Revision: D91073267

@meta-codesync

meta-codesync bot commented Jan 21, 2026

@shrutipatel31 has exported this pull request. If you are a Meta employee, you can view the originating Diff in D91073267.

@meta-cla meta-cla bot added the CLA Signed label Jan 21, 2026
shrutipatel31 added a commit to shrutipatel31/Ax that referenced this pull request Jan 21, 2026
@codecov-commenter

codecov-commenter commented Jan 21, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 96.74%. Comparing base (0804153) to head (c4e22e0).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4792      +/-   ##
==========================================
+ Coverage   96.71%   96.74%   +0.02%     
==========================================
  Files         586      587       +1     
  Lines       61307    61444     +137     
==========================================
+ Hits        59295    59442     +147     
+ Misses       2012     2002      -10     

☔ View full report in Codecov by Sentry.

shrutipatel31 added a commit to shrutipatel31/Ax that referenced this pull request Jan 22, 2026
shrutipatel31 added a commit to shrutipatel31/Ax that referenced this pull request Jan 22, 2026
shrutipatel31 added a commit to shrutipatel31/Ax that referenced this pull request Jan 22, 2026
shrutipatel31 added a commit to shrutipatel31/Ax that referenced this pull request Jan 22, 2026
…ok#4553)

Summary:

This diff prepares `get_preference_adapter()` and `PreferenceAdapter` for upcoming utility computation changes in `get_trace()` for *BOPE* experiments.

**Problem**: The subsequent diff will call `get_preference_adapter()` from `get_trace()` to compute utility-based traces for *BOPE* experiments. Without these guards, the following issues would occur:

*Empty data crashes*: If `get_trace()` is called on a *BOPE* experiment before any preference comparisons are collected, `get_preference_adapter()` would attempt to fit a *PairwiseGP* model on empty data, causing cryptic model fitting errors.

*Early iteration failures*: During early *BOPE* experiment iterations (before users provide preference feedback), `PreferenceAdapter.gen()` would fail when trying to update the preference model with no data.

**Changes**:

- `get_preference_adapter()` now raises `DataRequiredError` with a clear message when preference data is empty.
- `PreferenceAdapter.gen()` skips preference model updates when `pe_data.df` is empty, allowing early iterations to proceed.
- Adds `fit_tracking_metrics=False` to ensure the adapter only fits the *PairwiseGP* on preference labels (*PAIRWISE_PREFERENCE_QUERY*). Without this, the adapter would also try to fit surrogate models for the outcome metrics (e.g., m1, m2), which exist in the *PE* experiment's search space as parameters but should not be modeled as outcomes. This requires an `optimization_config` that specifies which metrics to use.
- Registers the preference metric on the experiment if not already present, which is needed when *PE* experiments are loaded from storage without the metric registered.

Differential Revision: D87347126
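
As a rough illustration only (the real `get_preference_adapter()` in Ax has a different signature and considerably more setup), the empty-data guard described above amounts to something like:

```python
from ax.exceptions.core import DataRequiredError


def get_preference_adapter(pe_experiment):
    # Look up the PE experiment's preference data before building the adapter.
    pe_data = pe_experiment.lookup_data()
    if pe_data.df.empty:
        # Fail with a clear message instead of letting PairwiseGP error out
        # while fitting on empty data.
        raise DataRequiredError(
            "Cannot build a preference adapter: the PE experiment has no "
            "preference comparisons yet."
        )
    # ... construct the adapter with fit_tracking_metrics=False so only the
    # PAIRWISE_PREFERENCE_QUERY metric is modeled ...
    ...
```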