
Conversation

demoncoder-crypto

Description

This pull request implements the R-Precision evaluation metric for PySpark within the SparkRankingEvaluation class.

This change is required to provide a standard ranking evaluation metric for recommendation models evaluated using PySpark, addressing feature request #2087 and achieving parity with potential implementations for other frameworks (related to #2086). It allows users to measure the fraction of relevant items within the top R recommendations, where R is the total number of relevant items for a user.
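On toy data, the metric described above can be sketched in plain Python (a hedged illustration with hypothetical data; the PR itself implements the same logic with PySpark DataFrames and window functions):

```python
# Sketch of R-Precision on toy data. This is illustrative only; the PR
# computes it distributedly inside SparkRankingEvaluation.
def r_precision(relevant, ranked_preds):
    """Fraction of relevant items in the top R predictions, where R = len(relevant)."""
    r = len(relevant)
    if r == 0:
        return 0.0
    top_r = ranked_preds[:r]
    return sum(1 for item in top_r if item in relevant) / r

# A user with 3 relevant items; 2 of them appear in the top 3 predictions.
relevant = {"A", "B", "C"}
preds = ["B", "D", "A", "C", "E"]  # already sorted by predicted score
print(r_precision(relevant, preds))  # 2 of top 3 relevant -> 0.666...
```

Because the cutoff equals the number of relevant items, a perfect ranking always scores 1.0, which is the property the unit test below relies on.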

Related Issues

References

  • R-Precision is a standard metric for evaluating ranked retrieval results.

Checklist:

  • I have followed the contribution guidelines and code style for this project.
  • I have added tests covering my contributions.
  • I have updated the documentation accordingly. (Note: Docstrings were added, but no higher-level documentation files were modified).
  • I have signed the commits, e.g. git commit -s -m "your commit message". (Note: The commit was not signed).
  • This PR is being made to staging branch AND NOT TO main branch. (Assuming staging is the target development branch for this repository).

Collaborator

@miguelgfierro miguelgfierro left a comment


Thanks for the contribution! Really nice. I have some comments.

Also, could you please send the PR to staging instead of main? https://github.com/recommenders-team/recommenders/blob/main/CONTRIBUTING.md#steps-to-contributing

self.col_rating = col_rating
self.col_prediction = col_prediction
self.threshold = threshold
self.rating_pred_raw = rating_pred # Store raw predictions before processing
Collaborator


self.rating_pred_raw is only used in the R-Precision computation. Either use self.rating_pred directly, or do the data treatment internally in the function.
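In plain-Python terms, "doing the data treatment internally" would look roughly like the sketch below: the function ranks the raw predictions itself instead of the class storing a separate rating_pred_raw attribute. All names here are illustrative, not the repository's actual API:

```python
# Hypothetical sketch: rank raw predictions inside the function so no extra
# self.rating_pred_raw attribute is needed on the evaluator class.
def r_precision_from_raw(raw_predictions, ground_truth):
    """raw_predictions: {user: [(item, score), ...]}; ground_truth: {user: set(items)}."""
    scores = []
    for user, truth in ground_truth.items():
        r = len(truth)
        if r == 0:
            continue
        # Internal "data treatment": sort this user's raw predictions by score.
        ranked = sorted(raw_predictions.get(user, []), key=lambda p: -p[1])
        top_r = [item for item, _ in ranked[:r]]
        scores.append(sum(1 for item in top_r if item in truth) / r)
    return sum(scores) / len(scores) if scores else 0.0

preds = {1: [("A", 0.9), ("B", 0.8), ("C", 0.1)]}
truth = {1: {"A", "C"}}
print(r_precision_from_raw(preds, truth))  # top 2 = [A, B], 1 relevant -> 0.5
```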

@miguelgfierro
Collaborator

@anargyri @SimonYansenZhao @loomlike can you please review?

@demoncoder-crypto
Author

Ok I am on it

@demoncoder-crypto
Author

I have made the changes as requested, @miguelgfierro. I am currently testing the Jupyter notebook and will provide an update on it very soon.

@miguelgfierro
Collaborator

@demoncoder-crypto if you want to add the notebook, I think it would be interesting to extend https://github.com/recommenders-team/recommenders/blob/main/examples/03_evaluate/evaluation.ipynb, since it already covers a large number of metrics. I think it would be valuable to explain what R-precision is and how it differs from the other metrics.

@demoncoder-crypto
Author

Do you want me to implement R-precision in the evaluation.ipynb you mentioned, or do it separately, @miguelgfierro? I will complete the other issue implementation first and then look at the Jupyter notebook you mentioned. Thanks for the support.

@miguelgfierro
Collaborator

I think it would make more sense in evaluation.ipynb, because all the metrics are there; that would make it easier for users to understand the differences.

@demoncoder-crypto
Author

For sure I will do that


@demoncoder-crypto
Author

@miguelgfierro I have implemented the changes as needed; do let me know if they work. I am now pushing the second issue I am working on, and I will send the new PR to staging as requested. Thanks

@miguelgfierro miguelgfierro changed the base branch from main to staging May 3, 2025 07:48
Collaborator

@miguelgfierro miguelgfierro left a comment


I changed the base branch from main to staging. There are some errors in the computation.

Comment on lines +776 to +780
"#### 2.2.8 R-Precision\n",
"\n",
"R-Precision evaluates the fraction of relevant items among the top R recommended items, where R is the total number of *truly* relevant items for a specific user. It's equivalent to Recall@R.\n",
"\n",
"**Difference from Precision@k:** Precision@k measures relevance within a fixed top *k* items, regardless of the total number of relevant items (R). R-Precision adapts the evaluation depth (*R*) based on the user's specific ground truth, making it potentially more user-centric when the number of relevant items varies significantly across users.\n",
Collaborator


I get an error when trying to compute the notebook:

---------------------------------------------------------------------------
AnalysisException                         Traceback (most recent call last)
Cell In[19], line 6
      1 # Note: The spark_rank_eval object was initialized with k=3.
      2 # R-Precision intrinsically uses R (number of relevant items for the user) as the cutoff.
      3 # The 'k' parameter passed during initialization doesn't directly affect R-Precision calculation itself,
      4 # but it might affect how the rating_pred dataframe is pre-processed if relevancy_method relies on k.
      5 # For a direct comparison with other metrics at a fixed k, ensure the underlying data processing is consistent.
----> 6 print(f"The R-Precision is {spark_rank_eval.r_precision()}")

File ~/MS/review/demoncoder-recommenders/recommenders/evaluation/spark_evaluation.py:386, in SparkRankingEvaluation.r_precision(self)
    384 # Rank predictions per user
    385 window_spec = Window.partitionBy(self.col_user).orderBy(F.col(self.col_prediction).desc())
--> 386 ranked_preds = self.rating_pred.select(
    387     self.col_user, self.col_item, self.col_prediction
    388 ).withColumn("rank", F.row_number().over(window_spec))
    390 # Join ranked predictions with R
    391 preds_with_r = ranked_preds.join(
    392     ground_truth_with_R.select(self.col_user, "R"), on=self.col_user
    393 )

File ~/anaconda/envs/recommenders311/lib/python3.11/site-packages/pyspark/sql/dataframe.py:3227, in DataFrame.select(self, *cols)
   3182 def select(self, *cols: "ColumnOrName") -> "DataFrame":  # type: ignore[misc]
   3183     """Projects a set of expressions and returns a new :class:`DataFrame`.
   3184
   3185     .. versionadded:: 1.3.0
   (...)
   3225     +-----+---+
   3226     """
-> 3227     jdf = self._jdf.select(self._jcols(*cols))
   3228     return DataFrame(jdf, self.sparkSession)

File ~/anaconda/envs/recommenders311/lib/python3.11/site-packages/py4j/java_gateway.py:1322, in JavaMember.__call__(self, *args)
   1316 command = proto.CALL_COMMAND_NAME +\
   1317     self.command_header +\
   1318     args_command +\
   1319     proto.END_COMMAND_PART
   1321 answer = self.gateway_client.send_command(command)
-> 1322 return_value = get_return_value(
   1323     answer, self.gateway_client, self.target_id, self.name)
   1325 for temp_arg in temp_args:
   1326     if hasattr(temp_arg, "_detach"):

File ~/anaconda/envs/recommenders311/lib/python3.11/site-packages/pyspark/errors/exceptions/captured.py:185, in capture_sql_exception.<locals>.deco(*a, **kw)
    181 converted = convert_exception(e.java_exception)
    182 if not isinstance(converted, UnknownException):
    183     # Hide where the exception came from that shows a non-Pythonic
    184     # JVM exception message.
--> 185     raise converted from None
    186 else:
    187     raise

AnalysisException: [UNRESOLVED_COLUMN.WITH_SUGGESTION] A column or function parameter with name `MovieId` cannot be resolved. Did you mean one of the following? [`UserId`, `prediction`].;
'Project [UserId#6L, 'MovieId, 'Rating]
+- Aggregate [UserId#6L], [UserId#6L, collect_list(MovieId#7L, 0, 0) AS prediction#237]
   +- Filter (rank#227 <= 3)
      +- Project [UserId#6L, MovieId#7L, Rating#8L, rank#227]
         +- Project [UserId#6L, MovieId#7L, Rating#8L, rank#227, rank#227]
            +- Window [row_number() windowspecdefinition(UserId#6L, Rating#8L DESC NULLS LAST, specifiedwindowframe(RowFrame, unboundedpreceding$(), currentrow$())) AS rank#227], [UserId#6L], [Rating#8L DESC NULLS LAST]
               +- Project [UserId#6L, MovieId#7L, Rating#8L]
                  +- LogicalRDD [UserId#6L, MovieId#7L, Rating#8L], false

assert evaluator.serendipity() == pytest.approx(0.4028, TOL)


@pytest.mark.spark
Collaborator


same with the tests:

$ pytest tests/unit/recommenders/evaluation/test_spark_evaluation.py 
======================================================================== test session starts =========================================================================
platform linux -- Python 3.11.11, pytest-8.2.2, pluggy-1.5.0
rootdir: /home/miguel/MS/review/demoncoder-recommenders
configfile: pyproject.toml
plugins: anyio-4.4.0, cov-5.0.0, typeguard-4.3.0, hypothesis-6.104.2, mock-3.14.0
collected 30 items                                                                                                                                                   

tests/unit/recommenders/evaluation/test_spark_evaluation.py .............................F                                                                     [100%]

============================================================================== FAILURES ==============================================================================
_______________________________________________________________________ test_spark_r_precision _______________________________________________________________________

spark_data = (DataFrame[userID: bigint, itemID: bigint, rating: bigint], DataFrame[userID: bigint, itemID: bigint, prediction: bigint, rating: bigint])

    @pytest.mark.spark
    def test_spark_r_precision(spark_data):
        df_true, df_pred = spark_data
    
        # Test perfect prediction (R-Precision should be 1.0)
        evaluator_perfect = SparkRankingEvaluation(df_true, df_true, col_prediction="rating")
>       assert evaluator_perfect.r_precision() == pytest.approx(1.0, TOL)

tests/unit/recommenders/evaluation/test_spark_evaluation.py:526: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
recommenders/evaluation/spark_evaluation.py:386: in r_precision
    ranked_preds = self.rating_pred.select(
../../../anaconda/envs/recommenders311/lib/python3.11/site-packages/pyspark/sql/dataframe.py:3227: in select
    jdf = self._jdf.select(self._jcols(*cols))
../../../anaconda/envs/recommenders311/lib/python3.11/site-packages/py4j/java_gateway.py:1322: in __call__
    return_value = get_return_value(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

a = ('xro5784', <py4j.clientserver.JavaClient object at 0x7fb0e286f110>, 'o5697', 'select'), kw = {}, converted = AnalysisException()

    def deco(*a: Any, **kw: Any) -> Any:
        try:
            return f(*a, **kw)
        except Py4JJavaError as e:
            converted = convert_exception(e.java_exception)
            if not isinstance(converted, UnknownException):
                # Hide where the exception came from that shows a non-Pythonic
                # JVM exception message.
>               raise converted from None
E               pyspark.errors.exceptions.captured.AnalysisException: [UNRESOLVED_COLUMN.WITH_SUGGESTION] A column or function parameter with name `itemID` cannot be resolved. Did you mean one of the following? [`userID`, `prediction`].;
E               'Project [userID#4984L, 'itemID, 'rating]
E               +- Aggregate [userID#4984L], [userID#4984L, collect_list(itemID#4985L, 0, 0) AS prediction#5009]
E                  +- Filter (rank#4999 <= 10)
E                     +- Project [userID#4984L, itemID#4985L, rating#4986L, rank#4999]
E                        +- Project [userID#4984L, itemID#4985L, rating#4986L, rank#4999, rank#4999]
E                           +- Window [row_number() windowspecdefinition(userID#4984L, rating#4986L DESC NULLS LAST, specifiedwindowframe(RowFrame, unboundedpreceding$(), currentrow$())) AS rank#4999], [userID#4984L], [rating#4986L DESC NULLS LAST]
E                              +- Project [userID#4984L, itemID#4985L, rating#4986L]
E                                 +- LogicalRDD [userID#4984L, itemID#4985L, rating#4986L], false

../../../anaconda/envs/recommenders311/lib/python3.11/site-packages/pyspark/errors/exceptions/captured.py:185: AnalysisException
------------------------------------------------------------------------ Captured stderr call ------------------------------------------------------------------------
                                                                                
========================================================================== warnings summary ==========================================================================
tests/unit/recommenders/evaluation/test_spark_evaluation.py: 164 warnings
  /home/miguel/anaconda/envs/recommenders311/lib/python3.11/site-packages/pyspark/sql/pandas/utils.py:37: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
    if LooseVersion(pandas.__version__) < LooseVersion(minimum_pandas_version):

tests/unit/recommenders/evaluation/test_spark_evaluation.py: 181 warnings
  /home/miguel/anaconda/envs/recommenders311/lib/python3.11/site-packages/pyspark/sql/pandas/conversion.py:485: DeprecationWarning: is_datetime64tz_dtype is deprecated and will be removed in a future version. Check `isinstance(dtype, pd.DatetimeTZDtype)` instead.
    if should_localize and is_datetime64tz_dtype(s.dtype) and s.dt.tz is not None:

tests/unit/recommenders/evaluation/test_spark_evaluation.py: 15 warnings
  /home/miguel/anaconda/envs/recommenders311/lib/python3.11/site-packages/pyspark/sql/context.py:158: FutureWarning: Deprecated in 3.0.0. Use SparkSession.builder.getOrCreate() instead.
    warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
====================================================================== short test summary info =======================================================================
FAILED tests/unit/recommenders/evaluation/test_spark_evaluation.py::test_spark_r_precision - pyspark.errors.exceptions.captured.AnalysisException: [UNRESOLVED_COLUMN.WITH_SUGGESTION] A column or function parameter with name `itemID` cannot be resolved. D...
======================================================= 1 failed, 29 passed, 360 warnings in 247.48s (0:04:07) =======================================================

"output_type": "stream",
"text": [
"\r",
"\r\n",
Collaborator


when you submit the fixed version, could you please run the whole notebook so people can see the results?

Please use Python 3.11, which is the latest version we support.

