
[BUG] Chapter 10: number of output arguments of model.evaluate(...) for multiple outputs #139

Open
thomas-haslwanter opened this issue Jun 16, 2024 · 1 comment


thomas-haslwanter commented Jun 16, 2024

Describe the bug
Cell 81 in "10_neural_nets_with_keras.ipynb" raises an error.

To Reproduce

weighted_sum_of_losses, main_loss, aux_loss, main_rmse, aux_rmse = eval_results

Full stack trace:

ValueError                                Traceback (most recent call last)
Cell In[64], line 2
      1 eval_results = model.evaluate((X_test_wide, X_test_deep), (y_test, y_test))
----> 2 weighted_sum_of_losses, main_loss, aux_loss, main_rmse, aux_rmse = eval_results

ValueError: not enough values to unpack (expected 5, got 3)
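
Note: the traceback shows that with this setup model.evaluate returns 3 scalars instead of the 5 the notebook unpacks. A version-agnostic sketch (an assumption, not code from the notebook): passing return_dict=True makes the result independent of the number and order of returned scalars; inspect the printed dict to see which keys this Keras version reports.

# Sketch: ask evaluate() for a dict instead of a positional list of scalars
eval_results = model.evaluate(
    (X_test_wide, X_test_deep), (y_test, y_test), return_dict=True)
print(eval_results)  # see which losses/metrics this Keras version reports
weighted_sum_of_losses = eval_results["loss"]  # total (weighted) loss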

Expected behavior
No error message

Versions (please complete the following information):

  • OS: Win 11 Education, Build 10.0.22631
  • Python: 3.12.2
  • TensorFlow: 2.16.1

Additional Info
Four cells below, the same error occurs. In addition,
model.compile(loss="mse", loss_weights=[0.9, 0.1], optimizer=optimizer, metrics=["RootMeanSquaredError"])
has to be replaced with
model.compile(loss="mse", loss_weights=[0.9, 0.1], optimizer=optimizer, metrics=("RootMeanSquaredError", "RootMeanSquaredError"))
Otherwise, the following error message appears:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[74], line 7
      5 model.norm_layer_wide.adapt(X_train_wide)
      6 model.norm_layer_deep.adapt(X_train_deep)
----> 7 history = model.fit(
      8     (X_train_wide, X_train_deep), (y_train, y_train), epochs=10,
      9     validation_data=((X_valid_wide, X_valid_deep), (y_valid, y_valid)))
     10 eval_results = model.evaluate((X_test_wide, X_test_deep), (y_test, y_test))
     11 weighted_sum_of_losses, main_loss, aux_loss, main_rmse, aux_rmse = eval_results

File C:\Programs\WPy64-31220\python-3.12.2.amd64\Lib\site-packages\keras\src\utils\traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
    119     filtered_tb = _process_traceback_frames(e.__traceback__)
    120     # To get the full stack trace, call:
    121     # `keras.config.disable_traceback_filtering()`
--> 122     raise e.with_traceback(filtered_tb) from None
    123 finally:
    124     del filtered_tb

File C:\Programs\WPy64-31220\python-3.12.2.amd64\Lib\site-packages\keras\src\trainers\compile_utils.py:250, in CompileMetrics._build_metrics_set(self, metrics, num_outputs, output_names, y_true, y_pred, argument_name)
    248 if isinstance(metrics, (list, tuple)):
    249     if len(metrics) != len(y_pred):
--> 250         raise ValueError(
    251             "For a model with multiple outputs, "
    252             f"when providing the `{argument_name}` argument as a "
    253             "list, it should have as many entries as the model has "
    254             f"outputs. Received:\n{argument_name}={metrics}\nof "
    255             f"length {len(metrics)} whereas the model has "
    256             f"{len(y_pred)} outputs."
    257         )
    258     for idx, (mls, yt, yp) in enumerate(
    259         zip(metrics, y_true, y_pred)
    260     ):
    261         if not isinstance(mls, list):

ValueError: For a model with multiple outputs, when providing the `metrics` argument as a list, it should have as many entries as the model has outputs. Received:
metrics=['RootMeanSquaredError']
of length 1 whereas the model has 2 outputs.
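
A sketch of a compile call that satisfies this check, assuming the same metric should be tracked for both outputs; a nested list (one metric list per output) is equivalent to the tuple replacement shown above:

model.compile(
    loss="mse",
    loss_weights=[0.9, 0.1],
    optimizer=optimizer,
    metrics=[["RootMeanSquaredError"], ["RootMeanSquaredError"]],  # one list per output
)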
thomas-haslwanter changed the title from "[BUG] number of output arguments of model.evaluate(...) for multiple outputs" to "[BUG] Chapter 10: number of output arguments of model.evaluate(...) for multiple outputs" on Jun 17, 2024

ItsMacto commented Oct 8, 2024

I found a fix for the error: it is caused by the Keras version. Google Colab uses Keras 3.4.1, but after some digging I found I had 3.6.0 installed locally, which was causing the bug.

To fix the bug, install Keras 3.4.1:

pip install keras==3.4.1
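
To confirm which Keras version is actually active in the notebook's environment (a quick check, not part of the original comment):

import keras
print(keras.__version__)  # 3.4.1 matches Colab; 3.6.0 reproduces the errors above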
